  • SQL Query aggregation and subqueries


Question and Answer

Connor McDonald

Thanks for the question, Munzer.

Asked: March 17, 2002 - 2:09 pm UTC

Last updated: April 24, 2016 - 6:19 am UTC

Version: 8.1.7

Viewed 100K+ times

You Asked

Tom:

I have a table that stores information about items in a warehouse, stored in different bins. The table looks like this; manual inventories are done every 3 months, and Effective_Date is the sysdate when the record is inserted.

Inventory:

Item_no  Qty  Bin    Effective_Date
AC006     10  DC001  2/1/2002
AC006     20  DC002  2/1/2002

AC006    100  DC001  5/1/2002
AC006     50  DC002  5/2/2002
AC006     30  DC003  5/3/2002
AC006     20  DC008  5/4/2002

I need to calculate two things:

1. Total qty of an item in inventory in a given bin. I basically did this by taking the qty of the item in inventory with the max(effective_date), plus the total received into that bin since that max(effective_date), minus the total shipped from that bin since that max(effective_date).

2. Total qty of an item in inventory. Here I need to look at the highest effective date for an item at each bin. Basically I need to run a SQL statement that gives me 200 (the sum of the last 4 records) as the total for item "AC006". How do you formulate that statement and exclude all the previous inventories for that item that are out of date?

3. Also, say I have the following table. Would it still work where I basically stored the items in different bins?

Inventory:

Item_no  Qty  Bin    Effective_Date
AC006     10  DC001  2/1/2002
AC006     20  DC002  2/1/2002

AC006    100  DC003  5/1/2002
AC006     50  DC004  5/2/2002

The answer here should be 150 and not 180.

Thank you,



and Tom said...

Ok, well

1) that sounds like you are giving me a statement of fact. I don't see a question there, only how you answered your own question.

2) Well, if we start with:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

ITEM_ QTY BIN EFFECTIVE_
----- ---------- ----- ----------
AC006 10 DC001 02/01/2002
AC006 20 DC002 02/01/2002
AC006 100 DC001 05/01/2002
AC006 50 DC002 05/02/2002
AC006 30 DC003 05/03/2002
AC006 20 DC008 05/04/2002
AC007 10 DC001 02/01/2002
AC007 20 DC002 02/01/2002
AC007 77 DC001 05/01/2002
AC007 32 DC002 05/02/2002
AC007 52 DC003 05/03/2002
AC007 33 DC008 05/04/2002

12 rows selected.

I'll show you at least three different ways to get this. The first is the best in my opinion (easy to read, performs well):

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select item_no, sum(qty)
2 from (
3 select distinct item_no, bin,
4 first_value(qty) over(partition by item_no, bin
5 order by effective_date desc) qty
6 from t
7 )
8 group by item_no
9 /

ITEM_ SUM(QTY)
----- ----------
AC006 200
AC007 194

it uses the analytic functions available with 8.1.6 EE and up...
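For readers who want to poke at this outside Oracle: the same first_value approach runs unchanged on any engine with window functions. Here is a minimal sketch using Python's bundled sqlite3 (SQLite 3.25+ is assumed for window-function support); the table name is reused from the example, and dates are stored as ISO strings so text ordering matches date ordering.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (item_no TEXT, qty INTEGER, bin TEXT, effective_date TEXT);
INSERT INTO t VALUES
  ('AC006',  10, 'DC001', '2002-02-01'),
  ('AC006',  20, 'DC002', '2002-02-01'),
  ('AC006', 100, 'DC001', '2002-05-01'),
  ('AC006',  50, 'DC002', '2002-05-02'),
  ('AC006',  30, 'DC003', '2002-05-03'),
  ('AC006',  20, 'DC008', '2002-05-04');
""")

# first_value(qty) ordered by effective_date DESC picks the latest qty
# for every (item_no, bin); DISTINCT collapses the duplicated rows, and
# the outer SUM adds up one "current" qty per bin.
rows = conn.execute("""
SELECT item_no, SUM(qty) AS total
  FROM (SELECT DISTINCT item_no, bin,
               first_value(qty) OVER (PARTITION BY item_no, bin
                                      ORDER BY effective_date DESC) AS qty
          FROM t)
 GROUP BY item_no
""").fetchall()
print(rows)  # [('AC006', 200)]
```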

The next uses a "max" trick I use a lot to avoid correlated subqueries:


ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select substr( data, 14, 5 ) item_no, sum( to_number( substr( data, 19 ) ) ) qty
2 from (
3 select max( to_char( effective_date, 'YYYYMMDD' ) || rpad( bin, 5 ) || rpad( item_no, 5 ) || to_char(qty) ) data
4 from t
5 group by item_no, bin
6 )
7 group by substr( data, 14, 5 )
8 /

ITEM_ QTY
----- ----------
AC006 200
AC007 194

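The max-trick can be emulated the same way. This sketch assumes fixed widths -- 10-character ISO dates and 5-character bin/item codes, so the positions fed to substr() are specific to this sample data -- and skips Oracle's padding of qty, because the leading date already decides the MAX within each group.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (item_no TEXT, qty INTEGER, bin TEXT, effective_date TEXT);
INSERT INTO t VALUES
  ('AC006',  10, 'DC001', '2002-02-01'),
  ('AC006',  20, 'DC002', '2002-02-01'),
  ('AC006', 100, 'DC001', '2002-05-01'),
  ('AC006',  50, 'DC002', '2002-05-02'),
  ('AC006',  30, 'DC003', '2002-05-03'),
  ('AC006',  20, 'DC008', '2002-05-04');
""")

# Build one string per (item_no, bin) group: date(10) || bin(5) || item(5) || qty.
# The sortable date comes first, so MAX() keeps the latest row's string; the
# item and qty are then carved back out by position.
rows = conn.execute("""
SELECT substr(data, 16, 5) AS item_no,
       SUM(CAST(substr(data, 21) AS INTEGER)) AS total
  FROM (SELECT max(effective_date || bin || item_no || qty) AS data
          FROM t
         GROUP BY item_no, bin)
 GROUP BY substr(data, 16, 5)
""").fetchall()
print(rows)  # [('AC006', 200)]
```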

and the last uses a correlated subquery:

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select item_no, sum(qty)
2 from t
3 where effective_date = ( select max(effective_date)
4 from t t2
5 where t2.item_no = t.item_no
6 and t2.bin = t.bin )
7 group by item_no
8 /

ITEM_ SUM(QTY)
----- ----------
AC006 200
AC007 194

ops$tkyte@ORA817DEV.US.ORACLE.COM>

to achieve the same...
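The correlated-subquery form is plain ANSI SQL, so it too can be checked with sqlite3; the same illustrative table is rebuilt here so the sketch stands alone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (item_no TEXT, qty INTEGER, bin TEXT, effective_date TEXT);
INSERT INTO t VALUES
  ('AC006',  10, 'DC001', '2002-02-01'),
  ('AC006',  20, 'DC002', '2002-02-01'),
  ('AC006', 100, 'DC001', '2002-05-01'),
  ('AC006',  50, 'DC002', '2002-05-02'),
  ('AC006',  30, 'DC003', '2002-05-03'),
  ('AC006',  20, 'DC008', '2002-05-04');
""")

# For each row, the correlated subquery finds the latest date for that
# row's (item_no, bin); only rows at that date survive and are summed.
rows = conn.execute("""
SELECT item_no, SUM(qty) AS total
  FROM t
 WHERE effective_date = (SELECT max(effective_date)
                           FROM t t2
                          WHERE t2.item_no = t.item_no
                            AND t2.bin = t.bin)
 GROUP BY item_no
""").fetchall()
print(rows)  # [('AC006', 200)]
```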

3) No, not at all. Why would it be 150 and not 180? How would a computer know that -- you are using some hidden piece of knowledge in this case, and I don't even know what that hidden bit of knowledge is myself. Basically you are saying "hey, if there is some long period of time between sets of measurements, ignore the old stuff". Only if you could tell me exactly (procedurally) how to filter this data (e.g.: only use data whose effective_date is within 5 days of the max effective_date) could we write that query. Otherwise -- it would seem to me that the answer is 180, not 150.




Comments

SQL query

munz, March 17, 2002 - 3:49 pm UTC

Excellent as usual

Why are you using DISTINCT and then first_value here?

Pascal, March 19, 2002 - 7:38 am UTC

Referring to your query, just copied from above:

select item_no, sum(qty)
2 from (
3 select distinct item_no, bin,
4 first_value(qty) over(partition by item_no, bin
5 order by effective_date desc) qty
6 from t
7 )
8 group by item_no
9 /

I have been reading your book and have been trying these analytic functions myself, but sometimes I had to use DISTINCT to get correct results -- otherwise it doesn't work.

Can you please clarify a bit more why you are using DISTINCT and then first_value here?
Wouldn't first_value be sufficient?

Thanks and Best Regards,

pascal


Tom Kyte
March 19, 2002 - 9:04 am UTC

Well, I'm actually using the ANALYTIC function and then the distinct.

We got -- for every row in T (where there is more than one row for each ITEM_NO, BIN combination) -- the first_value of qty when sorted descending by date. That would give us 12 rows (using my example) with many duplicates. The distinct removes the duplicate ITEM_NO/BIN combinations, and then the sum collapses out the BIN dimension.

query

mo, June 24, 2003 - 4:09 pm UTC

Tom:

I have this query:

ISD> select a.request_id,a.shipment_id,b.item_number,b.stock_number,
2 max(a.shipment_date) from shipment a, shipped_item b
3 where a.shipment_id = b.shipment_id and b.disposition='B'
4 group by a.request_id,a.shipment_id,stock_number,item_number;

REQUEST_ID SHIPMENT_ID ITEM_NUMBER STOCK_NUMB MAX(A.SHIPMENT_DATE)
---------- ----------- ----------- ---------- --------------------
593 2074 2 AC010 24-jun-2003 00:00:00
593 2074 4 BB005 24-jun-2003 00:00:00
593 2075 2 AC010 25-jun-2003 00:00:00
593 2075 4 BB005 25-jun-2003 00:00:00
594 2076 1 AC010 24-jun-2003 00:00:00
594 2076 2 BB005 24-jun-2003 00:00:00
594 2076 3 MS193 24-jun-2003 00:00:00

7 rows selected.

What I am trying to get is the records of the shipment with the max date, so here it would exclude the first two. How would you do this?

Tom Kyte
June 25, 2003 - 11:31 am UTC

select * from (
select a.request_id,a.shipment_id,b.item_number,b.stock_number,
a.shipment_date,
max(a.shipment_date) over (partition by shipment_id) max_ship_date
from shipment a, shipped_item b
where a.shipment_id = b.shipment_id
and b.disposition='B'
)
where shipment_date = max_ship_date
/
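The pattern here -- carry the partition max along as an extra column, then keep only the rows equal to it -- can be sketched on a flattened, single-table version of the poster's data (table and column names are illustrative, and request_id is used as the partition key for this sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (request_id INTEGER, shipment_id INTEGER,
                        stock_number TEXT, shipment_date TEXT);
INSERT INTO shipments VALUES
  (593, 2074, 'AC010', '2003-06-24'),
  (593, 2074, 'BB005', '2003-06-24'),
  (593, 2075, 'AC010', '2003-06-25'),
  (593, 2075, 'BB005', '2003-06-25'),
  (594, 2076, 'AC010', '2003-06-24');
""")

# Compute the per-partition max date alongside every row, then keep only
# the rows whose own date equals that max -- the older 2074 rows drop out.
rows = conn.execute("""
SELECT request_id, shipment_id, stock_number
  FROM (SELECT s.*,
               max(shipment_date) OVER (PARTITION BY request_id) AS max_dt
          FROM shipments s)
 WHERE shipment_date = max_dt
 ORDER BY request_id, stock_number
""").fetchall()
print(rows)  # [(593, 2075, 'AC010'), (593, 2075, 'BB005'), (594, 2076, 'AC010')]
```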

actual query

mo, June 24, 2003 - 5:03 pm UTC

Tom:

Please disregard the previous query; here is the actual one. I have two tables: shipment (PK is shipment_id) and shipped_item (PK is shipment_id, stock_number and item_number). For every shipment record that has a backorder flag, I want to read the quantity. However, if it is the same item and request id, I am only interested in the last shipped one. So here the result should be such that I skip the first and third records.

1 select a.request_id,b.stock_number,b.backorder_quantity,
2 a.shipment_date from shipment a, shipped_item b
3 where a.shipment_id = b.shipment_id and b.disposition='B'
5* order by 1,2
ISD> /

REQUEST_ID STOCK_NUMB BACKORDER_QUANTITY SHIPMENT_DATE
---------- ---------- ------------------ --------------------
593 AC010 190 24-jun-2003 00:00:00
593 AC010 145 25-jun-2003 00:00:00
593 BB005 380 24-jun-2003 00:00:00
593 BB005 300 25-jun-2003 00:00:00
594 AC010 90 24-jun-2003 00:00:00
594 BB005 50 24-jun-2003 00:00:00
594 MS193 50 24-jun-2003 00:00:00

7 rows selected.


Tom Kyte
June 25, 2003 - 11:34 am UTC

I think that is the same answer as the other one.

mo, June 25, 2003 - 11:58 am UTC

Tom:

Thanks a lot. Is it possible, using analytic functions too, to get the min(date) in the result set? Your answer gave me what I need. Now, is it possible to add the min(shipment_date) from the first record displayed in the initial result set to the final result set, which displays the record with the maximum shipment_date?



Tom Kyte
June 25, 2003 - 7:25 pm UTC

(did you even think to try "min"?????)

just try adding a select of min(...) over (....)

query

mo, June 25, 2003 - 9:34 pm UTC

Tom:

I think you misunderstood me, or maybe I misunderstood your last comment.

The max() over gave me the result set; however, I am trying to grab the lowest ship date from the first record. For example, for this data set, I need to grab the quantity from the record with the highest shipment_date, but then I also want to grab the lowest shipment date (the initial date).

REQUEST_ID STOCK_NUMB BACKORDER_QUANTITY SHIPMENT_DATE
---------- ---------- ------------------ --------------------
593 AC010 190 24-jun-2003 00:00:00
593 AC010 145 25-jun-2003 00:00:00
593 BB005 380 24-jun-2003 00:00:00
593 BB005 300 25-jun-2003 00:00:00
594 AC010 90 24-jun-2003 00:00:00
594 BB005 50 24-jun-2003 00:00:00
594 MS193 50 24-jun-2003 00:00:00

I would get for AC010:


593 AC010 145 24-jun-2003
or


593 AC010 145 25-jun-2003 24-jun-2003
(here the min(date) is added as another column).



Tom Kyte
June 26, 2003 - 8:51 am UTC

failing to see the problem here, did you even TRY adding


select * from (
select a.request_id,a.shipment_id,b.item_number,b.stock_number,
a.shipment_date,
max(a.shipment_date) over (partition by shipment_id) max_ship_date,
min(a.shipment_date) over (partition by shipment_id) min_ship_date

from shipment a, shipped_item b
where a.shipment_id = b.shipment_id
and b.disposition='B'
)
where shipment_date = max_ship_date
/

that'll get you the MIN shipment date by shipment_id for disposition = 'B'. You'll have max ship date, min ship date, etc.
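Adding the min() column works as described; a sketch against a flattened stand-in for the join (names illustrative), producing the poster's desired "latest quantity plus initial date" row for AC010:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (request_id INTEGER, stock_number TEXT,
                        backorder_quantity INTEGER, shipment_date TEXT);
INSERT INTO shipments VALUES
  (593, 'AC010', 190, '2003-06-24'),
  (593, 'AC010', 145, '2003-06-25');
""")

# min() over the same partition rides along as an extra column, so the
# surviving "latest" row also carries the earliest (initial) date.
rows = conn.execute("""
SELECT request_id, stock_number, backorder_quantity,
       shipment_date, min_dt AS initial_date
  FROM (SELECT s.*,
               max(shipment_date) OVER (PARTITION BY request_id, stock_number) AS max_dt,
               min(shipment_date) OVER (PARTITION BY request_id, stock_number) AS min_dt
          FROM shipments s)
 WHERE shipment_date = max_dt
""").fetchall()
print(rows)  # [(593, 'AC010', 145, '2003-06-25', '2003-06-24')]
```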

Great. This solves a lot of my issues.

A reader, June 27, 2003 - 10:16 am UTC

Thank you Tom.

sql query

mo, September 11, 2003 - 6:27 pm UTC

Tom:

I have this query that gives me all storage codes for each stock item for the "WASHDC" warehouse. The problem is that if a stock item is not stored anywhere in "WASHDC", it will be excluded from the result set. I want a master report of all stock items that shows blank entries if a stock item does not have an entry. All stock items are listed in the stock_item table.

What can I do to get this result set to union with all other stock items that are not in the result set, just so they are displayed on the report?

select stock_number,description,
max(decode(seq,1,storage_code,null)) Loc#1,
max(decode(seq,1,qty_Available,null)) qty#1,
max(decode(seq,2,storage_code,null)) Loc#2,
max(decode(seq,2,qty_Available,null)) qty#2,
max(decode(seq,3,storage_code,null)) loc#3,
max(decode(seq,3,qty_Available,null)) qty#3,
max(decode(seq,4,storage_code,null)) loc#4,
max(decode(seq,4,qty_Available,null)) qty#4
from (
select stock_number, storage_code,description,qty_available,
row_number() over(partition by stock_number order by stock_number nulls last) seq
from (select a.stock_number, b.storage_code,a.description,
compute_qty_stored(b.warehouse_id,a.stock_number,b.storage_Code) qty_available
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id in ('WASHDC')
and a.stock_number <> '99999'
union all
select a.stock_number, b.storage_code,a.description,
compute_qty_stored(b.warehouse_id,a.stock_number,b.storage_Code) qty_available
from stock_item a, storage_receipt b
where a.stock_number = b.stock_number(+)
and b.warehouse_id in ('WASHDC')
)
group by stock_number,storage_Code,description,qty_available )
where seq <=4

THanks,

Tom Kyte
September 11, 2003 - 8:21 pm UTC

you don't need a union, you just need to outer join to a distinct set of all stock items at the end.

take that query -- and outer join it to the distinct set of stock items.

query

mo, September 11, 2003 - 9:43 pm UTC

TOm:

1. Do you mean doing it like this:

select * from (query) a, stock_item b where a.stock_number=b.stock_number(+);

2. or do the join in the innermost (warehouse column) query, like:

select a.stock_number, b.storage_code,a.description,

compute_qty_stored(b.warehouse_id,a.stock_number,b.storage_Code) qty_available
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id(+) in ('WASHDC')
and a.stock_number <> '99999'

thanks

Tom Kyte
September 12, 2003 - 9:47 am UTC

select *
from ( your_query ) a,
( query to get DISTINCT stock numbers ) b
where a.stock#(+) = b.stock#
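In ANSI join syntax, `a.stock#(+) = b.stock#` is `b LEFT JOIN a`: every distinct stock number is preserved and the query's columns come back NULL where there is no match. A toy sketch (table names invented for the sketch, with a small stand-in for the big pivot query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock_item (stock_number TEXT);
INSERT INTO stock_item VALUES ('AC006'), ('AC007'), ('MS001');
CREATE TABLE located (stock_number TEXT, storage_code TEXT);
INSERT INTO located VALUES ('AC006', 'WASHDC-01');
""")

# "located" stands in for the pivot query; the distinct list of all stock
# numbers drives the join, so unlocated items survive with NULL columns.
rows = conn.execute("""
SELECT b.stock_number, a.storage_code
  FROM (SELECT DISTINCT stock_number FROM stock_item) b
  LEFT JOIN located a ON a.stock_number = b.stock_number
 ORDER BY b.stock_number
""").fetchall()
print(rows)  # [('AC006', 'WASHDC-01'), ('AC007', None), ('MS001', None)]
```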

Very Nice

R.Chacravarthi, September 12, 2003 - 10:07 am UTC

Hello Tom,
I have a question for you regarding a query. It deals with selecting min and max values in a query. For example:
select ename,sal from emp where sal = (select min(sal) from
emp ) or sal = (select max(sal) from emp);
It returns the employees getting the min and max sal values in the table emp. My question is:
"Can this query be written using the IN operator, like
select ename,sal from emp where sal in('query1','query2');" where query1 deals with min(sal) and query2 deals with max(sal). Is this possible? I tried but it throws errors. Could you please help?
Thanks in advance. Please do reply.
P.S.: Please specify the other formats of the same query.



Tom Kyte
September 12, 2003 - 10:50 am UTC

ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename
  2   from emp
  3   where sal in ( (select min(sal) from emp), (select max(sal) from emp) );
 
ENAME
----------
SMITH
KING
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename
  2   from emp
  3   where sal in (select min(sal) from emp union select max(sal) from emp);
 
ENAME
----------
SMITH
KING
 


both work 
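Both forms are portable SQL; a quick check with sqlite3 and a three-row stand-in for scott.emp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ename TEXT, sal INTEGER);
INSERT INTO emp VALUES ('SMITH', 800), ('FORD', 3000), ('KING', 5000);
""")

# IN with a list of two scalar subqueries...
q1 = conn.execute("""
SELECT ename FROM emp
 WHERE sal IN ((SELECT min(sal) FROM emp), (SELECT max(sal) FROM emp))
 ORDER BY sal
""").fetchall()

# ...and the equivalent UNION form.
q2 = conn.execute("""
SELECT ename FROM emp
 WHERE sal IN (SELECT min(sal) FROM emp UNION SELECT max(sal) FROM emp)
 ORDER BY sal
""").fetchall()
print(q1, q2)  # [('SMITH',), ('KING',)] twice
```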

What do you think of this query... still doing a table scan??

A reader, September 12, 2003 - 1:59 pm UTC

SQL> sELECT
  2           employees.USID,
  3           review_status,
  4           jobs.ID,
  5           jobs.STATUS,
  6           JOB_JAVA_PKG.GET_CHEMIST(jobs.ID) ASSIGNED_TO,
  7           jobs.TEST ,
  8           JOB_JAVA_PKG.get_review_status(jobs.ID) REVIEW_STATUS,
  9           jobs.CREATED_BY ,
 10           jobs.REASON_FOR_CHANGE,
 11           DECODE(STATUS,'CANCELLED',jobs.MODIFIED_BY,NULL) MODIFIED_BY
 12      FROM JOBS, job_chemists, employees, enotebook_reviews
 13          -- where jobs.id is not null AND jobs.RECORD_STATUS='CURRENT'
 14           where jobs.RECORD_STATUS='CURRENT'
 15           and jobs.id = job_fk_id
 16           and job_chemists.RECORD_STATUS='CURRENT'
 17           and enotebook_reviews.RECORD_STATUS='CURRENT'
 18           and job_chemists.employee_fk_id = employee_id
 19           and e_notebook_entity_id = jobs.id
 20           and job_fk_id = e_notebook_entity_id
 21  /

611 rows selected.

Elapsed: 00:00:06.04

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=30 Card=1 Bytes=172)
   1    0   NESTED LOOPS (Cost=30 Card=1 Bytes=172)
   2    1     HASH JOIN (Cost=29 Card=1 Bytes=155)
   3    2       TABLE ACCESS (FULL) OF 'JOBS' (Cost=11 Card=615 Bytes=
          47970)

   4    2       HASH JOIN (Cost=17 Card=709 Bytes=54593)
   5    4         TABLE ACCESS (FULL) OF 'JOB_CHEMISTS' (Cost=11 Card=
          611 Bytes=25051)

   6    4         INDEX (FAST FULL SCAN) OF 'LOU1' (NON-UNIQUE) (Cost=
          5 Card=2156 Bytes=77616)

   7    1     TABLE ACCESS (BY INDEX ROWID) OF 'EMPLOYEES' (Cost=1 Car
          d=485 Bytes=8245)

   8    7       INDEX (UNIQUE SCAN) OF 'EMPLOYEES_PK' (UNIQUE)




Statistics
----------------------------------------------------------
       1833  recursive calls
          0  db block gets
      42323  consistent gets
          0  physical reads
          0  redo size
      43153  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        611  rows processed 

Tom Kyte
September 12, 2003 - 2:34 pm UTC


why do you think a full scan is a bad thing??

looks awesome I guess -- unless you can show me something better or explain why you think it is "bad"

I would guess most of the run time is in the stuff that has the word "java" in it.

Thanks!!!

A reader, September 12, 2003 - 3:16 pm UTC

Thanks Tom,

Do you think it is a good idea to call a function like this inside the select statement? The reason why I have it throughout my code is because I don't want to do a lot of joins. Please tell me if it is a good idea to call it. Anyway, here is the get_chemist function I called from the previous select. Any idea to make it better will be great!

FUNCTION get_chemist (p_job IN VARCHAR2)
RETURN VARCHAR2
IS
v_emp_id VARCHAR2 (35);
v_emp VARCHAR2 (50);
BEGIN
SELECT employee_fk_id
INTO v_emp_id
FROM job_chemists
WHERE job_fk_id = TRIM (p_job) AND record_status = 'CURRENT';

SELECT usid
INTO v_emp
FROM employees
WHERE TRIM (employee_id) = TRIM (v_emp_id);

RETURN v_emp;
END;

Tom Kyte
September 12, 2003 - 7:51 pm UTC

umm, so you think this would be better than a join??????

it is a join -- just a really really really slow join, the slowest way to do it.

don't second guess the database engine, it is pretty good at doing what it does -- joins!


I would not even consider calling such a function -- never in a billion years, not a chance.

do it in 100% SQL

SQL Statement to grab the next lowest value within a single row

David, September 12, 2003 - 5:16 pm UTC

Hi Tom,

I have been trying hard to think of a way to do this with just one easy SQL statement, as opposed to many PL/SQL code blocks.

The main issue is that we loaded data from a networked database to an Oracle 9i database without a hitch. During development, we discovered that in designing the new tables prior to the load, we missed the need to add a key column in one of the tables to establish a relationship with another. This was a big mistake. Unfortunately, we do not have time now to unload and reload the data into re-designed tables. The population of the owner key can only be done during the unload process, and there is simply no time left. The only option we have is to come up with a workable solution. I prefer, if possible, a single SQL statement that can retrieve the correct results.

So for example, in Table A:

SEQNO EFFECTIVE_DATE SERVICE_STATUS
2 20020426 ND
2 20020130 AB
2 20020129 AB
2 20011009 AB
2 20011002 AB
2 20010912 AB
2 20001028 AB
2 20001026 NP
1 20030328 AB
1 20020722 AB
1 20020720 AB
1 20020719 AB
1 20020717 NP

And in Table B:

SERIAL_NUMBER SEQNO EFFECTIVE_DATE
21814290378 2 20020426
21814290378 2 20020130
21814290378 2 20020129
21814290378 2 20011009
21814290378 2 20011002
21814317918 2 20010912
21814317918 2 20001028
9300134799 1 20030328
9300134799 1 20020722
9300134799 1 20020720
9300134799 1 20020719

There should be a SERIAL_NUMBER field in Table A so that we can pin-point the EFFECTIVE_DATE. This is the column that is missing. Table A should look like:

SEQNO EFFECTIVE_DATE SERVICE_STATUS SERIAL_NUMBER
2 20020426 ND 21814290378
2 20020130 AB 21814290378
2 20020129 AB 21814290378
2 20011009 AB 21814290378
2 20011002 AB 21814290378
2 20010912 AB 21814317918
2 20001028 AB 21814317918
2 20001026 NP
1 20030328 AB 9300134799
1 20020722 AB 9300134799
1 20020720 AB 9300134799
1 20020719 AB 9300134799
1 20020717 NP


So the way this works is that for a SEQNO, you can have multiple serial numbers. We need to produce a report that looks like:

SERIAL_NUMBER SRV_BEGIN
------------------------------------
9300134799 20020717
21814290378 20011002
21814317918 20001026

The logic is this: For each sequence number, list the effective dates of each serial number. The effective date should be the earliest date for that serial number. If the SERVICE_STATUS column in Table A is 'NP', then use the effective date for that serial number. If it isn't, then use the date that is in Table B. This seems simple, but is quite complicated. I am hoping that you can come up with a single SQL statement to do this.

Thanks.

David

Tom Kyte
September 12, 2003 - 8:03 pm UTC

isn't the logic more easily stated as:

for every row in table A where service_status <> NP, set serial_number = first serial number for that sequence you find in table b?



Whaa?

David, September 12, 2003 - 9:17 pm UTC

Do you mean "where service_status = NP"? How would you do it with one SQL statement?

Tom Kyte
September 13, 2003 - 9:01 am UTC

no, i meant

for every row -- where the status is NOT "NP" -- update that column to the first value for that sequence you find in the other table.

that matches your output. is that is what you want -- just


update tablea
set that_column = ( select that_column from tableb where tableb.seq = tablea.seq and rownum = 1 )
where status <> 'NP';
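A sketch of that update with sqlite3 (Oracle's `rownum = 1` becomes `LIMIT 1`; table and column names follow the example above, with placeholder column names where the thread left them generic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablea (seqno INTEGER, service_status TEXT, serial_number TEXT);
INSERT INTO tablea VALUES (1, 'AB', NULL), (1, 'NP', NULL);
CREATE TABLE tableb (seqno INTEGER, serial_number TEXT);
INSERT INTO tableb VALUES (1, '9300134799');
""")

# Every non-NP row picks up the first serial number found for its
# sequence; NP rows are left untouched.
conn.execute("""
UPDATE tablea
   SET serial_number = (SELECT serial_number FROM tableb
                         WHERE tableb.seqno = tablea.seqno LIMIT 1)
 WHERE service_status <> 'NP'
""")
rows = conn.execute("""
SELECT service_status, serial_number FROM tablea ORDER BY service_status
""").fetchall()
print(rows)  # [('AB', '9300134799'), ('NP', None)]
```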

Nice

R.Chacravarthi, September 13, 2003 - 12:50 am UTC

Thanks for your response. Sorry to disturb you again. I need a query which returns, deptno-wise, the highest- and lowest-salaried employees. For example, I expect the result set to be as follows:

deptno Min(sal) Minsal_ename Max(sal) Maxsal_ename
10 800 Smith 5000 King
20 1500 ... ...
30 ... .... .... ...

Could you please provide a query like this? Please specify different formats of the same query.

Tom Kyte
September 13, 2003 - 9:15 am UTC


please specify different formats?  do we get bonus points on homework for that :)


tell me, what happens when two people (or thirty people, whatever) make the most or least -- are we to pick one at random?

well, here are three ways -- note that the last query returns a different -- yet equally valid -- answer because of those duplicates

ops$tkyte@ORA920> create table emp as select * from scott.emp order by ename;
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select deptno,
  2         to_number(substr(xmax,1,14)) max_sal,
  3         substr(xmax,15) max_name,
  4         to_number(substr(xmin,1,14)) min_sal,
  5         substr(xmin,15) min_name
  6    from (
  7  select deptno,
  8         max( to_char(sal,'0000000000.00') || ename ) xmax,
  9         min( to_char(sal,'0000000000.00') || ename ) xmin
 10    from emp
 11   group by deptno
 12         )
 13  /
 
    DEPTNO    MAX_SAL MAX_NAME      MIN_SAL MIN_NAME
---------- ---------- ---------- ---------- ----------
        10       5000 KING             1300 MILLER
        20       3000 SCOTT             800 SMITH
        30       2850 BLAKE             950 JAMES
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select deptno,
  2         max(case when rn<>1 then sal else null end) max_sal,
  3         max(case when rn<>1 then ename else null end) max_ename,
  4         max(decode(rn,1,sal)) min_sal,
  5         max(decode(rn,1,ename)) min_ename
  6    from (
  7  select *
  8    from (
  9  select deptno,
 10         row_number() over ( partition by deptno order by sal ) rn,
 11         max(sal) over (partition by deptno) max_sal,
 12         ename,
 13         sal
 14    from emp
 15         )
 16   where rn = 1 or sal = max_sal
 17         )
 18   group by deptno
 19  /
 
    DEPTNO    MAX_SAL MAX_ENAME     MIN_SAL MIN_ENAME
---------- ---------- ---------- ---------- ----------
        10       5000 KING             1300 MILLER
        20       3000 SCOTT             800 SMITH
        30       2850 BLAKE             950 JAMES
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select deptno,
  2         max_sal,
  3         (select ename from emp where sal = max_sal and rownum = 1 ) max_ename,
  4         min_sal,
  5         (select ename from emp where sal = min_sal and rownum = 1 ) min_ename
  6    from (
  7  select deptno, min(sal) min_sal, max(sal) max_sal
  8    from emp
  9   group by deptno
 10         )
 11  /
 
    DEPTNO    MAX_SAL MAX_ENAME     MIN_SAL MIN_ENAME
---------- ---------- ---------- ---------- ----------
        10       5000 KING             1300 MILLER
        20       3000 FORD              800 SMITH
        30       2850 BLAKE             950 JAMES
 

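The first technique (fixed-width string concatenation) ports directly to other engines; this sketch swaps Oracle's to_char format for printf-style zero padding and uses a one-department stand-in for emp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (deptno INTEGER, ename TEXT, sal REAL);
INSERT INTO emp VALUES
  (10, 'MILLER', 1300), (10, 'CLARK', 2450), (10, 'KING', 5000);
""")

# printf('%013.2f', sal) zero-pads salaries to a fixed 13-character width,
# so string MIN/MAX order like numbers; the name rides along after the
# salary and is carved back off by position.
rows = conn.execute("""
SELECT deptno,
       CAST(substr(xmax, 1, 13) AS REAL) AS max_sal, substr(xmax, 14) AS max_name,
       CAST(substr(xmin, 1, 13) AS REAL) AS min_sal, substr(xmin, 14) AS min_name
  FROM (SELECT deptno,
               max(printf('%013.2f', sal) || ename) AS xmax,
               min(printf('%013.2f', sal) || ename) AS xmin
          FROM emp
         GROUP BY deptno)
""").fetchall()
print(rows)  # [(10, 5000.0, 'KING', 1300.0, 'MILLER')]
```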
Excellent

A reader, September 13, 2003 - 10:46 pm UTC

How do you reverse the rows of a table, i.e. starting from the last row and proceeding upwards? Can this be done in SQL? I tried in PL/SQL using index-by tables with the attributes LAST and PRIOR, and it works fine. Please show me a way to do it in SQL.
Thanks in advance.
Bye!

Tom Kyte
September 14, 2003 - 9:50 am UTC

there is no such thing as the "last row"...

order by DESC

and

order by ASC

are the only ways to have "first" and "last" rows. so, if you want the "last row" first and were using order by asc, use desc

Ok

Peter, September 15, 2003 - 12:46 am UTC

Dear Tom,
I read your response. One of my Oracle University tutors said the following: "A row in the table can be the last row if it has the maximum rowid, and a row can be the first if it has the minimum rowid". Is this correct information or misleading? I also have a question for you; kindly bear with us. It is:
select * from emp where &N = (select ....);
Keeping the data as it is (i.e. keeping data in the order of row insertion), if I enter "1" for N it should return the first row, and if I enter "5" it should return the fifth row. Is this possible in SQL? Please do reply.
Thanks in advance.

Tom Kyte
September 15, 2003 - 9:34 am UTC

it is incorrect.  

first, what does it mean to be "first" or "last"??????? nothing, nothing at all - not in SQL, not in relational databases.  But, anyway, let me demonstrate:

borrowing the example from:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4423420997870

I'll reuse it here:

ops$tkyte@ORA920> create table t ( x int primary key, a char(2000), b char(2000), c char(2000), d char(2000), e char(2000) );
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t(x) values ( 1 );
1 row created.
 
ops$tkyte@ORA920> insert into t(x) values ( 2 );
1 row created.
 
ops$tkyte@ORA920> insert into t(x) values ( 3 );
1 row created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> commit;
Commit complete.
 
ops$tkyte@ORA920> column a noprint
ops$tkyte@ORA920> column b noprint
ops$tkyte@ORA920> column c noprint
ops$tkyte@ORA920> column d noprint
ops$tkyte@ORA920> column e noprint
ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from t;
 
         X
----------
         1
         2
         3

<b>so, data comes out in the order it is inserted -- or not???  the "last row" is 3, first row is "1" right?  or is it??</b>
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> update t set a = 'x', b = 'x', c = 'x' where x = 3;
 
1 row updated.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> commit;
 
Commit complete.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> update t set a = 'x', b = 'x', c = 'x' where x = 2;
 
1 row updated.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> commit;
 
Commit complete.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> update t set a = 'x', b = 'x', c = 'x' where x = 1;
 
1 row updated.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> commit;
 
Commit complete.

<b>just some updates right -- but carefully planned ones:</b>

 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from t;
 
         X
----------
         3
         2
         1

<b>now 3 is the "first" row and 1 is the "last row" but look at the rowids</b>
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select x, rowid, min(rowid) over (), max(rowid) over (),
  2         decode( rowid, min(rowid) over (), 'min rowid' ),
  3         decode( rowid, max(rowid) over (), 'max rowid' )
  4    from t;
 
  X ROWID              MIN(ROWID)OVER()   MAX(ROWID)OVER()   DECODE(RO DECODE(RO
--- ------------------ ------------------ ------------------ --------- ---------
  3 AAAJ49AALAAAQaiAAC AAAJ49AALAAAQaiAAA AAAJ49AALAAAQaiAAC           max rowid
  2 AAAJ49AALAAAQaiAAB AAAJ49AALAAAQaiAAA AAAJ49AALAAAQaiAAC
  1 AAAJ49AALAAAQaiAAA AAAJ49AALAAAQaiAAA AAAJ49AALAAAQaiAAC min rowid
 

<b>3 has max rowid, 1 has min rowid but -- 3 is first and 1 is last!!!</b>

so, that shows the "maximum rowid theory" is false.....





As for the last question -- hopefully you see now that there is NO SUCH THING as the n'th row in a relational database (unless you yourself number them!)


You can get the n'th row from a result set -- but the n'th row can and will change over time.  Only by numbering the rows yourself and using order by can you get the same n'th row time after time after time 


select *
  from ( select a.*, rownum r
           from ( query-to-define-your-result-set ) a
          where rownum <= &n )
 where r = &n;


gets the n'th row from a result set (NOT a table, a result set) 
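SQLite has no ROWNUM, so this sketch numbers the rows with row_number() over an explicit ORDER BY -- which is exactly the point being made: the n'th row is only stable because we deterministically ordered and numbered the result set ourselves.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (x INTEGER);
INSERT INTO t VALUES (30), (10), (20);
""")

# Number the rows of an explicitly ordered result set, then keep row n.
n = 2
rows = conn.execute("""
SELECT x
  FROM (SELECT x, row_number() OVER (ORDER BY x) AS r FROM t)
 WHERE r = ?
""", (n,)).fetchall()
print(rows)  # [(20,)]  -- the 2nd row of the ordered set 10, 20, 30
```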

Problem with SQL Query

Raghu, September 15, 2003 - 8:20 am UTC

Hi TOm,

I don't understand the reason behind this behaviour. 

SQL> desc dept
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DEPTNO                                             NUMBER(2)
 DNAME                                              VARCHAR2(14)
 LOC                                                VARCHAR2(13)

SQL> desc emp
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)

SQL> select * from dept
  2  where dname in
  3  (select dname from emp);

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

DNAME column is missing in the emp table but still it does not give any error. 

Tom Kyte
September 15, 2003 - 9:43 am UTC

that query is the same as:

select * from dept
where DEPT.dname in ( select DEPT.dname from emp );


you just did a correlated subquery. as long as EMP has at least one row -- that query is the same as:

select * from dept
where DEPT.dname is not null;


it is the defined, expected behaviour.
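The name-resolution behaviour is easy to demonstrate; SQLite resolves the unqualified dname the same way (tiny stand-ins for dept and emp):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno INTEGER, dname TEXT);
INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
CREATE TABLE emp (empno INTEGER, deptno INTEGER);
INSERT INTO emp VALUES (1, 10);
""")

# emp has no dname column, so "dname" inside the subquery silently
# resolves to the outer dept.dname -- a correlated subquery, not an error.
# With at least one emp row, every dept with a non-NULL dname comes back.
rows = conn.execute("""
SELECT dname FROM dept WHERE dname IN (SELECT dname FROM emp)
 ORDER BY dname
""").fetchall()
print(rows)  # [('ACCOUNTING',), ('RESEARCH',)]
```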

Thanks!!!

A reader, September 15, 2003 - 9:01 am UTC

Thanks Tom,

Do you think it is a good idea to call a function like this inside the select statement? The reason why I have it throughout my code is because I don't want to do a lot of joins. Please tell me if it is a good idea to call it. Anyway, here is the get_chemist function I called from the previous select. Any idea to make it better will be great!

FUNCTION get_chemist (p_job IN VARCHAR2)
RETURN VARCHAR2
IS
v_emp_id VARCHAR2 (35);
v_emp VARCHAR2 (50);
BEGIN
SELECT employee_fk_id
INTO v_emp_id
FROM job_chemists
WHERE job_fk_id = TRIM (p_job) AND record_status = 'CURRENT';

SELECT usid
INTO v_emp
FROM employees
WHERE TRIM (employee_id) = TRIM (v_emp_id);

RETURN v_emp;
END;

Tom Kyte
September 15, 2003 - 9:49 am UTC

it is a horrible idea.

joins are not evil.

calling plsql (or any function) from sql when you don't need to is.

joins are not evil.

databases were BORN to join

joins are not evil.

you are doing a join yourself -- that can only be slower than letting the database do what it was programmed to do!

joins are NOT evil.


don't do this (and that abuse of trim() -- on a database column!! -- precludes indexes in most cases). You want to rethink that -- why would you need to trim()? That would indicate bad data -- something you want to fix in the ingest process, not on the way out!

simple sql question

A reader, September 16, 2003 - 12:37 pm UTC

Hi Tom
Here is the test case:
9:25:11 test@ORA901> @test1
09:25:14 test@ORA901> drop table t1;

Table dropped.

09:25:14 test@ORA901> drop table t2;

Table dropped.

09:25:14 test@ORA901> create table t1 ( x int , y varchar2(30) );

Table created.

09:25:14 test@ORA901> create table t2 ( x int , y varchar2(30) );

Table created.

09:25:14 test@ORA901> insert into t1 values ( 1, 'xx' );

1 row created.

09:25:14 test@ORA901> insert into t1 values ( 2, 'xx' );

1 row created.

09:25:14 test@ORA901> select t1.x , t1.y from t1, t2
09:25:14 2 where not exists ( select null from t2 where t2.x = t1.x );

no rows selected

09:25:14 test@ORA901>
09:25:14 test@ORA901> insert into t2 values ( 1, 'xx' );

1 row created.

09:25:14 test@ORA901> select t1.x, t1.y from t1, t2
09:25:14 2 where not exists ( select null from t2 where t2.x = t1.x );

2 xx


My question is:
Why do we get "no rows selected" in the first case? I would
have expected all the rows of t1. When I have at least one row in t2 (matching or not), then I get the desired result.
Basically, I want to select t1's values that don't exist
in t2. If t2 is empty I want to get all rows of t1.


Thank you!!!!

Tom Kyte
September 16, 2003 - 1:02 pm UTC


why are you cartesian product'ing the tables together?

the query should be:


ops$tkyte@ORA920> select t1.x , t1.y from t1
  2  where not exists ( select null from t2 where t2.x = t1.x );
 
         X Y
---------- ------------------------------
         1 xx
         2 xx


that is what the query should be -- when you join to an empty table -- you get no rows. 
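
The effect of that stray ", t2" in the FROM list can be seen outside Oracle as well. Below is a small sketch of the same t1/t2 test case in SQLite via Python's sqlite3 module (the data mirrors the example above; SQLite is only a stand-in engine here):

```python
import sqlite3

# Re-create the t1/t2 test case to show why the extra ", t2" in the
# FROM list changes the result when t2 is empty.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table t1 ( x int, y text );
    create table t2 ( x int, y text );
    insert into t1 values (1, 'xx');
    insert into t1 values (2, 'xx');
""")

anti_join = ("select t1.x from t1 "
             "where not exists (select 1 from t2 where t2.x = t1.x) "
             "order by t1.x")
cartesian = ("select t1.x from t1, t2 "
             "where not exists (select 1 from t2 where t2.x = t1.x) "
             "order by t1.x")

# With t2 empty: the plain anti-join returns every t1 row, but the
# cartesian product with an empty t2 produces no rows at all.
print([r[0] for r in con.execute(anti_join)])   # [1, 2]
print([r[0] for r in con.execute(cartesian)])   # []

con.execute("insert into t2 values (1, 'xx')")
# Now both drop x = 1, which is why the mistake went unnoticed
# as soon as t2 had a row.
print([r[0] for r in con.execute(anti_join)])   # [2]
```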

thanx tom!!

A reader, September 16, 2003 - 1:32 pm UTC

It was a stupid mistake - thanx for correcting me!

Good

Ferdinand Maer, September 18, 2003 - 8:25 am UTC

Dear Sir,
Suppose I have a string like "Metalink". How can I insert a blank space between each character so that the string
appears as "M e t a l i n k"? Is a query possible here?
Please do reply.
Thanks
Maer


Tom Kyte
September 18, 2003 - 10:41 am UTC

procedurally -- sure, it's easy. this would be one where I would write a small plsql function to do it.

OK

Ferdinand Maer, September 18, 2003 - 11:11 am UTC

Thanks for your reply. Please provide some hints to do the
coding. That's where we need your help.
Bye!
with regards,
Maer

Tom Kyte
September 18, 2003 - 11:18 am UTC

really?  seems pretty straightforward...

ops$tkyte@ORA920> create or replace function spacer( p_string in varchar2 ) return varchar2
  2  as
  3          l_string varchar2(4000);
  4  begin
  5          for i in 1 .. length(p_string)
  6          loop
  7                  l_string := l_string || substr(p_string,i,1) || ' ';
  8          end loop;
  9          return l_string;
 10  end;
 11  /
 
Function created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> column ename format a20
ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename, spacer(ename) ename from emp;
 
ENAME                ENAME
-------------------- --------------------
ADAMS                A D A M S
ALLEN                A L L E N
BLAKE                B L A K E
CLARK                C L A R K
FORD                 F O R D
JAMES                J A M E S
JONES                J O N E S
KING                 K I N G
MARTIN               M A R T I N
MILLER               M I L L E R
SCOTT                S C O T T
SMITH                S M I T H
TURNER               T U R N E R
WARD                 W A R D
 
14 rows selected.
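
The loop above just appends a space after every character and trims the tail. The same logic in one regex substitution, sketched here in Python (in Oracle 10g and later, regexp_replace(ename, '(.)', '\1 ') with an rtrim should achieve the same thing in plain SQL, though at the time of this question the PL/SQL loop was the way to go):

```python
import re

def spacer(s: str) -> str:
    # Same effect as the PL/SQL loop: a space after every character,
    # then trim the trailing space.
    return re.sub(r"(.)", r"\1 ", s).rstrip()

print(spacer("Metalink"))  # M e t a l i n k
```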
 
 

query

mo, January 10, 2004 - 2:26 pm UTC

Tom:

I would like to ask you about the best way to accomplish this query. It is used to generate a report.

I have 5 tables: Library table stores all libraries. Request table stores all requests submitted by libraries. Requested_Items stores all individual items requested. Shipment table stores all shipments sent to libraries. Shipped_Item stores all shipped items sent to libraries.

Library (LibCode,Name)
Request(Request_ID,LibCode)
Requested_Items(Request_ID,Item_no,Material_ID)
Shipment(Shipment_ID,Request_ID,LibCode)
Shipped_Item(Shipment_ID,Item_no,material_id)

Physical relationships are as follows:
Request table is child table to parent table Library.
Requested_Items table is a child table to parent table Request.
Shipment table is child table to parent table Request.
Shipped_Item is child table to parent table Shipment.

I want to generate a columnar report as follows:

Library Total Total Total Total
Request Request Items Shipments Shipped Items

Here is how I want to do it. Simple way but may be inefficient due to the several table scans.

For x in (select libcode from library)
Loop
Select count(*) into tot_request from request where libcode = x.libcode;
Select count(*) into tot_request_item from requested_items where request_id in (select request_id from request where libcode = x.libcode);
Select count(*) into tot_shipment from shipment where libcode = x.libcode;
Select count(*) into tot_shipped_item from shipped_items where shipment_id in (select shipment_id from shipment where libcode = x.libcode);
Htp.p(tot_request);
Htp.p(tot_request_item);
Htp.p(tot_shipment);
Htp.p(tot_shipped_item);
END LOOP;

Is this an efficient way of doing it or not. If not how would you do it?


Tom Kyte
January 10, 2004 - 4:14 pm UTC

select l.libcode, r.cnt1, r.cnt2, s.cnt1, s.cnt2
from library L,
(select request.libcode,
count(distinct request.request_id) cnt1,
count(*) cnt2
from request, requested_items
where request.request_id = requested_items.request_id(+)
group by request.libcode ) R,
(select shipment.libcode,
count(distinct shipment.shipment_id) cnt1,
count(*) cnt2
from shipment, shipped_items
where shipment.shipment_id = shipped_items.shipment_id(+)
group by shipment.libcode ) S
where l.libcode = r.libcode(+)
and l.libcode = s.libcode(+)


ONLY use the (+) if necessary. If the relations are not "optional" (eg: every libcode in library is in request and shipment, every row in request has at least one row in requested_items, and every row in shipment has a row in shipped_items) -- remove them; remove as many as you can.

that gets them all in one statement.
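
The shape of that answer -- aggregate each branch to the libcode level in an inline view, then outer-join the views to library -- can be sketched in SQLite through Python's sqlite3 module. The (+) joins are rewritten in ANSI syntax, and the table contents below are made up purely for illustration:

```python
import sqlite3

# Mock up the five tables with a couple of rows; 'L2' has no activity
# so the outer joins matter.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table library (libcode text, name text);
    create table request (request_id int, libcode text);
    create table requested_items (request_id int, item_no int, material_id int);
    create table shipment (shipment_id int, request_id int, libcode text);
    create table shipped_items (shipment_id int, item_no int, material_id int);

    insert into library values ('L1','Main'), ('L2','Branch');
    insert into request values (1,'L1'), (2,'L1');
    insert into requested_items values (1,1,10), (1,2,11), (2,1,12);
    insert into shipment values (100,1,'L1');
    insert into shipped_items values (100,1,10), (100,2,11);
""")

sql = """
select l.libcode, r.cnt1, r.cnt2, s.cnt1, s.cnt2
  from library l
  left join (select request.libcode,
                    count(distinct request.request_id) cnt1,
                    count(*) cnt2
               from request
               left join requested_items
                 on request.request_id = requested_items.request_id
              group by request.libcode) r
    on l.libcode = r.libcode
  left join (select shipment.libcode,
                    count(distinct shipment.shipment_id) cnt1,
                    count(*) cnt2
               from shipment
               left join shipped_items
                 on shipment.shipment_id = shipped_items.shipment_id
              group by shipment.libcode) s
    on l.libcode = s.libcode
 order by l.libcode
"""
rows = list(con.execute(sql))
for row in rows:
    print(row)
# ('L1', 2, 3, 1, 2)
# ('L2', None, None, None, None)
```

One caveat: count(*) in the inline views counts the preserved row too, so a request with zero items would report 1, not 0; counting requested_items.request_id instead would fix that edge case.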

SQL Query

Natasha, January 10, 2004 - 10:23 pm UTC

Tom,

I have a query pls. have a look at that

Query :
Select * From <Tab> Where Session_ID = <Session_Number>
point_of_sale, origin, destination, decode(category,'Y',1,'F',2,'J',3),
fare_id, rid, mfid, mrid.

Result Set:

MRBD BKGFRM BKGTO DEPFRM DEPTO FID RID MFID MRID
M 3/5/03 6/15/03 4/1/03 6/15/03 149 154 146 5427
M 3/5/03 6/19/03 6/16/03 6/19/03 149 154 147 5430
H 3/5/03 6/19/03 4/1/03 6/19/03 149 154 12364 16149

M 3/5/03 7/15/03 6/20/03 7/15/03 150 155 147 5430
M 3/5/03 7/31/03 7/16/03 7/31/03 150 155 3492 5432
H 3/5/03 7/31/03 6/20/03 7/31/03 150 155 12364 16149

M 3/5/03 8/31/03 8/16/03 8/31/03 151 8351 3491 5429
M 3/5/03 8/15/03 8/1/03 8/15/03 151 8351 3492 5432
H 3/5/03 8/31/03 8/1/03 8/31/03 151 8351 12364 16149

M 3/5/03 12/9/03 12/1/03 12/9/03 152 8353 149 160
M 3/5/03 12/25/03 12/10/03 12/25/03 152 8353 149 160
M 3/5/03 3/31/04 1/1/04 3/31/04 152 8353 150 161
M 3/5/03 12/31/03 12/26/03 12/31/03 152 8353 150 161
M 3/5/03 11/19/03 9/1/03 11/19/03 152 8353 3491 5429
M 3/5/03 11/30/03 11/20/03 11/30/03 152 8353 3491 5429
H 3/5/03 12/9/03 12/1/03 12/9/03 152 8353 12364 16149
H 3/5/03 11/19/03 9/1/03 11/19/03 152 8353 12364 16149
H 3/5/03 11/30/03 11/20/03 11/30/03 152 8353 12364 16149
H 3/5/03 3/31/04 1/1/04 3/31/04 152 8353 12364 16149
H 3/5/03 12/31/03 12/10/03 12/31/03 152 8353 12364 16149
M 3/5/03 8/31/03 8/16/03 8/31/03 4888 8394 3491 5429
M 3/5/03 8/15/03 8/1/03 8/15/03 4888 8394 3492 5432
H 3/5/03 8/31/03 8/1/03 8/31/03 4888 8394 12364 16149
M 3/5/03 12/9/03 12/1/03 12/9/03 4889 8395 149 160
M 3/5/03 12/25/03 12/10/03 12/25/03 4889 8395 149 160
M 3/5/03 3/31/04 1/1/04 3/31/04 4889 8395 150 161
M 3/5/03 12/31/03 12/26/03 12/31/03 4889 8395 150 161
M 3/5/03 11/19/03 9/1/03 11/19/03 4889 8395 3491 5429
M 3/5/03 11/30/03 11/20/03 11/30/03 4889 8395 3491 5429
H 3/5/03 12/9/03 12/1/03 12/9/03 4889 8395 12364 16149
H 3/5/03 11/19/03 9/1/03 11/19/03 4889 8395 12364 16149
H 3/5/03 11/30/03 11/20/03 11/30/03 4889 8395 12364 16149
H 3/5/03 3/31/04 1/1/04 3/31/04 4889 8395 12364 16149
H 3/5/03 12/31/03 12/10/03 12/31/03 4889 8395 12364 16149
M 3/5/03 6/15/03 4/1/03 6/15/03 15682 23867 146 5427
M 3/5/03 7/15/03 6/16/03 7/15/03 15682 23867 147 5430
M 3/5/03 12/25/03 12/1/03 12/25/03 15682 23867 149 160
M 3/5/03 3/31/04 12/26/03 3/31/04 15682 23867 150 161
M 3/5/03 11/30/03 8/16/03 11/30/03 15682 23867 3491 5429
M 3/5/03 8/15/03 7/16/03 8/15/03 15682 23867 3492 5432
H 3/1/03 3/31/04 3/1/03 3/31/04 15682 23867 12364 16149
H 4/1/07 3/31/08 4/1/07 3/31/08 15756 23990 12454 16232




What I am looking for :

Tom, if you check each FID and its Departure From & Departure To dates, there should be continuity -- I mean in the sequence of date breaks.

For Ex :-

FID DEPFRM DEPTO MRBD
149 01/04/2003 15/06/2003 M
149 16/06/2003 19/06/2003 M
149 01/04/2003 19/06/2003 H

This is the perfect output according to the date break sequence. But when you check FID '151', the date breaks are improper.

For Ex:-

FID DEPFRM DEPTO MRBD
151 16/08/2003 31/08/2003 M
151 01/08/2003 15/08/2003 M
151 01/08/2003 31/08/2003 M

So the output should come as

FID DEPFRM DEPTO MRBD
151 01/08/2003 15/08/2003 M
151 16/08/2003 31/08/2003 M
151 01/08/2003 31/08/2003 M


Pls. ensure that the date sequence should come for all the FID's.

I would appreciate, if you can answer my query.

thxs in advance.

Tom Kyte
January 11, 2004 - 6:09 am UTC

er? sorry, don't understand what you might be looking for here. don't know if you are looking for a constraint to avoid overlaps, a query to filter them out, whatever.

(example could be lots smaller too I think)

Query to filter

Natasha, January 11, 2004 - 7:56 am UTC

Tom,

I need a query to filter the records according to the above examples I had enclosed.

thxs in advance.


Tom Kyte
January 11, 2004 - 9:07 am UTC

don't get it. filter what like what?

looks like "where" with "order by" to me.

RE: Query to filter (Natasha)

Marcio, January 11, 2004 - 10:31 am UTC

<quote>
For Ex :-

FID DEPFRM DEPTO MRBD
149 01/04/2003 15/06/2003 M
149 16/06/2003 19/06/2003 M
149 01/04/2003 19/06/2003 H

This is the perfect output according to the date break sequence. But When you
check FID '151', the date breaks are improper.

For Ex:-

FID DEPFRM DEPTO MRBD
151 16/08/2003 31/08/2003 M
151 01/08/2003 15/08/2003 M
151 01/08/2003 31/08/2003 M

So the output should come as

FID DEPFRM DEPTO MRBD
151 01/08/2003 15/08/2003 M
151 16/08/2003 31/08/2003 M
151 01/08/2003 31/08/2003 M
</quote>
PMJI, but you don't show the query, like Tom said -- I used just order by depto and figured it out.
BTW: you have a 'H' on MRBD for 151, 01/08/2003, 31/08/2003.

ops$marcio@MARCI9I1> select fid, depfrm, depto, mrbd
2 from t
3 where fid=151
4* order by depto
ops$marcio@MARCI9I1> /

FID DEPFRM DEPTO M
---------- ---------- ---------- -
151 01/08/2003 15/08/2003 M
151 16/08/2003 31/08/2003 M
151 01/08/2003 31/08/2003 H

regards,


query

mo, February 26, 2004 - 6:52 pm UTC

Tom:

1. If I have two tables, one for Requests (Orders) and one for Shipments, linked by request_id, and each table has a child table for items, is it possible to run a sql statement to get a count of items in each order (requested and shipped)?

Requests
(request_id number(10),
request_date date);

Requested_items
(request_id number(10),
item_number number(5),
stock_number varchar2(10),
quantity number(8))

Shipment
(shipment_id number(10),
request_id number(10),
shipment_date date)

Shipped_items
(shipment_id number(10),
item_number number(10),
stock_number varchar2(10),
quantity number )

What i want is count the items requested versus items shipped and the quantities for each request. One request can have multiple shipments.

Request ID  Tot Req Items  Tot Ship Items  Tot Req Qty  Tot Qty Shipped

100         5              3               1000         800

Thank you,



Tom Kyte
February 26, 2004 - 7:17 pm UTC

aggregate everything up to the same level, just like we did above a while back.

use inline views, get things aggregated to the request_id level and join them.

reader, February 27, 2004 - 4:53 pm UTC

You gave 3 ways to get the result. For performance, which one is better if we have a big table?

Tom Kyte
February 27, 2004 - 5:06 pm UTC

they are in relative order in the answer.

QUESTION

PRS, February 28, 2004 - 11:31 am UTC

Hi Tom,
We have oracle 9.2.0.4 installed on solaris 9.
I have 2000 oracle connections to the database at a
given point in time. The users may be doing something
or they may be idle.

My question is: how do I find out the number of
concurrent users at the database level that are actually
doing something at a given second? We just want to find out the
number of concurrent users on the database.
Any help is appreciated.
Thanks,
PRS

Tom Kyte
February 28, 2004 - 12:50 pm UTC

concurrently "active" or concurrent users.

select status, count(*) from v$session group by status

will in fact answer both - you'll see "inactive", "active" and perhaps other statuses depending on your configuration. add them up and that is the number of concurrent users. if you are just interested in concurrent AND active users -- just use the "active" count.

query

mo, March 31, 2004 - 7:07 pm UTC

Tom:

How would you do this query:

Requests
(request_id number(10),
request_date date);

Requested_items
(request_id number(10),
item_number number(5),
stock_number varchar2(10),
quantity number(8))

Shipment
(shipment_id number(10),
request_id number(10),
shipment_date date)

Shipped_items
(shipment_id number(10),
stock_number varchar2(10),
quantity number,
code varchar2(1) )

I want to get a count of the backorder items that have been fulfilled in later shipments.

Initially when items shipped they get a code of "A" if shipped in full or "B" if it is on backorder. Later, if the backorder item was shipped it will get a code of "A".

Let us say I have this in shipments(shipment_id,request_id,date)

100,1,04/01/04
105,2,04/05/04

and in shipped_items(shipment_id,stock_no,qty,code)

100,AC002,10,A
100,AC006,5,B
101,AC006,5,A (here backorder for request 1(item2) gets filled)
105,BB005,6,A
105,BB006,10,B (here backorder for request 2(item2) stays open).

When I run the query i should get a count of 1 since only 1 item has been marked with A and then (same item) with B.



Tom Kyte
March 31, 2004 - 7:14 pm UTC

no idea how your tables relate to one another or how you are tracking data -- seems very strange to have the shipped items table set up that way. shipment id 101 doesn't seem to go anywhere.

if that 101 was really supposed to be 100 then a simple


select shipment_id, stock_number
from shipped_items
group by shipment_id, stock_number
having count(distinct code) = 2

gets all of the rows that you then probably want to count

select count(*)
from (
select shipment_id, stock_number
from shipped_items
group by shipment_id, stock_number
having count(distinct code) = 2
)

query

mo, March 31, 2004 - 9:43 pm UTC

No, I meant 101; however, I forgot to include it in the shipment table. One request can have many shipments.

Request and Shipment are linked using request_id. Shipment and Shipped_items are linked using shipment_id.

You gave me the idea though!

select count(*)
from (
select a.request_id,b.stock_number,b.code
from shipment a, shipped_items b
where a.shipment_id = b.shipment_id
group by request_id, stock_number
having count(distinct code) = 2
)
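
That grouping can be checked with a quick sketch in SQLite via Python's sqlite3, mocking up the shipment data from the example above (dates and the sample rows are illustrative):

```python
import sqlite3

# Re-create mo's shipment/shipped_items example to verify that grouping
# by (request_id, stock_number) counts the filled backorder exactly once.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table shipment (shipment_id int, request_id int, ship_date text);
    create table shipped_items (shipment_id int, stock_number text, qty int, code text);

    insert into shipment values (100,1,'2004-04-01'), (101,1,'2004-04-03'), (105,2,'2004-04-05');
    insert into shipped_items values
        (100,'AC002',10,'A'),
        (100,'AC006', 5,'B'),
        (101,'AC006', 5,'A'),   -- backorder for request 1 filled
        (105,'BB005', 6,'A'),
        (105,'BB006',10,'B');   -- backorder for request 2 still open
""")

(cnt,) = con.execute("""
    select count(*)
      from (select s.request_id, i.stock_number
              from shipment s join shipped_items i
                on s.shipment_id = i.shipment_id
             group by s.request_id, i.stock_number
            having count(distinct i.code) = 2)
""").fetchone()
print(cnt)  # 1 -- only request 1 / AC006 saw both codes
```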


Thank you,


query

mo, April 08, 2004 - 10:36 pm UTC

Tom:

In the query above I should have told you that there are about 8 shipment codes, not only "A" or "B". However, I am only interested in counting the items that had a "B" initially and then, when shipped, got a code of "A". In this case what would you add to this?

having count(distinct code) = 2 and code in ('A','B')

Is this right?


Tom Kyte
April 09, 2004 - 7:36 am UTC

code in ('A','B') goes in the where clause, not the having clause.

query

mo, April 09, 2004 - 6:59 pm UTC

Tom:

Can you check why this query is not giving the right answer? In the first set of data the count should be 0 because no item has had the two codes "A" and "B". In the second set of data I shipped one backorder item, and the count should change to 1, but it gave me 2 instead.

I am trying to get a total count of all items that have been shipped by request that get two codes "A" and "B".


SELECT e.request_id,disposition,f.stock_number
from shipment e, shipped_item f
where e.shipment_id = f.shipment_id(+) and
e.ship_to_org = 'DC1A' and
f.disposition in ('A','B')
group by request_id, disposition,stock_number;

REQUEST_ID D STOCK_NUMB
---------- - ----------
795 A AC008
795 A BB010
795 A CA294
795 B MA187
796 A AC008
796 A BB010
796 B CA294


select count(*) from
(
SELECT e.request_id,disposition,f.stock_number
from shipment e, shipped_item f
where e.shipment_id = f.shipment_id(+) and
e.ship_to_org = 'DC1A' and
f.disposition in ('A','B')
group by request_id, disposition,stock_number
)
group by request_id,stock_number
having count(distinct disposition)=2


no rows selected


Now when I ship one of the backorder items the count changes to 2 instead of 1.

REQUEST_ID D STOCK_NUMB
---------- - ----------
795 A AC008
795 A BB010
795 A CA294
795 A MA187
795 B MA187
796 A AC008
796 A BB010
796 B CA294

Thank you,

Tom Kyte
April 10, 2004 - 11:38 am UTC

should be obvious -- if the count( distinct something ) = 2, count(*) for that group will be AT LEAST 2.... at least 2 (and maybe more).....


seems like you need two levels of aggregation here -- you want to count the groups after the fact. use another inline view to count the rows that pass the having clause test.
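
The fix described above -- group by (request_id, stock_number) only, filter with HAVING, then count the surviving groups in an outer query -- can be sketched in SQLite via Python's sqlite3, using the sample rows from the second data set:

```python
import sqlite3

# The inner query groups by (request_id, stock_number) WITHOUT
# disposition; HAVING keeps groups that saw both codes; the outer
# query counts those groups.
con = sqlite3.connect(":memory:")
con.execute("create table t (request_id int, disposition text, stock_number text)")
con.executemany("insert into t values (?,?,?)", [
    (795,'A','AC008'), (795,'A','BB010'), (795,'A','CA294'),
    (795,'A','MA187'), (795,'B','MA187'),
    (796,'A','AC008'), (796,'A','BB010'), (796,'B','CA294'),
])

(cnt,) = con.execute("""
    select count(*)
      from (select request_id, stock_number
              from t
             where disposition in ('A','B')
             group by request_id, stock_number
            having count(distinct disposition) = 2)
""").fetchone()
print(cnt)  # 1 -- only request 795 / MA187 was shipped with both codes
```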

help with the query

P, April 14, 2004 - 5:49 pm UTC

hi tom,
how should i change this query so it returns only one row -- the one where hct.batch_num = max(hct.batch_num), that is, the first row?
here is my query and the result set:

SELECT hct.batch_num,hct.batch_seq_num,hct.ap_order_num,hct.svc_num,sm.svc_num
FROM hist_cust_trans hct,service_mstr sm
WHERE hct.svc_num=sm.svc_num
AND hct.svc_type_cd=sm.svc_type_cd
AND hct.trans_status_cd not in ('NP', 'AN')
AND decode(hct.gcc_dest_num,NULL,'0','1')=sm.rstrctd_ind
AND substr(hct.ap_order_num,1,1)='T'
AND hct.svc_num=4042091777

45474 49472 TOFHNBQ2 4042091777 4042091777
45474 49473 TOFHNBQ2 4042091777 4042091777
45474 49474 TOFHNBQ2 4042091777 4042091777
43775 39582 TO803XY6 4042091777 4042091777
43775 39583 TO803XY6 4042091777 4042091777
43574 52701 TO8D5JX0 4042091777 4042091777
43574 52700 TO8D5JX0 4042091777 4042091777


Thank You

Tom Kyte
April 15, 2004 - 7:53 am UTC

there are many ways to accomplish this.  assume this is your current query and you want the row(s) with the max(sal) reported:

ops$tkyte@ORA9IR2> select sal, empno, ename
  2    from emp
  3   where deptno = 20;
 
       SAL      EMPNO ENAME
---------- ---------- ----------
       800       7369 SMITH
      2975       7566 JONES
      3000       7788 SCOTT
      1100       7876 ADAMS
      3000       7902 FORD
 
<b>analytics get that easily:</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select sal, max(sal) over () max_sal, empno, ename
  2    from emp
  3   where deptno = 20;
 
       SAL    MAX_SAL      EMPNO ENAME
---------- ---------- ---------- ----------
       800       3000       7369 SMITH
      2975       3000       7566 JONES
      3000       3000       7788 SCOTT
      1100       3000       7876 ADAMS
      3000       3000       7902 FORD
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from (
  3  select sal, max(sal) over () max_sal, empno, ename
  4    from emp
  5   where deptno = 20
  6         )
  7   where sal = max_sal;
 
       SAL    MAX_SAL      EMPNO ENAME
---------- ---------- ---------- ----------
      3000       3000       7788 SCOTT
      3000       3000       7902 FORD

<b>but that shows top-n queries can be tricky -- there are two highest salaries -- we can "solve" that by just getting "one" if that is an issue</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from (
  3  select sal, row_number() over (order by sal DESC) rn, empno, ename
  4    from emp
  5   where deptno = 20
  6         )
  7   where rn = 1;
 
       SAL         RN      EMPNO ENAME
---------- ---------- ---------- ----------
      3000          1       7788 SCOTT

<b>alternatively, we can use order by in a subquery:</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from (
  3  select sal, empno, ename
  4    from emp
  5   where deptno = 20
  6   order by sal desc
  7         )
  8   where rownum = 1;
 
       SAL      EMPNO ENAME
---------- ---------- ----------
      3000       7788 SCOTT
 
<b>as well -- there are other ways (if interested, i have a list of them in my book Effective Oracle by Design) but those are the two "easiest"</b>
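
The three variants above translate almost directly to other engines. Here is a sketch against a toy emp table in SQLite via Python's sqlite3 (window functions need SQLite 3.25 or later, which ships with recent Python builds; LIMIT plays the role of rownum = 1):

```python
import sqlite3

# Toy emp table with the deptno 20 rows from the example.
con = sqlite3.connect(":memory:")
con.execute("create table emp (empno int, ename text, sal int)")
con.executemany("insert into emp values (?,?,?)", [
    (7369,'SMITH', 800), (7566,'JONES',2975), (7788,'SCOTT',3000),
    (7876,'ADAMS',1100), (7902,'FORD', 3000),
])

# 1) max() over (): keeps ALL rows tied for the top salary.
ties = con.execute("""
    select ename
      from (select ename, sal, max(sal) over () max_sal from emp)
     where sal = max_sal
     order by ename
""").fetchall()
print(ties)  # [('FORD',), ('SCOTT',)]

# 2) row_number(): exactly one row, the tie broken arbitrarily.
one = con.execute("""
    select ename
      from (select ename, row_number() over (order by sal desc) rn from emp)
     where rn = 1
""").fetchone()

# 3) order by + limit: SQLite's analogue of the rownum = 1 subquery.
also_one = con.execute(
    "select ename from emp order by sal desc limit 1").fetchone()
```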
 

some more explanation

P, April 15, 2004 - 11:02 am UTC

Hi tom
That’s not what I am looking for; maybe you did not understand my question (I guess I did not give all the information). The first 2 columns make up the primary key for the table, and I want to return only the 1 row where batch_num is greatest. I need to pass this to an update statement, meaning I am going to use it as a subquery in an update statement (I am updating the service_mstr table), so I need only one row to be returned. In the update statement I would join on svc_num to the table being updated. So instead of hct.svc_num=4042091777 I would use the join column from the updated table, which is service_mstr. And I cannot use rownum as I am using it in an update statement.
So from the resultset I want to return only "TOFHNBQ2", which is from the last group.


SELECT hct.batch_num,hct.batch_seq_num,hct.ap_order_num,hct.svc_num,sm.svc_num
FROM hist_cust_trans hct,service_mstr sm
WHERE hct.svc_num=sm.svc_num
AND hct.svc_type_cd=sm.svc_type_cd
AND hct.trans_status_cd not in ('NP', 'AN')
AND decode(hct.gcc_dest_num,NULL,'0','1')=sm.rstrctd_ind
AND substr(hct.ap_order_num,1,1)='T'
AND hct.svc_num=4042091777

43574 52701 TO8D5JX0 4042091777 4042091777
43574 52700 TO8D5JX0 4042091777 4042091777

43775 39582 TO803XY6 4042091777 4042091777
43775 39583 TO803XY6 4042091777 4042091777

45474 49472 TOFHNBQ2 4042091777 4042091777
45474 49473 TOFHNBQ2 4042091777 4042091777
45474 49474 TOFHNBQ2 4042091777 4042091777


Tom Kyte
April 15, 2004 - 11:17 am UTC

(i can only answer that which is actually ASKED)...

what happens if

45474 49472 TOFHNBQ2x 4042091777 4042091777
45474 49473 TOFHNBQ2y 4042091777 4042091777
45474 49474 TOFHNBQ2z 4042091777 4042091777

is the data? you'll get a random one back


but in any case


set c = substr( ( select max( to_char(batch_num,'fm0000000000') || ap_order_num )
                    from ....
                   where .....
                     and hct.svc_num = updated_table.svc_num ), 11 )


assuming

o batch_num is at most a 10 digit positive number (if larger, add more zeroes and increase the 11 appropriately)



If I had lots of rows to update, i might two step this with a global temporary table and

o get all of the svc_numbers/ap_order_nums into a GTT with a primary key of svc_num

o update the join of that gtt to the details
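
The zero-padded "concatenate, max, then substr" trick can be sketched in SQLite via Python's sqlite3, with printf() standing in for to_char and the rows made up from the result set shown earlier; padding batch_num to a fixed width makes the string MAX agree with the numeric MAX:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table hct (batch_num int, ap_order_num text)")
con.executemany("insert into hct values (?,?)", [
    (45474,'TOFHNBQ2'), (43775,'TO803XY6'), (43574,'TO8D5JX0'),
])

# printf('%010d', ...) zero-pads to 10 digits, so the string MAX picks
# the row with the highest batch_num; substr(..., 11) strips the pad.
(ap,) = con.execute("""
    select substr(max(printf('%010d', batch_num) || ap_order_num), 11)
      from hct
""").fetchone()
print(ap)  # TOFHNBQ2 -- the order number of the highest batch_num
```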




Question

PRS, April 21, 2004 - 12:07 pm UTC

Tom,
   I have a following view defined on oracle 9.2.0.4 box.
CREATE OR REPLACE VIEW test_view (
   opportunity_id,
   bo_id_cust,
   wsi_account_id,
   wsi_account_no,
   first_name,
   last_name,
   opportunity_status,
   long1,
   td_opp_flw_up_act,
   long2,
   person_id,
   name,
   ra_campaign_id,
   ra_cmpgn_name,
   row_added_dttm,
   row_lastmant_dttm,
   next_activity_dt,
   act_close_dt,
   act_revenue,
   est_revenue,
   phone,
   dummy
  ,note )
AS
SELECT B.opportunity_id 
,B.BO_ID_CUST 
,A.WSI_ACCOUNT_ID 
,E.WSI_ACCOUNT_NO 
,B.FIRST_NAME 
,B.LAST_NAME 
,B.OPPORTUNITY_STATUS 
,f.xlatlongname long1
,A.TD_OPP_FLW_UP_ACT 
,g.xlatlongname long2 
,B.PERSON_ID 
,D.NAME 
,A.RA_CAMPAIGN_ID 
,C.RA_CMPGN_NAME 
,B.ROW_ADDED_DTTM 
,B.ROW_LASTMANT_DTTM 
,B.NEXT_ACTIVITY_DT
,B.ACT_CLOSE_DT 
,B.ACT_REVENUE 
,B.EST_REVENUE 
,B.PHONE 
,'' dummy
,crm_get_values_ref_cursor_pkg.get_opp_note(B.opportunity_id) note
FROM PS_WSI_RSF_OPP_FLD A 
, PS_RSF_OPPORTUNITY B 
, PS_RA_CAMPAIGN C 
, PS_RD_PERSON_NAME D 
, PS_WSI_ACCOUNT E 
,PSXLATITEM F
,PSXLATITEM G
WHERE A.OPPORTUNITY_ID = B.OPPORTUNITY_ID 
AND A.RA_CAMPAIGN_ID = C.RA_CAMPAIGN_ID(+) 
AND B.PERSON_ID = D.PERSON_ID(+) 
AND A.WSI_ACCOUNT_ID = E.WSI_ACCOUNT_ID(+)
AND f.fieldvalue(+) = b.opportunity_status
AND f.fieldname(+) = 'OPPORTUNITY_STATUS'
AND g.fieldvalue(+) = A.TD_OPP_FLW_UP_ACT 
AND g.fieldname(+) = 'TD_OPP_FLW_UP_ACT'
/

   My query is as shown below.
select /*+ FIRST_ROWS */ * from test_view where person_id = '50004100' and ra_campaign_id  = '4'
   This query comes back in .7 second. Following is the explain plan.

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=6444 Card= 3300 Bytes=745800)
   1    0   MERGE JOIN (OUTER) (Cost=6444 Card=3300 Bytes=745800)
   2    1     MERGE JOIN (OUTER) (Cost=5619 Card=3300 Bytes=666600)
   3    2       NESTED LOOPS (OUTER) (Cost=4794 Card=3300 Bytes=587400  )
   4    3         NESTED LOOPS (OUTER) (Cost=3144 Card=3300 Bytes=537900)
   5    4           NESTED LOOPS (OUTER) (Cost=2319 Card=3300 Bytes=435600)
   6    5             NESTED LOOPS (Cost=1494 Card=3300 Bytes=333300)
   7    6               TABLE ACCESS (BY INDEX ROWID) OF 'PS_RSF_OPPORTUNITY' (Cost=669 Card=3300 Bytes=264000)
   8    7                 INDEX (RANGE SCAN) OF 'PSJRSF_OPPORTUNITY' (NON-UNIQUE) (Cost=12 Card=3300)
   9    6               TABLE ACCESS (BY INDEX ROWID) OF 'PS_WSI_RSF_OPP_FLD' (Cost=2 Card=1 Bytes=21)
  10    9                 INDEX (UNIQUE SCAN) OF 'PS_WSI_RSF_OPP_FLD' (UNIQUE)
  11    5             TABLE ACCESS (BY INDEX ROWID) OF 'PSXLATITEM' (Cost=2 Card=1 Bytes=31)
  12   11               INDEX (RANGE SCAN) OF 'PSAPSXLATITEM' (NON-UNIQUE)
  13    4           TABLE ACCESS (BY INDEX ROWID) OF 'PSXLATITEM' (Cost=2 Card=1 Bytes=31)
  14   13             INDEX (RANGE SCAN) OF 'PSAPSXLATITEM' (NON-UNIQUE)
  15    3         TABLE ACCESS (BY INDEX ROWID) OF 'PS_WSI_ACCOUNT' (Cost=2 Card=1 Bytes=15)
  16   15           INDEX (RANGE SCAN) OF 'PSAWSI_ACCOUNT' (NON-UNIQUE ) (Cost=1 Card=1)
  17    2       BUFFER (SORT) (Cost=5617 Card=1 Bytes=24)
  18   17         TABLE ACCESS (BY INDEX ROWID) OF 'PS_RD_PERSON_NAME' (Cost=2 Card=1 Bytes=24)
  19   18           INDEX (RANGE SCAN) OF 'PSDRD_PERSON_NAME' (NON-UNIQUE)
  20    1     BUFFER (SORT) (Cost=6442 Card=1 Bytes=24)
  21   20       INDEX (FULL SCAN) OF 'PS0RA_CAMPAIGN' (NON-UNIQUE) (Cost=1 Card=1 Bytes=24)
Statistics
----------------------------------------------------------
        287  recursive calls
          0  db block gets
      22191  consistent gets
          0  physical reads
       3420  redo size
      58875  bytes sent via SQL*Net to client
        842  bytes received via SQL*Net from client
         19  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
        264  rows processed

As soon as I add order by clause query take 30 seconds to run. Please see below.

SQL> 
  1  select /*+ FIRST_ROWS */ * from test_view where person_id = '50004100' and ra_campaign_id  = '4'
  2* Order by person_id,row_added_dttm

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=12866 Card =3300 Bytes=745800)
   1    0   MERGE JOIN (OUTER) (Cost=12866 Card=3300 Bytes=745800)
   2    1     MERGE JOIN (OUTER) (Cost=12041 Card=3300 Bytes=666600)
   3    2       NESTED LOOPS (OUTER) (Cost=11216 Card=3300 Bytes=587400)
   4    3         NESTED LOOPS (OUTER) (Cost=10391 Card=3300 Bytes=485100)
   5    4           NESTED LOOPS (Cost=8741 Card=3300 Bytes=435600)
   6    5             NESTED LOOPS (OUTER) (Cost=7916 Card=3300 Bytes=366300)
   7    6               TABLE ACCESS (BY INDEX ROWID) OF 'PS_RSF_OPPORTUNITY' (Cost=7091 Card=3300 Bytes=264000)
   8    7                 INDEX (FULL SCAN) OF 'PSGRSF_OPPORTUNITY' (NON-UNIQUE) (Cost=26928 Card=3300)
   9    6               TABLE ACCESS (BY INDEX ROWID) OF 'PSXLATITEM'(Cost=2 Card=1 Bytes=31)
  10    9                 INDEX (RANGE SCAN) OF 'PSAPSXLATITEM' (NON-UNIQUE)
  11    5             TABLE ACCESS (BY INDEX ROWID) OF 'PS_WSI_RSF_OPP_FLD' (Cost=2 Card=1 Bytes=21)
  12   11               INDEX (UNIQUE SCAN) OF 'PS_WSI_RSF_OPP_FLD' (UNIQUE)
  13    4           TABLE ACCESS (BY INDEX ROWID) OF 'PS_WSI_ACCOUNT' (Cost=2 Card=1 Bytes=15)
  14   13             INDEX (RANGE SCAN) OF 'PSAWSI_ACCOUNT' (NON-UNIQUE) (Cost=1 Card=1)
  15    3         TABLE ACCESS (BY INDEX ROWID) OF 'PSXLATITEM' (Cost= 2 Card=1 Bytes=31)
  16   15           INDEX (RANGE SCAN) OF 'PSAPSXLATITEM' (NON-UNIQUE)
  17    2       BUFFER (SORT) (Cost=12039 Card=1 Bytes=24)
  18   17         TABLE ACCESS (BY INDEX ROWID) OF 'PS_RD_PERSON_NAME'(Cost=2 Card=1 Bytes=24)
  19   18           INDEX (RANGE SCAN) OF 'PSDRD_PERSON_NAME' (NON-UNIQUE)
  20    1     BUFFER (SORT) (Cost=12864 Card=1 Bytes=24)
  21   20       INDEX (FULL SCAN) OF 'PS0RA_CAMPAIGN' (NON-UNIQUE) (Cost=1 Card=1 Bytes=24)





Statistics
----------------------------------------------------------
        294  recursive calls
          0  db block gets
      57940  consistent gets
          0  physical reads
       1140  redo size
      60741  bytes sent via SQL*Net to client
        842  bytes received via SQL*Net from client
         19  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
        264  rows processed

SQL>

    It uses a different index. This query comes from a peoplesoft application, so it is the application that puts in the hint -- we do not. Our tables are analyzed with a 35% sample and our indexes are 100% analyzed nightly.

    Indexes are as under on PS_RSF_OPPORTUNITY table.
psarsf_opportunity
    next_activity_dt                ASC,
    person_id                       ASC
psbrsf_opportunity
    bo_id_cust                      ASC,
    bo_id_contact                   ASC
psdrsf_opportunity
    row_lastmant_dttm               ASC,
    person_id                       ASC
psersf_opportunity
    last_name                       ASC,
    first_name                      ASC
psgrsf_opportunity
    row_added_dttm                  ASC,
    person_id                       ASC
pshrsf_opportunity
    phone                           ASC
psjrsf_opportunity
    person_id                       ASC,
    opportunity_status              ASC
pskrsf_opportunity
    first_name                      ASC
psmrsf_opportunity
    act_close_dt                    ASC,
    person_id                       ASC
ps_rsf_opportunity
    opportunity_id                  ASC

    Indexes on PS_WSI_RSF_OPP_FLD tables are as under.
psawsi_rsf_opp_fld
    wsi_account_id                  ASC
psbwsi_rsf_opp_fld
    ra_campaign_id                  ASC
pscwsi_rsf_opp_fld
    td_opp_flw_up_act               ASC,
    opportunity_id                  ASC
ps_wsi_rsf_opp_fld
    opportunity_id                  ASC

    Why is the order by causing a problem? Any insight is appreciated.

Thanks,
PRS 

Query

mo, April 29, 2004 - 5:06 pm UTC

Tom:

Do you know why I am not getting the answer of 5 here.

SQL> select course_no from course;

 COURSE_NO
----------
         1
         2
         3
         4
         5

5 rows selected.

SQL> select distinct course_no from certificate;

 COURSE_NO
----------
         1
         2
         3
         4


5 rows selected.

SQL> select course_no from course where course_no not in (select distinct course_no
  2  from certificate);

no rows selected
 

Tom Kyte
April 29, 2004 - 5:17 pm UTC

give me

a) create table
b) insert into's

in order to reproduce.

query

mo, April 30, 2004 - 7:45 pm UTC

Tom:

This is really strange. When I replicate the tables/data in another instance I get correct results. When I run the query in the TEST instance it gives me correct results. Only when I run the query in the DEV instance do I get "no rows selected".

Here are the DMLs:

create table course
(course_no number(2) );


create table certificate
(certificate_no number(2),
course_no number(2) );

insert into course values (1);
insert into course values (2);
insert into course values (3);
insert into course values (4);
insert into course values (5);

insert into certificate values (10,1);
insert into certificate values (10,2);
insert into certificate values (10,3);
insert into certificate values (10,4);

Tom Kyte
May 01, 2004 - 9:11 am UTC

post the query plans for both the working and the non-working query (use sql_trace=true and post the tkprof reports for the queries, making sure to point out which is which)

Query

Brian Camire, May 01, 2004 - 1:54 pm UTC

It looks like you have NULL values in certificate.course_no as...

SQL> select distinct course_no from certificate;

 COURSE_NO
----------
         1
         2
         3
         4


5 rows selected.

...suggests (since there were 5 rows selected instead of 4).  This would cause the NOT IN condition to evaluate to UNKNOWN (never TRUE) and explain the behavior you are observing. 

Tom Kyte
May 01, 2004 - 5:50 pm UTC

that is what I thought -- but he said the output was:


SQL> select course_no from course;

 COURSE_NO
----------
         1
         2
         3
         4
         5

5 rows selected.

no nulls apparent there. 

Please help

Roselyn, May 01, 2004 - 2:22 pm UTC

Hello Sir,
Whenever we issue a SELECT (query) statement, does Oracle fetch a copy of the data to be operated on, or work against the original data itself?
I would like to know this.
Please do reply.



Tom Kyte
May 01, 2004 - 7:47 pm UTC

we do not "copy the data"

we use multi-versioning, if you issue a query such as:

select * from a_1_kabillion_row_table;

there will be NO IO until you actually fetch a row; then and only then will we read data and start processing.



If you haven't read the Concepts Guide -- you should, especially the chapter on concurrency and multi-versioning. It is what sets Oracle apart from all of the rest.

Query

Brian Camire, May 01, 2004 - 7:21 pm UTC

Yes, but it's the certificate table that's being selected from in the subquery.

Tom Kyte
May 01, 2004 - 7:56 pm UTC

AHHH, of course

that is it 100%

the null in there is doing it, definitely.

query

mo, May 02, 2004 - 12:19 pm UTC

Tom:

Yes, it was the null value. However, I thought it would run like this:

select course_no from course where course_no not in (select distinct
course_no
from certificate);

select (1,2,3,4,5) where this not in (1,2,3,4,null)

5 is still not in the set?

Tom Kyte
May 02, 2004 - 4:25 pm UTC

it is "unknown" whether 5 is not in that set, that is the essence of NULL.
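The three-valued logic Tom describes can be reproduced in any SQL engine. Below is a minimal sketch using Python's sqlite3 standard library (an illustration only, not the original Oracle session); it also shows the usual workaround, NOT EXISTS, which is not tripped up by the NULL:

```python
import sqlite3

# "5 NOT IN (1,2,3,4,NULL)" evaluates to UNKNOWN, not TRUE,
# so NOT IN against a set containing NULL never returns rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table course (course_no integer);
    create table certificate (certificate_no integer, course_no integer);
    insert into course values (1),(2),(3),(4),(5);
    insert into certificate values (10,1),(10,2),(10,3),(10,4),(10,null);
""")

not_in = conn.execute(
    "select course_no from course "
    "where course_no not in (select course_no from certificate)").fetchall()

# NOT EXISTS behaves as intuition expects, because each row's
# correlated subquery either finds a match or it doesn't:
not_exists = conn.execute(
    "select course_no from course c where not exists "
    "(select 1 from certificate ce where ce.course_no = c.course_no)").fetchall()

print(not_in)      # [] -- the NULL poisons NOT IN
print(not_exists)  # [(5,)]
```

NOT IN compares the value against every value in the subquery; one NULL makes `5 <> NULL` UNKNOWN, so the overall predicate can never be TRUE.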

query

mo, June 15, 2004 - 7:14 pm UTC

Tom:

For the tables listed above, I am trying to get a report that lists several computations per material item in the MATERIAL table, such as total requests that include each item, total shipments that include that item, etc.

I got two queries working separately and I could not combine them correctly. Can you combine them, and how?

select material_id,
count(case when request_id is not null and (request_status = 'Open' or request_status = 'Partial') and request_date < to_date('01-APR-2004') then 1 end) pending_req_cnt,
count(case when request_id is not null and request_date between to_date('01-APR-2004') and to_date ('01-JUL-2004') then 1 end) new_req_cnt,
count(case when request_id is not null and (request_status = 'Open' or request_status = 'Partial') and request_date < to_date ('01-JUL-2004') then 1 end) remain_req_cnt
from
(
select a.request_id,a.request_date,a.request_status,priority,b.item_number,
substr(c.description,1,15) description,c.material_id
from request a,requested_item b,material c
where a.request_id(+)=b.request_id and
b.material_id(+) = c.material_id
order by a.request_id
)
group by material_id


select material_id,
count(case when shipment_id is not null then 1 end) tot_shipments
from
(
select a.shipment_id,c.material_id
from shipment a, shipped_item b, material c
where a.shipment_id(+) = b.shipment_id and
b.material_id(+) = c.material_id
order by a.shipment_id
)
group by material_id

Tom Kyte
June 16, 2004 - 12:08 pm UTC

i don't know how you want them "combined".....

if you want a single row per material_id, just

select material_id, sum(cnt1), sum(cnt2), sum(cnt3), sum(cnt4)
from
(
select material_id, case( ... ) cnt1, case( .... ) cnt2,
to_number(null)cnt3, to_number(null) cnt4
from .....
group by ....
UNION ALL
select material_id, to_number(null), to_number(null),
case( ..... ) cnt3, case(....) cnt4
from ...
group by ...
)
group by material_id;

(i am assuming you would need a full outer join to put your two queries together; this trick avoids the need for a full outer join. Otherwise, if a material_id is either in BOTH results or in NEITHER result -- just join)
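Tom's UNION ALL technique can be sketched end to end. The example below is hypothetical (simplified one-row-per-material `req`/`ship` count tables, not the poster's schema) and uses Python's sqlite3 just to show the shape of the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table req  (material_id integer, cnt integer);
    create table ship (material_id integer, cnt integer);
    insert into req  values (1, 3), (2, 5);   -- material 3 has no requests
    insert into ship values (2, 1), (3, 7);   -- material 1 has no shipments
""")

# Each branch supplies NULL placeholders for the other branch's counts;
# the outer GROUP BY then collapses them into one row per material_id,
# which is what a full outer join would otherwise be needed for.
rows = conn.execute("""
    select material_id, sum(c1), sum(c2)
      from ( select material_id, cnt as c1, null as c2 from req
             union all
             select material_id, null, cnt from ship )
     group by material_id
     order by material_id
""").fetchall()

print(rows)  # [(1, 3, None), (2, 5, 1), (3, None, 7)]
```

Materials present in only one branch come through with NULL (None) for the other branch's total, exactly the rows a plain inner join would have dropped.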

query

mo, June 16, 2004 - 4:25 pm UTC

Tom:
Your way was truly brilliant. However it worked with this format:

select X.*,Y.* from
(
QUERY1
) X,
(
QUERY2
) Y

where x.material_id = y.material_id

Is your format better/faster, or are they both the same, since they involve two table scans?

Tom Kyte
June 16, 2004 - 4:30 pm UTC

they should be more or less equivalent, as long as a join is all you need.

if there are material ids in X not in Y or vice versa, you'd need the full outer join or my technique.

Max,Min and Last value combined

Segun Jeks, June 26, 2004 - 11:37 am UTC

Tom,

Kindly assist me. I do pray i'll get your response.
I have table :
create table mkt_daily
(ddate date,
price number(5,2),
coy varchar2(9),
constraints pk_md primary key (ddate,coy));
with these data:
insert into mkt_daily values(to_date('24-05-2004','dd-mm-yyyy'),'6.2','ASKTOM');
insert into mkt_daily values(to_date('25-05-2004','dd-mm-yyyy'),'6.1','ASKTOM');
insert into mkt_daily values(to_date('27-05-2004','dd-mm-yyyy'),'5.2','ASKTOM');
insert into mkt_daily values(to_date('26-05-2004','dd-mm-yyyy'),'5.2','ASKTOM');
insert into mkt_daily values(to_date('27-05-2004','dd-mm-yyyy'),'5.5','ASKTOM');
insert into mkt_daily values(to_date('28-05-2004','dd-mm-yyyy'),'6.0','ASKTOM');
insert into mkt_daily values(to_date('01-06-2004','dd-mm-yyyy'),'6.8','ASKTOM');
insert into mkt_daily values(to_date('02-06-2004','dd-mm-yyyy'),'5.5','ASKTOM');
insert into mkt_daily values(to_date('03-06-2004','dd-mm-yyyy'),'5.5','ASKTOM');
insert into mkt_daily values(to_date('07-06-2004','dd-mm-yyyy'),'5.6','ASKTOM');
insert into mkt_daily values(to_date('08-06-2004','dd-mm-yyyy'),'5.8','ASKTOM');
insert into mkt_daily values(to_date('09-06-2004','dd-mm-yyyy'),'5.9','ASKTOM');
insert into mkt_daily values(to_date('10-06-2004','dd-mm-yyyy'),'6.0','ASKTOM');
insert into mkt_daily values(to_date('11-06-2004','dd-mm-yyyy'),'6.1','ASKTOM');

My challenge is this: I want to fetch the max(price) High, min(price) Low and last price "Closing_Price" for every week,
i.e.
week high low Closing_Price
22 6.2 5.2 6.0
23 6.8 5.5 5.5
24 6.1 5.6 6.1

I have tried:
a) select max(price)high,to_char(ddate,'IW') from mkt_daily GROUP BY to_char(ddate,'IW');
b) select min(price),to_char(ddate,'IW') from mkt_daily GROUP BY to_char(ddate,'IW');

Thanks for your anticipated positive response.


Tom Kyte
June 26, 2004 - 7:03 pm UTC

ops$tkyte@ORA10G> select to_char(ddate,'iw'), max(price), min(price),
  2         to_number( substr( max(to_char(ddate,'yyyymmdd')||price), 9)) mp
  3    from mkt_daily
  4   group by to_char(ddate,'iw')
  5  /
 
TO MAX(PRICE) MIN(PRICE)         MP
-- ---------- ---------- ----------
22        6.2        5.2          6
23        6.8        5.5        5.5
24        6.1        5.6        6.1
 
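The trick in line 2 of Tom's query (glue a fixed-width sortable date onto the price, MAX the resulting string, then substring the price back out) can be sketched as follows. This is an illustration in Python's sqlite3, grouping by company rather than by ISO week for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table mkt_daily (ddate text, price real, coy text);
    insert into mkt_daily values
        ('2004-05-24', 6.2, 'ASKTOM'),
        ('2004-05-25', 6.1, 'ASKTOM'),
        ('2004-05-28', 6.0, 'ASKTOM');
""")

# Encode 'yyyymmdd' || price, MAX that string (the date prefix drives the
# ordering), then strip the 8-character date off to recover the price on
# the latest day of the group -- the "closing price".
rows = conn.execute("""
    select coy,
           max(price) as high,
           min(price) as low,
           cast(substr(max(strftime('%Y%m%d', ddate) || price), 9) as real) as close
      from mkt_daily
     group by coy
""").fetchall()

print(rows)  # [('ASKTOM', 6.2, 6.0, 6.0)]
```

The fixed-width date format is essential: it makes string order agree with date order, so the MAX of the concatenation is driven by the date alone.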

Max,min and last combined

Segun Jeks, June 28, 2004 - 7:09 am UTC

Tom,

Many thanks. I am impressed. Your response was timely and fantastic.

Do you have any book that I can read to understand some of the sql tricks, especially combining the sql functions to get desired results like you just did.

Tom Kyte
June 28, 2004 - 8:58 am UTC

well, mostly they are not 'tricks', they are the results of doing SQL for 17 years ;)

What I do (did) is list out the requirements in english to understand the question.

"by week, report the min price, the max price and the last price (price with the highest date for that week)"


That said it all -- it was easy after that. min/max -- trivial. The only 'hard' one was the "last price". I wrote about that a while ago:

https://asktom.oracle.com/Misc/oramag/on-html-db-bulking-up-and-speeding.html
PLAYING WITH AGGREGATION

and demonstrated a couple of techniques.  Since we were already aggregating (the by week told me that), i used the technique of encoding the thing we needed to max (the date) in a string and putting the related fields on the end of the string - so we'd get the data for the max(date) plus the other stuff - to substring back out.


if you are interested:

http://asktom.oracle.com/~tkyte/asktom_articles.html

has links to 4 years of columns -- each one pretty much has a different technique or two in it. (i also wrote about many of them in my books as well).


Anyone out there have a favorite "sql tricks" book?


quesiton on query rewriting

A reader, June 29, 2004 - 11:24 am UTC

somewhere on your site I saw a way of changing
the text of a query using a pl/sql package ( I think.) This is
useful for queries that you can not directly change
(3rd party application) and so want to intercept
it and silently modify it before executing...
Can you please point me to the thread or give the pl/sql
package name that does it?

thanx!

Tom Kyte
June 29, 2004 - 4:13 pm UTC

thanx!

A reader, June 29, 2004 - 4:14 pm UTC


query

mo, July 09, 2004 - 12:28 pm UTC

Tom:

Is there a way to run a SQL query with a numeric BETWEEN condition on a VARCHAR2 field whose values are either numbers or text? I want to search only the numeric values, for the field between 5000 and 6000.

Is it possible?

Tom Kyte
July 09, 2004 - 1:53 pm UTC

you'd need to first validate that what you have is in fact a number. if you expected only "digits", you can use replace/translate to determine that the string you have contains only the characters 0..9 and then to_number it.

You should have *some* metadata in your generic (as life goes on i am beginning to hate generic) implementation that you can look at to see if it is a number or not. If not, you just have random bits and bytes.
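In Oracle you would typically use TRANSLATE (or, in 10g, REGEXP_LIKE) to keep the digit-only rows before converting. Here is a sketch of the same idea with sqlite3's GLOB standing in for Oracle's functions, on a hypothetical table and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table props (val text);
    insert into props values ('5500'), ('abc'), ('7000'), ('59x0'), ('5999');
""")

# First keep only all-digit strings (so the cast is safe), then compare
# numerically -- never apply the numeric predicate to unvalidated text.
rows = conn.execute("""
    select val
      from props
     where val not glob '*[^0-9]*'             -- digits only
       and cast(val as integer) between 5000 and 6000
     order by val
""").fetchall()

print(rows)  # [('5500',), ('5999',)]
```

The validation predicate must come logically first; applying TO_NUMBER (or a cast) to 'abc' is exactly what raises errors in the mixed column.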

SQL Query

A reader, July 10, 2004 - 3:48 am UTC

How can I return multiple rows as a single row using sql queries?

From SQL*Plus, how can I return a single row when the result set contains multiple rows?

For example:

select tablespace_name from dba_segments

returns

SYSTEM
USER
TEMP

but I want this to return as 'SYSTEM','USER','TEMP'.

How can I achieve this from SQL*plus using sql queries?

Tom Kyte
July 10, 2004 - 9:19 am UTC

search this site for stragg
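For reference, the effect of stragg (a user-defined string-aggregate function; later Oracle releases provide LISTAGG built in) looks like this. Sketched with sqlite3's group_concat, which plays the same role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table segs (tablespace_name text);
    insert into segs values ('SYSTEM'), ('USER'), ('SYSTEM'), ('TEMP');
""")

# Collapse the distinct names into a single comma-delimited value --
# the job stragg does in the thread. Output order is unspecified.
row = conn.execute(
    "select group_concat(distinct tablespace_name) from segs").fetchone()

print(row[0])  # e.g. SYSTEM,USER,TEMP
```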

single row result for multiple rows

Mir, July 19, 2004 - 5:47 pm UTC

I am trying to write a SQL query that will publish single-row info from multiple rows.


create table t(
id number,
cusid number,
cat varchar2(3))
/

INSERT INTO t VALUES (1,1,'crd')
/
INSERT INTO t VALUES (2,2,'ins')
/
INSERT INTO t VALUES (3,2,'crd')
/
INSERT INTO t VALUES (4,2,'lns')
/
INSERT INTO t VALUES (5,3,'ins')
/
INSERT INTO t VALUES (6,3,'crd')
/
INSERT INTO t VALUES (7,4,'lns')
/
INSERT INTO t VALUES (8,4,'ins')
/
INSERT INTO t VALUES (9,4,'crd')
/
INSERT INTO t VALUES (10,5,'ins')
/
INSERT INTO t VALUES (11,6,'ins')
/
INSERT INTO t VALUES (12,6,'ins')
/
INSERT INTO t VALUES (13,7,'lns')
/
INSERT INTO t VALUES (14,7,'lns')
/

from the above data i want ID, CUSID = CUSID where cat = 'crd' OR CUSID should be the first value from the group

the output should be
id cusid cat
1 1 crd
3 2 crd
6 3 crd
9 4 crd
10 5 ins
11 6 ins
13 7 lns

Thanks


Tom Kyte
July 19, 2004 - 7:09 pm UTC

CUSID = CUSID where cat = 'crd' OR CUSID should 
be the first value from the group

doesn't make sense to me.  "cusid = cusid"??  "first value from the group"??  what group?

near as I can tell, you might mean:

ops$tkyte@ORA9IR2> select distinct
  2         first_value(id) over (partition by cusid order by decode(cat,'crd',1,2), cat ) fid,
  3         cusid,
  4         first_value(cat) over (partition by cusid order by decode(cat,'crd',1,2), cat ) fcat
  5    from t
  6  /
 
       FID      CUSID FCA
---------- ---------- ---
         1          1 crd
         3          2 crd
         6          3 crd
         9          4 crd
        10          5 ins
        11          6 ins
        13          7 lns
 
7 rows selected.
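Tom's first_value/decode pattern translates directly to standard window functions. Here is a cut-down sketch in sqlite3 (CASE stands in for Oracle's decode; only a subset of the poster's data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table t (id integer, cusid integer, cat text);
    insert into t values (1,1,'crd'), (2,2,'ins'), (3,2,'crd'),
                         (4,2,'lns'), (10,5,'ins');
""")

# Map 'crd' to sort key 1 and everything else to 2, so first_value
# picks the 'crd' row inside each cusid when one exists, otherwise
# the alphabetically first category.
rows = conn.execute("""
    select distinct
           first_value(id) over (partition by cusid
               order by case cat when 'crd' then 1 else 2 end, cat) as fid,
           cusid,
           first_value(cat) over (partition by cusid
               order by case cat when 'crd' then 1 else 2 end, cat) as fcat
      from t
     order by cusid
""").fetchall()

print(rows)  # [(1, 1, 'crd'), (3, 2, 'crd'), (10, 5, 'ins')]
```

DISTINCT is needed because the window functions emit the same (fid, cusid, fcat) triple for every row of a partition.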
 
 

Need help with combining multiple queries

denni50, July 20, 2004 - 5:07 pm UTC

Hi Tom

could you please help with the following queries.
I would like to combine them into one massive(relatively
speaking) query. I have them all working correctly
independently. I now have analytics so that would be
even better.

Finds first gift given by donor:
select i.giftamount,i.giftdate
from gift i,acga_dr a where i.giftdate in(select min(g.giftdate)
from gift g
where g.idnumber=i.idnumber
and g.giftamount>0)
and i.idnumber=a.idnumber

Finds last gift given by donor:
select i.giftamount,i.giftdate
from gift i,acga_dr a where i.giftdate in(select max(g.giftdate)
from gift g
where g.idnumber=i.idnumber
and g.giftamount>0)
and i.idnumber=a.idnumber

Performs aggregates on individual donor history and
inserts into new table:
begin
for i in(select g.idnumber,sum(g.giftamount)LTGifts,count(g.giftamount)LTCount,max(g.giftamount)LargestGift, min(g.giftdate)FirstGiftDate,max(g.giftdate)LastGiftDate
from gift g,acga_drL a
where g.idnumber=a.idnumber
group by g.idnumber
)loop
insert into acga_drL(ltgifts,ltcount,largestgift,firstgiftdate,lastgiftdate)
values(i.ltgifts,i.ltcount,i.largestgift,i.firstgiftdate,i.lastgiftdate);
end loop;
end;

thanks!..(sorry about the formatting here)




Tom Kyte
July 20, 2004 - 8:47 pm UTC

why did you run the first two queries? where did you use that output?

first 2 queries(should have explained better)

denni50, July 20, 2004 - 9:16 pm UTC

I haven't done anything with them yet. I just ran those to
test and see if the correct results would get produced(which
they did).

Now I want to include them as part of the "insert" query
so that the results(will create columns) of the first 2 queries can be combined with the columns of the insert query...and that's where I'm kind of stuck.

hope this makes more sense... :~)

ps: analytics is really cool and man does it speed things up

Tom Kyte
July 20, 2004 - 10:01 pm UTC

are all of the table names right? the L on the end of some but not others?

two tables

denni50, July 21, 2004 - 8:56 am UTC

The acga_drL table is the actual table the acga_dr table
is a copy for testing purposes.

Upon testing the 2 queries for correctness using the
test table(acga_dr) I would then combine the two queries
with the insert query to insert the results to the
acga_drL table.

thanks


Tom Kyte
July 21, 2004 - 9:06 am UTC

please clarify -- how many tables will the *real* thing be against?

is "acga_dr" there for the long haul -- the example confuses me with two tables, one of which might not be there for "long"

where I am heading is:

select g.idnumber,
sum(g.giftamount) LTGifts,
count(g.giftamount) LTCount,
max(g.giftamount) LargestGift,
min(g.giftdate) FirstGiftDate,
max(g.giftdate)LastGiftDate,
to_number(substr(max( case when g.giftamount>0 then to_char(g.giftdate,'yyyymmddhh24miss')||g.giftamount end), 15 )) LastGiftAmount,
to_number(substr(min( case when g.giftamount>0 then to_char(g.giftdate,'yyyymmddhh24miss')||g.giftamount end), 15 )) FirstGiftAmount
from gift g,acga_dr a
where g.idnumber=a.idnumber
group by g.idnumber


which uses the technique from:

https://asktom.oracle.com/Misc/oramag/on-html-db-bulking-up-and-speeding.html

"Playing with aggregation" to get the giftamount for the min/max date.

thanks Tom..

A reader, July 21, 2004 - 9:53 am UTC

Sorry for the obvious confusion (I'm still developing the
plan as I go along).

I was just given this assignment yesterday with a friday
deadline.

This is a donor research project and what I'm looking at is this:

a) first find all the lapsed donors with 2+ years on file
with a lastgiftamount > 0 between 18-36 months.
I already have that data and inserted that into acga_dr

b) of those donors give us:
Lifetime giftamounts
Lifetime count of gifts
Largest gift amount
First gift date
Last giftdate
First giftamount
Last giftamount
(I am at this stage)

c)then give us an annual cumulative giving history based
on the date of the lastgiftamount>0,-12 months.
This sum will fall into buckets(example):
$10-74 bucket
$75-149 "
$150-499 "
...and so on

d) then give a report on how many of the donors fall
into the above buckets...along with the column results
from step b.

my plan is to use one table to get the results from step
a, use that table to gather the data for steps b & c and
put those results into a separate table. Then create a
view to process the results from both tables for output to a .txt or .csv file.

If you have a better solution I'm all eyes and "ears"

hth






that last post is mine..forgot to insert my username

denni50, July 21, 2004 - 9:54 am UTC


THANKS TOM!

denni50, July 21, 2004 - 10:25 am UTC

your query did the trick as far as getting the
First Giftamount and Last Giftamount.

one last question(so I understand your case statement and
can learn from it).

why did you use to_number, substr and to_char to achieve
the results, and what does '15' represent?

one last THANKS!

Tom Kyte
July 21, 2004 - 11:00 am UTC

the 15 removes the leading 14 characters of yyyymmddhh24miss information (the date :)

we glued the date on front in a fixed width field that "sorts string wise" and minned/maxxed that. that'll give us the giftamount we glued onto the end -- which we just substr off.

SQL tricks

Syed, August 20, 2004 - 7:24 am UTC

Hi Tom

your ability to do things with SQL is so impressive. Like a previous reply, I'm sure many of us would love a small book / list of top 20 SQL tricks etc.

Anyway, I am sure the following cursor / loop could be combined into a single SQL statement (following your mantra of doing it in a single statement if possible), but I am not sure how to go about it.


cursor c1 is

select
y.rowid get_rowid
from
tab1 y,
(select col1, max(col2) collection from tab1 group by col1) x
where
x.col1 = y.col1
and
x.collection = y.collection;

for REC in c1 loop

insert into tab2 (a, b)
select
tab1.clid,
tab2.blid
from
tab1, tab2
where
tab1.rowid = REC.get_rowid
and
tab1.clid = tab2.blid(+);
end loop;


Thanks

Syed

Tom Kyte
August 20, 2004 - 11:19 am UTC

insert into tab2 (a, b)
select
tab1.clid,
tab2.blid
from
tab1, tab2, (YOUR CURSOR QUERY) tab3
where
tab1.rowid = tab3.get_rowid
and
tab1.clid = tab2.blid(+)
/

just "join" -- you are doing the classic "do it yourself nested loop join" there.

using analytics you could further simplify this. looks like you want the row from tab1 such that col2 is the max grouped by col1:

from
(select ...
from (select ..., max(col2) over (partition by col1) max_col2
from tab1 )
where col2 = max_col2) tab1, tab2
where tab1.xxx = tab2.xxxx (+)

would do that.
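The analytic rewrite Tom sketches (keep each row whose col2 equals the maximum of col2 within its col1 partition) can be shown concretely. An illustration with sqlite3 and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table tab1 (col1 integer, col2 integer, clid text);
    insert into tab1 values (1, 10, 'a'), (1, 20, 'b'), (2, 5, 'c');
""")

# Compute the per-partition max alongside each row, then keep only
# rows that match it -- one pass, no self-join back to an aggregate.
rows = conn.execute("""
    select col1, col2, clid
      from (select col1, col2, clid,
                   max(col2) over (partition by col1) as max_col2
              from tab1)
     where col2 = max_col2
     order by col1
""").fetchall()

print(rows)  # [(1, 20, 'b'), (2, 5, 'c')]
```

This replaces the original "group by then join back" subquery: the analytic version reads tab1 once instead of twice.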

Brilliant

Syed, August 23, 2004 - 5:40 am UTC


simplify further

Tom Burrows, August 23, 2004 - 10:36 am UTC

Hi tom

Can you show the final query from above with the analytical function in it ?

Thanks

Tom

Tom Kyte
August 23, 2004 - 10:41 am UTC

i did?!?

max(col2) over (partition by col1) max_col2

is the analytic

Why doesn't it work!!!!!!!!!!

A reader, August 23, 2004 - 3:48 pm UTC

Hi Tom,
I am executing the following query:
select
c.cmpname Company,
to_char(t.trnsaledate,'Day, DD-fmMonth-YYYY') Fecha,
TO_CHAR(sum(r.recvalue),'9,999,999,999.00') Total
from
stkrecord r,
stktransaction t,
stkcompany c
where
t.trnsaledate > to_date('30-Jul-2004','DD-Mon-YYYY')
and t.trnsaledate < to_date('23-Aug-2004','DD-Mon-YYYY')
and to_char(t.trnsaledate,'Day') = 'Friday'
and c.cmpid = t.trncmpid
and r.rectrnID = t.trnID
group by c.cmpname, to_char(t.trnsaledate,'Day, DD-fmMonth-YYYY')

It returns 0 rows.
Why doesn't the condition to_char(t.trnsaledate,'Day') = 'Friday' work?
Am I doing anything wrong? I only need the data for Fridays.
Is the approach wrong?
Thanks as always......

Tom Kyte
August 23, 2004 - 3:54 pm UTC

ops$tkyte@ORA9IR2> select '"' || to_char(d,'Day') || '"'
  2  from ( select sysdate+rownum d from all_users where rownum <= 7);
 
'"'||TO_CHA
-----------
"Tuesday  "
"Wednesday"
"Thursday "
"Friday   "
"Saturday "
"Sunday   "
"Monday   "
 
7 rows selected.
 
ops$tkyte@ORA9IR2> 1
  1* select '"' || to_char(d,'Day') || '"'
ops$tkyte@ORA9IR2> c/Day/fmDay/
  1* select '"' || to_char(d,'fmDay') || '"'
ops$tkyte@ORA9IR2> /
 
'"'||TO_CHA
-----------
"Tuesday"
"Wednesday"
"Thursday"
"Friday"
"Saturday"
"Sunday"
"Monday"
 
7 rows selected.



trailing blanks -- fm, the format modifier, trims them.

probably better to use:

where to_char(t.trnsaledate,'d') = 
         to_char(to_date('31-dec-1965','dd-mon-yyyy'),'d')


that 31-dec-1965 is a friday, it'll tell us the day of the week a friday is using your NLS setting....

It is safe (no whitespace)
It is international (works in places that spell friday differently!)
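The same language-independent idea works in most engines: compare numeric day-of-week codes instead of spelled-out names. A sketch with sqlite3's strftime('%w') (0 = Sunday), using Tom's 31-Dec-1965 Friday as the reference date:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 31-Dec-1965 was a Friday; '%w' yields its weekday code with no
# language dependence and no trailing blanks to trip over.
friday = conn.execute("select strftime('%w', '1965-12-31')").fetchone()[0]

rows = conn.execute("""
    select d
      from (select '2004-08-06' as d union all
            select '2004-08-07' union all
            select '2004-08-13')
     where strftime('%w', d) = strftime('%w', '1965-12-31')
     order by d
""").fetchall()

print(friday)  # '5'
print(rows)    # [('2004-08-06',), ('2004-08-13',)] -- only the Fridays
```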

 

A reader, August 23, 2004 - 7:41 pm UTC


function in order by clause

friend, August 24, 2004 - 8:18 am UTC

Hai Tom,

I have one function, fun, that returns 1.

If I use this function in the following query:

> select * from table_name order by fun;

(fun in the order by clause)

it doesn't give any syntax error, but it simply ignores the order by clause.

Is it correct to use functions in an order by clause?

Can we use a function after the FROM clause instead of a table name, by returning that table name from the function?





Tom Kyte
August 24, 2004 - 8:55 am UTC

if the function fun returns the constant one, how can you possibly say "it is ignoring it"

??????


select * from table_name order by 'X';

think about that -- what does that do - it orders by the constant 'X'


perhaps you are thinking that:

select * from table_name order by f()

where f() returns the number 1 is the same as:

select * from table_name order by 1;


-- but it cannot be (think about this:

ops$tkyte@ORA9IR2> create table t as select 1 xxx from all_users;
 
Table created.
 
ops$tkyte@ORA9IR2> select * from t order by xxx;
 
       XXX
----------
         1
         1
         1
....


here xxx is not ANY DIFFERENT than your function f())


You are simply ordering by a constant, you are NOT specifying to "order by column 1"
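The distinction Tom draws here (ORDER BY 1 is a column position, while any expression that merely evaluates to 1 is a constant) can be shown side by side. A sqlite3 illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table t (a integer, b text);
    insert into t values (2,'y'), (1,'z'), (3,'x');
""")

# ORDER BY 1 means "sort by the first select-list column" ...
by_position = conn.execute("select a, b from t order by 1").fetchall()

# ... while ORDER BY 1+0 is a constant expression (what a function
# returning 1 amounts to): every row compares equal, so the result
# order is whatever the engine happens to produce.
by_constant = conn.execute("select a, b from t order by 1+0").fetchall()

print(by_position)  # [(1, 'z'), (2, 'y'), (3, 'x')]
print(by_constant)  # same rows, order not guaranteed
```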
 

excellent Sql

A reader, August 25, 2004 - 4:32 am UTC

Hi Tom, your SQL is very useful to me. Thank you.

How to get distinct values from a table?

RB, September 02, 2004 - 11:37 am UTC

Tom:
I have a table
TABLE A
(src varchar2(30)
Dest varchar2(30),
Start_date DATE,
End_date DATE,
Sent_bytes NUMBER(30)
Received_Bytes NUMBER(30)
);

This is what I am doing now:

We obtain the first connection to the database.
We send the initial SQL statement "select * from A" using the first connection. The first connection returns a ResultSet, which contains metadata about the columns of table A and a pointer to the first record. We obtain a second connection to the database.

For each column defined by the metadata (repeat):
We send a new SQL statement “select distinct(src) from (select * from A) order by src desc” using the second connection.
The second connection returns a ResultSet, which contains all the distinct values of src in descending order
We pre-initialize dimension(src) using the records in this second ResultSet
Close the second connection to the database
Allocate the number of records defined by the first connection’s ResultSet
Now extract all the records returned by the first connection’s ResultSet populating our data structure with “data”
Once data is made available, the application can now define a visualization for this data

We extract out and define each dimension and their unique values in order to optimize user interactions and queries.


The problem is that performance is really bad, because we go column by column, doing a distinct and a sort on each one.

Is there a better way to get the result that I want? If I can get the whole thing in one fetch, that will be great. Again, this is just a sample table; our original table has 20+ columns and 300 million records, and it grows as well.

Any help is greatly appreciated

R





Tom Kyte
September 02, 2004 - 12:59 pm UTC

how many bad things did I see in the first paragraph!!

why you have more than one connection is far far far beyond my comprehension.

but beyond that -- why do you need the distinct values -- what are you doing? do you really retrieve 300million records to a client application????

How to get distinct values from a table?

R, September 02, 2004 - 3:05 pm UTC

Tom:
Having multiple connections - when I saw the code I felt the same way. I have already recommended that they use a single connection.
Distinct values -
We are developing a visualization package. To chart the data properly, the application needs all the distinct values from each column.

Right now one query is executed per column. I would like to eliminate that and get all the columns in one query if possible.

R

Tom Kyte
September 02, 2004 - 4:18 pm UTC


ops$tkyte@ORA9IR2> create or replace procedure all_distinct_values( p_tname in varchar2, p_cursor in out sys_refcursor )
  2  as
  3      l_stmt  long := 'select distinct ';
  4      l_decode_1 long := 'decode( r';
  5      l_decode_2 long := 'decode( r';
  6      l_decode_3 long := 'decode( r';
  7  begin
  8
  9      for x in (select column_id, data_type, column_name,
 10                       decode( data_type, 'DATE', 'to_char(' || column_name || ', ''yyyymmddhh24miss'' )',
 11                                          'NUMBER', 'to_char(' || column_name || ' )',
 12                                          column_name) fcname
 13                       from user_tab_columns where table_name = p_tname)
 14      loop
 15          l_decode_1 := l_decode_1 || ', ' || x.column_id || ', ''' || x.column_name || '''';
 16          l_decode_2 := l_decode_2 || ', ' || x.column_id || ', ''' || x.data_type || '''';
 17          l_decode_3 := l_decode_3 || ', ' || x.column_id || ', ' || x.fcname ;
 18      end loop;
 19      l_decode_1 := l_decode_1 || ') cname, ';
 20      l_decode_2 := l_decode_2 || ') dtype, ';
 21      l_decode_3 := l_decode_3 || ') value';
 22
 23      l_stmt := l_stmt || l_decode_1 || l_decode_2 || l_decode_3 || ' from ' || p_tname ||
 24      ', (select rownum r from user_tab_columns where table_name = :x )';
 25      open p_cursor for l_stmt using p_tname;
 26  end;
 27  /
 
Procedure created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> show errors
No errors.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table emp;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> variable x refcursor
ops$tkyte@ORA9IR2> exec all_distinct_values( 'EMP', :x );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> print x
 
CNAME    DTYPE    VALUE
-------- -------- ----------------------------------------
COMM     NUMBER   0
COMM     NUMBER   1400
COMM     NUMBER   300
COMM     NUMBER   500
COMM     NUMBER
DEPTNO   NUMBER   10
DEPTNO   NUMBER   20
......
 

Would you please explain this stored procedure?

Vipul, September 02, 2004 - 5:44 pm UTC

This is so cool - would you please explain this stored procedure line by line, so that those who are not familiar with some of these techniques will benefit?

I read close to 50+ queries and responses today - I have never seen this kind of technical resource anywhere else. Thanks Tom.

Vipul

Tom Kyte
September 03, 2004 - 9:35 am UTC

ops$tkyte@ORA9IR2> create or replace procedure prt ( p_str in varchar2 )
  2  is
  3     l_str   long := p_str;
  4  begin
  5     loop
  6        exit when l_str is null;
  7        dbms_output.put_line( substr( l_str, 1, 250 ) );
  8        l_str := substr( l_str, 251 );
  9     end loop;
 10  end;
 11  /
 
Procedure created.


if we add a prt call to the procedure, you see what it does:

ops$tkyte@ORA9IR2> exec all_distinct_values( 'EMP', :x );
select distinct decode( r, 1, 'EMPNO', 2, 'ENAME', 3, 'JOB', 4, 'MGR', 5,
'HIREDATE', 6, 'SAL', 7, 'COMM', 8, 'DEPTNO') cname, decode( r, 1, 'NUMBER', 2,
'VARCHAR2', 3, 'VARCHAR2', 4, 'NUMBER', 5, 'DATE', 6, 'NUMBER', 7, 'NUMBER', 8,
'NUMBER') dtype,
decode( r, 1, to_char(EMPNO ), 2, ENAME, 3, JOB, 4, to_char(MGR ), 5,
to_char(HIREDATE, 'yyyymmddhh24miss' ), 6, to_char(SAL ), 7, to_char(COMM ), 8,
to_char(DEPTNO )) value from EMP, (select rownum r from user_tab_columns where
table_name = :x )


it (the procedure) doesn't do much, the query does all of the work.

All we do is build a 3 column query.  We'll start at the end though -- the cartesian product.

Basically, we want to take each row in EMP and output it once for each column in that row.  (eg: emp has 8 columns -- the first row in emp should be output 8 times -- once for each column).  We use rownum R from user_tab_columns for that -- we KNOW user_tab_columns will have *at least* enough rows since it has a row/column.

Then, we output three columns:

column 1 -- is going to be the column name.  The first time we output row 1, we'll pump out "EMPNO", the second time we output row 1 -- ENAME and so on.

column 2 -- the datatype, I'm assuming the application would want this

column 3 -- the value -- taking care to convert everything to a "string" and taking extra care with dates. 
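The query the procedure generates boils down to a cartesian-product unpivot. Here is a hand-written miniature of it (sqlite3, with three hard-coded columns instead of the user_tab_columns lookup and CASE in place of decode):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table emp (empno integer, ename text, deptno integer);
    insert into emp values (7369,'SMITH',20), (7499,'ALLEN',30);
""")

# Cross join each emp row with r = 1..3 (one per column), then decode r
# into (column name, value-as-text) pairs -- the same shape the
# generated query builds dynamically for every column of the table.
rows = conn.execute("""
    select distinct
           case r when 1 then 'EMPNO'
                  when 2 then 'ENAME'
                  else 'DEPTNO' end as cname,
           case r when 1 then cast(empno as text)
                  when 2 then ename
                  else cast(deptno as text) end as val
      from emp
     cross join (select 1 as r union all select 2 union all select 3)
     order by cname, val
""").fetchall()

print(rows)
# [('DEPTNO', '20'), ('DEPTNO', '30'), ('EMPNO', '7369'),
#  ('EMPNO', '7499'), ('ENAME', 'ALLEN'), ('ENAME', 'SMITH')]
```

Every row is emitted once per column, and DISTINCT collapses repeated (column, value) pairs, yielding each column's distinct values in one scan.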

Is there a limit of records that the variable x can hold?

R, September 02, 2004 - 5:50 pm UTC

Tom:
Thanks a lot for the solution. This is exactly what I was looking for.

If the table has several million records, do you think the variable can hold that many records, or is there a limit?

R


Tom Kyte
September 03, 2004 - 9:36 am UTC

you will exceed your end users patience well before you exceed an Oracle result set (eg: it is just a query, a cursor, a result set -- it is not a "variable holding data in memory" really)

How do we get this data in a sorted order - desc or asc

Alex, September 02, 2004 - 7:03 pm UTC

Tom - How can I get the result set in sorted order - asc or desc?



Tom Kyte
September 03, 2004 - 9:37 am UTC

add an order by ?

order by clause

vipul, September 07, 2004 - 1:40 pm UTC

Tom - would you please point out where in the stored procedure I can put that "order by <column_name>" clause, so that the data will be in sorted order? Somehow I couldn't figure this out and I am getting a compilation error.

Tom Kyte
September 07, 2004 - 2:25 pm UTC

umm, it is just a "sql query"? order by's go on the end of it.

suggest you print it out (like I did), cut and paste it, add the order by (to get the syntax correct) and modify the code.

you probably just want to add:

order by 1, 3

to the end of the query to order by column name and within a column name by the values.

please take a look at it

vipul, September 07, 2004 - 2:54 pm UTC

Tom:
Here is what I have done, and it's still giving me an error when I execute the query. I have made the changes where we create the statement. Please help to resolve this issue.

What I am looking for is the value field sorted, grouped by column name.


create or replace procedure all_distinct_values( p_tname in
varchar2, p_cursor in out sys_refcursor )
as
l_stmt long := 'select distinct ';
l_decode_1 long := 'decode( r';
l_decode_2 long := 'decode( r';
l_decode_3 long := 'decode( r';
begin
for x in (select column_id, data_type, column_name,
decode( data_type, 'DATE', 'to_char(' || column_name
|| ', ''yyyymmddhh24miss'' )',
'NUMBER', 'to_char(' || column_name
|| ' )',
column_name) fcname
from user_tab_columns where table_name = p_tname)
loop
l_decode_1 := l_decode_1 || ', ' || x.column_id || ', ''' ||
x.column_name || '''';
l_decode_2 := l_decode_2 || ', ' || x.column_id || ', ''' ||
x.data_type || '''';
l_decode_3 := l_decode_3 || ', ' || x.column_id || ', ' || x.fcname
;
end loop;
l_decode_1 := l_decode_1 || ') cname, ';
l_decode_2 := l_decode_2 || ') dtype, ';
l_decode_3 := l_decode_3 || ') value';

l_stmt := l_stmt || l_decode_1 || l_decode_2 || l_decode_3 || ' from '
|| p_tname || ' order by 1, 3 ' ||
', (select rownum r from user_tab_columns where table_name = :x)';
open p_cursor for l_stmt using p_tname;
end;



Tom Kyte
September 07, 2004 - 3:00 pm UTC

order by's go AT THE END OF A QUERY.


you have


... from TABLE order by 1, 3 , ( select rownum r from user....)


please -- take a second to take a look at these things. If you were to have printed this out on your screen, it might have been forehead smacking "obvious"?




prt procedure and its use

keshav, September 07, 2004 - 3:00 pm UTC

How do you make a prt call to the procedure? Or how did you get the detailed view of the procedure that you wrote when you explained it?


Tom Kyte
September 07, 2004 - 3:01 pm UTC

the code to prt is given along with the "detailed view" above?

There is a strange behaviour with SAL column?

vipul, September 07, 2004 - 3:52 pm UTC

Tom - attached is the updated procedure. I tried it with the SCOTT.EMP table and the results are correct except for the last column, "SAL". Two values are out of order and the rest look good. These are the results I got for the SAL column - the first two rows of this result set are not sorted correctly.

CNAME DTYPE VALUE
-------- -------- ---------
SAL NUMBER 950
SAL NUMBER 800
SAL NUMBER 5000
SAL NUMBER 3000
SAL NUMBER 2975
SAL NUMBER 2850
SAL NUMBER 2450
SAL NUMBER 1600
SAL NUMBER 1500
SAL NUMBER 1300
SAL NUMBER 1250
SAL NUMBER 1100
Modified procedure - sorry for my mistakes in the prev query
create or replace procedure all_distinct_values( p_tname in
varchar2, p_cursor in out sys_refcursor )
as
l_stmt long := 'select distinct ';
l_decode_1 long := 'decode( r';
l_decode_2 long := 'decode( r';
l_decode_3 long := 'decode( r';
begin
for x in (select column_id, data_type, column_name,
decode( data_type, 'DATE', 'to_char(' || column_name
|| ', ''yyyymmddhh24miss'' )',
'NUMBER', 'to_char(' || column_name
|| ' )',
column_name) fcname
from user_tab_columns where table_name = p_tname)
loop
l_decode_1 := l_decode_1 || ', ' || x.column_id || ', ''' ||
x.column_name || '''';
l_decode_2 := l_decode_2 || ', ' || x.column_id || ', ''' ||
x.data_type || '''';
l_decode_3 := l_decode_3 || ', ' || x.column_id || ', ' || x.fcname
;
end loop;
l_decode_1 := l_decode_1 || ') cname, ';
l_decode_2 := l_decode_2 || ') dtype, ';
l_decode_3 := l_decode_3 || ') value';

l_stmt := l_stmt || l_decode_1 || l_decode_2 || l_decode_3 || ' from '
|| p_tname ||
', (select rownum r from user_tab_columns where table_name = :x ) order by 1,3 desc ';
open p_cursor for l_stmt using p_tname;
end;





Tom Kyte
September 07, 2004 - 3:59 pm UTC

because everything is sorted "as a string"

so, use:

', (select rownum r from user_tab_columns where table_name = :x )
order by 1,
decode( dtype, ''NUMBER'', null, value ),
decode( dtype, ''NUMBER'', to_number(value) )';
open p_cursor for l_stmt using p_tname;
end;
/


to order by cname and if a string/date (dates are sortable as encoded) then by that, else by to_number of the number...
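The string-vs-number sort issue can be seen in isolation. A minimal sketch (the inline data is invented for illustration):

```sql
-- strings sort character by character, so '5000' sorts before '800'
select val
  from ( select '950'  val from dual union all
         select '5000' val from dual union all
         select '800'  val from dual )
 order by val;
-- string order: 5000, 800, 950

-- converting to a number restores the expected ordering
select val
  from ( select '950'  val from dual union all
         select '5000' val from dual union all
         select '800'  val from dual )
 order by to_number(val);
-- numeric order: 800, 950, 5000
```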



where clause

karthick, September 08, 2004 - 2:11 am UTC

I have two tables, sl_master and br_master;
both have a field sub_code.

sl_master --> sub_code -- type is char(6)

br_master --> sub_code -- type is char(10)

I gave the join condition

"where sl_master.sub_code = br_master.sub_code"

but it is not equating properly.

But when I gave the condition as

"where trim(sl_master.sub_code) = trim(br_master.sub_code)"

it works fine.

The data in the sub_code field of both tables is "CLNT01".

Why is this so?


Tom Kyte
September 08, 2004 - 9:32 am UTC

oh, it is equating "properly"

but the facts of life are

'KING  ' <> 'KING      '
     ^^          ^^^^^^

i despise the char(n) type -- a char(n) is nothing more than a varchar2(n) with trailing blanks.

you should never use char.


In your case, you are going to want to trim OR rpad one or the other - but not both.


say you have

from S, B
where s.something = :x
and s.sub_code = b.sub_code


here, you probably use an index on something to find S rows and you want to then use the index on b(sub_code) to find the related B rows. so, you should apply the function to S.sub_code:


from S, B
where s.something = :x
and rpad( s.sub_code, 10 ) = b.sub_code


so the indexes on s(something) and b(sub_code) can be used.

conversely:


from S, B
where b.something = :x
and s.sub_code = b.sub_code


would lead me towards:



from S, B
where b.something = :x
and s.sub_code = trim(b.sub_code)

instead


if you have:


from S, B
where s.something = :x
and b.something_else = :y
and s.sub_code = b.sub_code

well, then YOU pick one or the other.


But, I would encourage you to change your char's to varchar2's if you are still in design mode.
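The blank-padding behaviour above is easy to demonstrate in isolation; a small sketch (table names invented, mirroring the char(6)/char(10) situation):

```sql
create table s_demo ( sub_code char(6) );
create table b_demo ( sub_code char(10) );

insert into s_demo values ( 'CLNT01' );  -- stored as 'CLNT01' (exactly 6 chars)
insert into b_demo values ( 'CLNT01' );  -- stored as 'CLNT01    ' (padded to 10)

-- no rows: 'CLNT01' <> 'CLNT01    '
select * from s_demo s, b_demo b where s.sub_code = b.sub_code;

-- one row: rpad makes the char(6) value comparable to the char(10) value,
-- and leaves an index on b_demo(sub_code) usable
select * from s_demo s, b_demo b where rpad( s.sub_code, 10 ) = b.sub_code;
```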




To karthick

Marco van der Linden, September 08, 2004 - 9:21 am UTC

Because of the datatypes..........

> sl_master --> sub_code -- type is char(6)
> br_master --> sub_code -- type is char(10)

.....the join condition

"where sl_master.sub_code = br_master.sub_code"

equates to 'CLNT01' = 'CLNT01    '
which is obviously not equal.
If the datatypes had been varchar2(n)

"where sl_master.sub_code = br_master.sub_code"

would return a result, UNLESS the value inserted into br_master.sub_code had been 'CLNT01    ' (with trailing blanks)

why so...

P.Karthick, September 08, 2004 - 11:06 am UTC

Is there any specific reason for that difference between char and varchar2?

Tom Kyte
September 08, 2004 - 1:00 pm UTC

because it is what makes a CHAR a CHAR -- it is a fixed length character string padded out with blanks (the standard bodies said so)...



Strange Result Showing up while executing a Query

Sujit, September 13, 2004 - 10:35 am UTC

Dear Tom,
     I am executing the query "select * from scott.emp where (1 = 0 or empno = 7369) and 1 = 0;" in "Oracle9i Enterprise Edition Release 9.2.0.1.0". But it's fetching one row, whereas it should fetch none, as "1 = 0" should always evaluate to FALSE.

SQL> select * from scott.emp where (1 = 0 or empno = 7369) and 1 = 0;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20

SQL> 


Kindly advise...

Regards...
Sujit 

Tom Kyte
September 13, 2004 - 1:09 pm UTC

Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production

scott@ORA9IR2> select * from scott.emp where (1 = 0 or empno = 7369) and 1 = 0;

no rows selected

scott@ORA9IR2> alter session set optimizer_goal=first_rows;

Session altered.

scott@ORA9IR2> select * from scott.emp where (1 = 0 or empno = 7369) and 1 = 0;

no rows selected


I cannot reproduce.... please contact support for this one.

faster way to select distinct records?

celia Wang, September 13, 2004 - 6:39 pm UTC

Table t1(c1 NUMBER)
C1
1
2

Table t2 (c1, number, c2 char(1), c3 varchar2(5))

C1 C2 C3
1 Y M1
1 Y M2
1 N M3
2 Y A1
2 N A2

The relationship t1 and t2 is one to many through t1.c1 = t2.c1

The purpose is to query all rows in table t1 when t2.c2 = 'Y'

Select DISTINCT t1.c1
From t1, t2
Where t1.c1 = t2.c1
And t2.c2 = 'Y'

If I have millions of rows in table t1, this query runs very slow. So "distinct" is very expensive, performance-wise.

Select t1.c1
From t1,
(select distinct c1, c2 from t2
where c2 = 'Y') a
where t1.c1 = a.c1

By using an inline view, the elapsed time decreased by 50%.

Do you have any better way to accomplish the same result?

Thanks a lot.


Tom Kyte
September 13, 2004 - 9:08 pm UTC

it would have been natural to use "in"

select c1 from t1 where c1 in ( select c1 from t2 where c2 = 'Y' );
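An equivalent way to phrase the same semi-join is with EXISTS; the optimizer generally treats the two forms the same, so this is a readability choice more than a performance one:

```sql
-- semi-join: each t1 row is returned at most once,
-- no DISTINCT needed to undo join duplication
select c1
  from t1
 where exists ( select null
                  from t2
                 where t2.c1 = t1.c1
                   and t2.c2 = 'Y' );
```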



To Sujit

A reader, September 13, 2004 - 9:22 pm UTC

You used 92010, Tom used 92015. Perhaps this is a bug
fixed in a patchset?

Strange Cases in Oracle 9i (9.2.0.1.0)

Sujit, September 14, 2004 - 5:00 am UTC

Yes, I also think it's some kind of bug with the 9.2.0.1.0 version.



SQL*Plus: Release 9.0.1.0.1 - Production on Tue Sep 14 14:09:24 2004

(c) Copyright 2001 Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> set lin 2000
SQL> select * from scott.emp where (1 = 0 or empno = 7369) and 1 = 0;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20

SQL> 


    Anyways thanks TOM for your reply.

Regards...
Sujit
 

Doubts / Error in Group By

Udanks, September 20, 2004 - 9:42 am UTC

Hi Tom,


I have a table supplier which has supplier ID and Supplier Name. I have another table Order_formats which has Order_ID - which is unique for every order placed and Order_XML which holds all the information regarding the order placed. When I run the following query 

SELECT T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal(),
--s.supplier_name,
--formats.order_id,
COUNT(DISTINCT(formats.order_id)) OrderCount,
COUNT(formats.order_id) ItemCount,
--T.extract('/ItemOut/Components/ExtendedPriceFloat/text()').getNumberVal() ItemPrice,
SUM(T.extract('/ItemOut/Components/ExtendedPriceFloat/text()').getNumberVal()) ItemTotal
FROM ORDER_FORMATS1 formats,
TABLE(XMLSEQUENCE(EXTRACT(order_xml,'//ItemOut'))) T,
SUPPLIERS s
WHERE T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal()
= s.supplier_id (+)
GROUP BY T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal()
ORDER BY
T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal()

which gives the correct answer:

SUP_ID
-----------------------------------------
ORDERCOUNT  ITEMCOUNT  ITEMTOTAL
---------- ---------- ----------
000072
         2          2        1.1

054054
         1          1        200

30000072
         4          4      73.15

30054054
         1          1      -77.5

30068737
         1          1       89.9

30361119
         1          1      65.25

But when I try to add supplier_name, order_id,

  1  SELECT T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal() Sup_ID,
  2  s.supplier_name,
  3  formats.order_id,
  4  COUNT(DISTINCT(formats.order_id)) OrderCount,
  5  COUNT(formats.order_id) ItemCount,
  6  T.extract('/ItemOut/Components/ExtendedPriceFloat/text()').getNumberVal() ItemPrice,
  7  SUM(T.extract('/ItemOut/Components/ExtendedPriceFloat/text()').getNumberVal()) ItemTotal
  8  FROM ORDER_FORMATS1 formats,
  9  TABLE(XMLSEQUENCE(EXTRACT(order_xml,'//ItemOut'))) T,
 10  SUPPLIERS s
 11  WHERE T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal()
 12  = s.supplier_id (+)
 13  GROUP BY T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal()
 14  ORDER BY
 15* T.extract('/ItemOut/ItemID/SupplierPartID/text()').getStringVal()
SQL> /
s.supplier_name,
*
ERROR at line 2:
ORA-00979: not a GROUP BY expression


Where am I going wrong? Please let me know.

Tom Kyte
September 20, 2004 - 10:41 am UTC

seems self-explanatory?

It is still such a bummer you've taken the world's most structured problem ever -- Orders and Line Items -- and made it "really cool" via XML.

look at your query, you have the equivalent of:


select a, b, c
from t
group by a

what about B? what about C? any column that is not aggregated would have to be grouped by.
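Following that sketch, the two ways out are to group by every non-aggregated column, or to aggregate the extras (t, a, b, c are the hypothetical names from the sketch above):

```sql
-- either group by everything you select...
select a, b, c, count(*)
  from t
 group by a, b, c;

-- ...or aggregate the columns you do not group by
select a, max(b) b, max(c) c, count(*)
  from t
 group by a;
```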

sql joins

Ishaque Hussain, September 22, 2004 - 12:30 am UTC

I have heard that doing what I call a "loop join" is not good practice in SQL. What I am referring to is having three tables x, y, z: x is joined to y, y is joined to z, and x is joined to z. I know that the design should change, but in this case I can't change it. However, in terms of data I need to join one of the columns in table x with one of the columns in z. Is it a bad idea to have this type of "loop join"?
I have done tests on my personal Oracle and the result sets I am getting using this so-called "loop join" are correct. I just wanted to get your opinion on this one.



SELECT DISTINCT
HDR.HDR_NUM,
ITEM.INVC_NO,
ITEM.INVC_DT,
ITEM.PROD_CD,
HDR.DM_NO,
TRCG.DM_NO,
DISTRNO
FROM HDR, --this is x
ITEM, --this is y
TRCG --this is z
WHERE HDR.HDR_NUM = ITEM.HDR_NUM
AND ITEM.INVC_NO = TRCG.INVC_NO
AND ITEM.INVC_DT = TRCG.INVC_DT
AND ITEM.PROD_CD = TRCG.PROD_CD
AND HDR.DM_NO = TRCG.DM_NO


Thanks,

Ishaque

Tom Kyte
September 22, 2004 - 7:45 am UTC

umm, you heard wrong. where have you heard that normalization (which is what you've basically done here) is "wrong"?

You should use the CBO for sure, it'll be brutally efficient and full scan the three tables, hash joining them.

question for you -- are you sure that distinct is necessary -- not possible to tell from where I sit, but are you sure you need it?

sql query

Ishaque Hussain, September 22, 2004 - 2:35 pm UTC

I have actually run the query and it is efficient (I have parameters which I didn't include here), and you are correct that I don't need the distinct. I like normalization and I like to use it as well. My concern was having what I call the "loop" (I made this terminology up so I could have a name for what I was trying to describe). "Normally" in a situation like this, x would be joined to y, and y would be joined to z. However, in this situation I am joining z back to x because of that dm_no column, which "closes the circuit". I didn't see anything wrong with this. I once wrote a report where I didn't do the hdr to trcg (x to z) join because somebody mentioned it wasn't a good idea. Lesson learned: don't always listen to others, try it for yourself first.


Thanks,

Ishaque





How to write this query

A reader, September 24, 2004 - 10:52 am UTC

CREATE TABLE DTAB( DKEY NUMBER(10))
/

CREATE TABLE IDTAB(DKEY NUMBER(10),
AAFLAG VARCHAR2(1))
/



INSERT INTO DTAB VALUES(1)
/
INSERT INTO DTAB VALUES(2)
/
INSERT INTO DTAB VALUES(3)
/
INSERT INTO DTAB VALUES(4)
/
INSERT INTO DTAB VALUES(5)
/


INSERT INTO IDTAB VALUES(1,'Y')
/
INSERT INTO IDTAB VALUES(2, 'Y')
/
INSERT INTO IDTAB VALUES(2,'N')
/
INSERT INTO IDTAB VALUES(3, 'Y')
/
INSERT INTO IDTAB VALUES(3,'N')
/
INSERT INTO IDTAB VALUES(4,'N')
/


My requirement is to

1. select those dkeys in the idtab table where, for a given dkey, the aaflag is 'Y' or both 'Y' and 'N'
2. do not select those dkeys where aaflag is just 'N' for a given dkey
3. select all those dkeys which are in the dtab table and are not in the idtab table.

Please help

Tom Kyte
September 24, 2004 - 12:05 pm UTC

select distinct dkey from idtab where aaflag = 'Y'

gets 1 & 2 ok.


3) seems to be another query?? for we are back to the full idtab again? that would be

select * from dtab where dkey not in ( select dkey from idtab );

A reader, September 24, 2004 - 12:20 pm UTC

Well, Tom how can you combine the above two ?

Tom Kyte
September 24, 2004 - 12:34 pm UTC

i have no idea, they are asking two different questions.
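If the goal really were a single result set covering both answers, one mechanical way is to union the two queries and tag each row with which rule produced it - a sketch only, since (as noted) they answer different questions:

```sql
-- dkeys with at least one 'Y' flag, plus dkeys missing from idtab entirely
select 'has Y flag' src, dkey
  from ( select distinct dkey from idtab where aaflag = 'Y' )
union all
select 'not in idtab' src, dkey
  from dtab
 where dkey not in ( select dkey from idtab );
```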

Merging 2 Queries!

A reader, October 12, 2004 - 11:50 pm UTC

Hi Tom,

Welcome back!

I have the following queries:

SELECT Z.location_nme,A.Candidate_Name, A.IVR
FROM IVR as A , gctlocation as Z
WHERE A.gctlocation_srl = Z.srl;

location_nme | candidate_name | ivr
--------------+----------------+-----
NT | Bob Benger | 1
NS | Bob Benger | 1
BC | Bob Benger | 1
NS | Bob Eisen | 1
ON | Bob Langer | 1
NT | Bob McNamara | 1
(6 rows)


SELECT Z.location_nme,B.Candidate_Name, B.WEB
FROM web as B , gctlocation as Z
WHERE B.gctlocation_srl = Z.srl;
location_nme | candidate_name | web
--------------+----------------+-----
NB | Bob Redmond | 2
NS | Bob Benger | 2
ON | Bob Langer | 4
YK | Bob Burke | 1
NT | Bob Cobban | 1
(5 rows)

I want to merge them and have a result set as follow:

location_nme | candidate_name | ivr |web
--------------+----------------+-----+-----
NT | Bob Benger | 1 |
NS | Bob Benger | 1 |2
BC | Bob Benger | 1 |
NS | Bob Eisen | 1 |
ON | Bob Langer | 1 |4
NT | Bob McNamara | 1 |
NB | Bob Redmond | |2
YK | Bob Burke | 1
NT | Bob Cobban | 1
..............
.............
The order doesn't matter. Just showing one line for the same 'location_nme' and 'candidate_nme' is important (e.g. NS, Bob Benger).

How can I do this? Could you please help me on this?

Thank you so much for your help.
- Arash

P.S. I tried to use in-line views or an outer join, but every time I missed some records!


Tom Kyte
October 13, 2004 - 8:08 am UTC



select location_nme, candidate_name, max(ivr), max(web)
from ( SELECT Z.location_nme,A.Candidate_Name, A.IVR, to_number(null) web
FROM IVR as A , gctlocation as Z
WHERE A.gctlocation_srl = Z.srl
union all
SELECT Z.location_nme,B.Candidate_Name, to_number(null), B.WEB
FROM web as B , gctlocation as Z
WHERE B.gctlocation_srl = Z.srl )
group by location_nme, candidate_name;


Thank you so much! You are great!

A reader, October 13, 2004 - 10:10 am UTC


How can I have max(ivr)+max(web) as Total?

A reader, October 13, 2004 - 12:18 pm UTC

Hi Tom,

I don't know why this doesn't work:

select location_nme, candidate_name, max(ivr), max(web), max(web)+max(ivr) as total
from ( SELECT Z.location_nme,A.Candidate_Name, A.IVR, to_number(null) web
FROM IVR as A , gctlocation as Z
WHERE A.gctlocation_srl = Z.srl
union all
SELECT Z.location_nme,B.Candidate_Name, to_number(null), B.WEB
FROM web as B , gctlocation as Z
WHERE B.gctlocation_srl = Z.srl )
group by location_nme, candidate_name;


It may be because of the NULL values in each subquery.

Please let me know how I can do this?

Thank you again,
- Arash


A reader, October 13, 2004 - 1:32 pm UTC

Tom,

I used NVL and it fixed the problem.

Thanks
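Presumably the NVL fix looked something like this - a sketch of the total column only, applied outside the aggregates (since max ignores NULLs but returns NULL when a candidate has no rows on one side):

```sql
select location_nme, candidate_name,
       max(ivr) ivr, max(web) web,
       nvl( max(ivr), 0 ) + nvl( max(web), 0 ) as total
  from ( SELECT Z.location_nme, A.Candidate_Name, A.IVR, to_number(null) web
           FROM IVR A, gctlocation Z
          WHERE A.gctlocation_srl = Z.srl
         union all
         SELECT Z.location_nme, B.Candidate_Name, to_number(null), B.WEB
           FROM web B, gctlocation Z
          WHERE B.gctlocation_srl = Z.srl )
 group by location_nme, candidate_name;
```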


Where would we be without Tom?

Ben Ballard, October 20, 2004 - 10:55 pm UTC

Tom,

Here I am alone in the office at 10:30pm, missing the ALCS Game 7, trying to correct some truly horrible code and save a project. I'm working day and night so that maybe I won't have to be away from my wife again next week. Once again, I've found the answer to my problem here at asktom. It is in these moments, in the clutch, that your site is most indispensible and that you are most appreciated. Thank you.

character mode reports on windows xp

siddartha m.s, October 28, 2004 - 8:24 am UTC

hello sir
I need to know how to print character mode reports on Windows XP. Please send the solution as soon as possible.
thanking you
siddartha




Passing value to a procedure.

karthick, December 04, 2004 - 2:31 am UTC

Hi Tom,

I have a front-end form that downloads data from an Excel file. The data in the file has a number of client codes, and I have an Oracle stored procedure that does some processing with those client codes. Each client code is 15 characters. At every download I get a minimum of 500 client codes.

Is there any way I can send all the client codes to that procedure as an argument in one shot? I don't want to call the procedure in a loop, as we are using a three-tier concept.

I tried to pack the client codes into a string and send it as one argument, but most of the time it exceeds the size of varchar2 and I get an error.

Can you please help me with this?

Thank you

karthick.

Tom Kyte
December 04, 2004 - 10:59 am UTC

well, you can send 2047 of them in a single string (a varchar2 is 32k in plsql)

but you want to read the docs for whatever language you are using. it would sort of depend on that don't you think. I mean, the solution I would use for java would be slightly different than VB.

(and unless it is an open language, one that runs on more than a single OS, you'll have to ask around, i only use things that work on linux/unix in addition to that os that shall not be named)

passing arguement to procedure

karthick, December 05, 2004 - 11:42 pm UTC

ok, let me put it this way:

the front-end language can store x characters in a variable. I need y such variables; this y is unknown and determined only at run time. So is it possible to pass those y variables to my stored procedure as arguments? y can be 1, 2, 3...

Tom Kyte
December 06, 2004 - 11:23 am UTC


Ok, let me put it this way:

YES

there are collections, there are plsql table types. how you use them depends on the client language you are programming in (as stated)
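On the database side, one sketch of such a collection-taking procedure (the type and procedure names here are invented; how you bind the collection from the client depends on the client language, e.g. JDBC or OCI array binding):

```sql
-- a SQL-level collection type, visible to client APIs
create or replace type client_code_list as table of varchar2(15);
/

create or replace procedure process_client_codes( p_codes in client_code_list )
as
begin
    -- one call from the client, however many codes arrived
    for i in 1 .. p_codes.count
    loop
        -- do whatever per-code processing is needed here
        null;
    end loop;
end;
/
```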

Order by clause

Laxman Kondal, December 22, 2004 - 4:21 pm UTC

Hi Tom

How can I use order by in a procedure that returns a ref cursor?
This one has no effect. I also tried a string as the column name.

scott@ORA9I> CREATE OR REPLACE PROCEDURE P(p_list NUMBER, p_rc OUT sys_refcursor)
2 AS
3 BEGIN
4 OPEN p_rc FOR SELECT EMPNO, ENAME, JOB, MGR FROM Emp ORDER BY p_list;
5 END;
6 /

Procedure created.

scott@ORA9I> var cur refcursor
scott@ORA9I> set autoprint on
scott@ORA9I> exec p(2, :cur)

PL/SQL procedure successfully completed.


EMPNO ENAME JOB MGR
---------- ---------- --------- ----------
7369 SMITH CLERK 7902
7499 ALLEN SALESMAN 7698
7521 WARD SALESMAN 7698
7566 JONES MANAGER 7839
7654 MARTIN SALESMAN 7698
7698 BLAKE MANAGER 7839
7782 CLARK MANAGER 7839
7788 SCOTT ANALYST 7566
7839 KING PRESIDENT
7844 TURNER SALESMAN 7698
7876 ADAMS CLERK 7788
7900 JAMES CLERK 7698
7902 FORD ANALYST 7566
7934 MILLER CLERK 7782

14 rows selected.


scott@ORA9I>

Is there any way this proc can accept the order-by column as an IN parameter and do the order by?

Thanks and regards.

Tom Kyte
December 22, 2004 - 6:54 pm UTC

it absolutely has an effect, it is just like:

select empno, ename, job, mgr from emp order by '2';


you could do this:

select empno, ename, job, mgr from emp
order by decode(p_list,1,empno),
decode(p_list,2,ename),
decode(p_list,3,job),
decode(p_list,4,mgr);




Order by clause

Laxman Kondal, December 22, 2004 - 4:43 pm UTC

Hi Tom

If I use:

OPEN p_rc FOR SELECT empno, ename, job FROM Emp ORDER BY DECODE(p_list, 2, ename, 3, job);

then it works:

scott@ORA9I> exec p(2, :rc)

PL/SQL procedure successfully completed.


EMPNO ENAME JOB
---------- ---------- ---------
7876 ADAMS CLERK
7499 ALLEN SALESMAN
7698 BLAKE MANAGER
7782 CLARK MANAGER
7902 FORD ANALYST
7900 JAMES CLERK
7566 JONES MANAGER
7839 KING PRESIDENT
7654 MARTIN SALESMAN
7934 MILLER CLERK
7788 SCOTT ANALYST
7369 SMITH CLERK
7844 TURNER SALESMAN
7521 WARD SALESMAN

14 rows selected.

scott@ORA9I> exec p(3, :rc)

PL/SQL procedure successfully completed.


EMPNO ENAME JOB
---------- ---------- ---------
7788 SCOTT ANALYST
7902 FORD ANALYST
7369 SMITH CLERK
7876 ADAMS CLERK
7934 MILLER CLERK
7900 JAMES CLERK
7566 JONES MANAGER
7782 CLARK MANAGER
7698 BLAKE MANAGER
7839 KING PRESIDENT
7499 ALLEN SALESMAN
7654 MARTIN SALESMAN
7844 TURNER SALESMAN
7521 WARD SALESMAN

14 rows selected.

scott@ORA9I>

Is there any better way, and can it take more than one order-by column in one parameter?

Thanks for help.
Regards


Tom Kyte
December 22, 2004 - 6:56 pm UTC

see above.

Order by Value || Asc / Desc dynamically

Vaishnavi, December 23, 2004 - 8:27 am UTC

Hi Tom,

How to get data Asc or Desc dynamically? (Order by column is fixed)

I tried this query:

select * from t order by qty || (select decode(:x, 'A', 'Asc', 'Desc') from dual);

But it's not giving the data in the desired order, nor throwing any error.

Can I do this?

Sincerely
Vaishnavi

Tom Kyte
December 23, 2004 - 11:28 am UTC

why would it throw an error, not any different than:


select * from t order by qty || 'asc';


that is perfectly valid.


select * from t order by
case when :x = 'A' then qty end ASC,
case when :x <> 'A' then qty end DESC;

just like above for multiple columns........ variation on a theme
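Combining the two ideas - a dynamic column pick and a dynamic direction - is the same "one CASE per choice" pattern. A sketch against EMP, with :col (1 or 2) and :dir ('A' or anything else) as assumed bind variables:

```sql
-- at most one of the four case expressions is non-null for a given
-- (:col, :dir) pair; the others contribute nothing to the sort
select empno, ename
  from emp
 order by case when :col = 1 and :dir =  'A' then empno end asc,
          case when :col = 1 and :dir <> 'A' then empno end desc,
          case when :col = 2 and :dir =  'A' then ename end asc,
          case when :col = 2 and :dir <> 'A' then ename end desc;
```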

I want to make oracle take same amount of time, while running same sql again

Ashutosh, December 31, 2004 - 3:36 am UTC

I am facing a problem while testing my code on the development database. After some code enhancement, I want to compare the run time with the earlier run, but it is unmatchable because of the SGA: running the same SQL a 2nd or 3rd time takes very little time. So I am unable to perform real testing. How can I do that?

Tom Kyte
December 31, 2004 - 11:05 am UTC

guess what -- you are in fact performing real testing!!!!!

in the "real world (tm)" would your cache ever be empty?
in the "real world (tm)" would you start from scratch each time?

what you need to do is vary the inputs into your query and model the real world.


(in the real world (tm), would you have a single user as you probably do now!!)

(and please -- no one tell how to flush the buffer cache, not in any release -- what a waste of time if you are using a buffered file system.......... an utter and complete waste of time)

non moving items report

Dajani, February 02, 2005 - 5:34 am UTC

Hi Tom
I am currently developing a report in Crystal connected to a Maximo database.
This report is supposed to generate all those items that were neither issued nor received - in other words, items with no "transactions".

The problem is that I do have "last issue date" in my inventory master table. So I can say if isnull(inventory.lastissuedate), but I cannot follow that with and isnull(inventory.lastreceiveddate) then "non moving", because the last received date is not available in the inventory master table.

How can I write an SQL statement or formula to get the "last received date" from the inventory transactions table?

no des bal cos tot lstissudte lstrecdte moving status

1 Baring 100 100 1,0000 ------- ----- non moving

Like I said, the last issue date will show blank, but the last received date is not available in the master table. If I can get the last received date to show NULL, then the item is "a non-moving item".



Thanks




Tom Kyte
February 02, 2005 - 5:37 am UTC

"if isnull"? what is that, it isn't sql.

Efficient way to get counts

Thiru, February 02, 2005 - 3:15 pm UTC

Hi Tom,

What is an efficient way to get the counts from a table (around 10 million rows) based on varied conditions?

The actual table has around 50 columns.
drop table temp_del;
create table temp_del (c1 varchar2(3),c2 varchar2(3),c3 varchar2(3),c4 varchar2(3),flag number);
insert into temp_del values('abc','bcd','cde','def',0);
insert into temp_del values('abc','bcd','cde','def',1);
insert into temp_del values('abc','bcd','cde','def',2);

insert into temp_del values('bcd','cde','def','efg',0);
insert into temp_del values('bcd','cde','def','efg',1);
insert into temp_del values('bcd','cde','def','efg',2);

insert into temp_del values('cde','def','efg','fgh',0);
insert into temp_del values('cde','def','efg','fgh',1);
insert into temp_del values('cde','def','efg','fgh',2);
commit;

select count(*) from temp_del where c1='abc' and c2='bcd' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and flag=1;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and flag=1;
select count(*) from temp_del where c1='bcd' and c2='cde' and c3='def' and flag=2;
and so many other combinations similar to this..

Is there a way the table can be accessed once and get the varied counts like above?

Thanks a million.

Tom Kyte
February 03, 2005 - 1:35 am UTC

need more info, is this table modified during the day or is this a warehouse.

is this example cut way down and there are 500 columns
or is this it.

is c1, c2, flag always involved,
or was that just a side effect of your example..

before anyone can suggest the "best way", details about how the data is used need to be known.
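That said, if the combinations are known up front, conditional aggregation gets many counts in one scan of the table rather than one scan per combination - a sketch against the temp_del example above:

```sql
-- COUNT skips NULLs, so each CASE counts only the rows matching its condition;
-- the table is read once for all three counts
select count(case when c1 = 'abc' and c2 = 'bcd' and flag = 0
             then 1 end) cnt_abc_bcd_0,
       count(case when c1 = 'abc' and c2 = 'bcd' and flag = 1
             then 1 end) cnt_abc_bcd_1,
       count(case when c1 = 'bcd' and c2 = 'cde' and c3 = 'def' and flag = 2
             then 1 end) cnt_bcd_cde_def_2
  from temp_del;
```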

How to write this query - is this possible at all

Chenna, February 03, 2005 - 8:12 pm UTC



create table test5(
begdate date,
enddate date,
location varchar2(10),
status varchar2(10),
price varchar2(50) )
/

insert into test5 values('05-jan-05' , '31-jan-05' ,   'Expedite', 40000,   'A price of $6.89 flat rate')
/
insert into test5 values('01-feb-05' , '28-feb-05',     'Expedite', 1000 ,   'A price of $7.87 flat rate' )
/
insert into test5 values('05-mar-05' , '30-mar-05' ,   'Expedite', 40000,   'A price of $6.89 flat rate')
/

insert into test5 values ('01-dec-04', '31-dec-04', 'expedite', 40000, 'A price of $6.89 flat rate')
/
SQL> SELECT * FROM TEST5;

BEGDATE   ENDDATE   LOCATION   STATUS     PRICE
--------- --------- ---------- ---------- --------------------------------------------------
05-JAN-05 31-JAN-05 EXPEDITE   40000      A price of $6.89 flat rate
01-FEB-05 28-FEB-05 EXPEDITE   1000       A price of $7.87 flat rate
05-MAR-05 30-MAR-05 EXPEDITE   40000      A price of $6.89 flat rate
01-DEC-04 31-DEC-04 EXPEDITE   40000      A price of $6.89 flat rate

SQL> select min(begdate), max(enddate) , location, status, price
  2  from test5
  3  group by LOCATION ,STATUS,PRICE
  4  /

MIN(BEGDA MAX(ENDDA LOCATION   STATUS     PRICE
--------- --------- ---------- ---------- --------------------------------------------------
01-FEB-05 28-FEB-05 EXPEDITE   1000       A price of $7.87 flat rate
01-DEC-04 30-MAR-05 EXPEDITE   40000      A price of $6.89 flat rate

The "01-DEC-04 30-MAR-05 EXPEDITE 40000 A price of $6.89 flat rate" row of output is conveying the wrong message: it is as if February is also at 'A price of $6.89 flat rate', which is not the case.

The output I'm looking for is

01-DEC-04 31-jan-05 EXPEDITE   40000      A price of $6.89 flat rate
01-FEB-05 28-FEB-05 EXPEDITE   1000       A price of $7.87 flat rate
05-MAR-05 30-MAR-05 EXPEDITE   40000      A price of $6.89 flat rate


Please help me write this query. Is this possible at all?
 

Tom Kyte
February 04, 2005 - 1:50 am UTC

same concept as outlined here:

https://www.oracle.com/technetwork/issue-archive/2014/14-mar/o24asktom-2147206.html

you might have to tweak for nulls if location/status/price is NULLABLE.


ops$tkyte@ORA9IR2> select min(begdate), max(enddate), location, status, price
  2    from (
  3  select begdate, enddate,
  4         location, status, price,
  5         max(grp) over (order by begdate) grp2
  6    from (
  7  select t.*,
  8         case when lag(location) over (order by begdate) <> location
  9                or lag(status) over(order by begdate) <> status
 10                or lag(price) over (order by begdate) <> price
 11                or row_number() over (order by begdate) = 1
 12              then row_number() over (order by begdate)
 13          end grp
 14    from t
 15         )
 16         )
 17   group by location, status, price, grp2
 18   order by 1;
 
MIN(BEGDA MAX(ENDDA LOCATION   STATUS     PRICE
--------- --------- ---------- ---------- ---------------------------
01-DEC-04 31-JAN-05 Expedite   40000      A price of $6.89 flat rate
01-FEB-05 28-FEB-05 Expedite   1000       A price of $7.87 flat rate
05-MAR-05 30-MAR-05 Expedite   40000      A price of $6.89 flat rate
 

showing parent -child relationship in a row joining with second table!

A reader, February 04, 2005 - 2:56 pm UTC

Hi Tom,

Sorry for the confusion I caused. This time I tried to make my question more understandable and readable, and I hope that I posted it to the right thread.



CREATE TABLE DW_TBL_DIM_PRODUCT
(
SRL NUMBER(10) NOT NULL,
BRAND_DESC VARCHAR2(50)
)
/

Insert into DW_TBL_DIM_PRODUCT values (1, 'prd1');
Insert into DW_TBL_DIM_PRODUCT values (2, 'prd2');

CREATE TABLE DW_TBL_DIM_MARKET
(
SRL NUMBER(10) NOT NULL,
MARKET_DESC VARCHAR2(50),
MARKET_PAR_SRL VARCHAR2(30),
MARKET_PAR_DESC VARCHAR2(50),
MARKET_CLASS NUMBER(1),
MARKET_LEVEL NUMBER(1)
);




Insert into DW_TBL_DIM_MARKET values (1, 'PPPAB', null, null, 1, 0);
Insert into DW_TBL_DIM_MARKET values (6, 'PPA', 1, 'PPPAB', 1, 1);
Insert into DW_TBL_DIM_MARKET values (7, 'PPB', 1, 'PPPAB', 1, 1);
Insert into DW_TBL_DIM_MARKET values (12, 'PA', 6, 'PPA', 1, 2);
Insert into DW_TBL_DIM_MARKET values (13, 'PB', 7, 'PPB', 1, 2);
Insert into DW_TBL_DIM_MARKET values (20, 'A', 12, 'PA', 1, 3);
Insert into DW_TBL_DIM_MARKET values (21, 'B', 13, 'PB', 1, 3);


Insert into DW_TBL_DIM_MARKET values (2, 'PPEF', null, null, 2, 0);
Insert into DW_TBL_DIM_MARKET values (8, 'PEF', 2, 'PPEF', 1, 1);
Insert into DW_TBL_DIM_MARKET values (14, 'E', 8, 'PEF', 1, 2);
Insert into DW_TBL_DIM_MARKET values (15, 'F', 8, 'PEF', 1, 2);


CREATE TABLE DW_TBL_PRODUCT_MARKET
(
FK_DIM_PRODUCT NUMBER(10) NOT NULL,
FK_DIM_MARKET NUMBER(10) NOT NULL
);

Insert into DW_TBL_PRODUCT_MARKET values (1,20);
Insert into DW_TBL_PRODUCT_MARKET values (2,21);

SELECT
LPAD(' ',10*(LEVEL-1)) || market_desc market_tree
FROM dw_tbl_dim_market
START WITH market_desc = 'PPPAB'
CONNECT BY PRIOR srl = market_par_srl;

MARKET_TREE
----------------------------------------------
PPPAB
PPA
PA
A
PPB
PB
B
7 rows selected.

*Now my question is how I can show the result in two rows rather than 7, including the products, like this:


1 PPPAB PPA PA A prd1
1 PPPAB PPB PB B prd2


Many thanks for your time,
Elahe


A reader, February 07, 2005 - 10:50 am UTC

Hi Sir,

Just have a question regarding previous posted question on Feb. 4th.

Is it really doable by just using SQL? Or have to use reporting tools?

Thanks



Tom Kyte
February 07, 2005 - 11:14 am UTC

ops$tkyte@ORA9IR2> SELECT
  2      sys_connect_by_path(market_desc,' ' )
  3  FROM dw_tbl_dim_market     y
  4  where not exists ( select * from dw_tbl_dim_market x where x.market_par_srl = y.srl)
  5  START WITH market_desc = 'PPPAB'
  6  CONNECT BY PRIOR srl = market_par_srl
  7  /
 
SYS_CONNECT_BY_PATH(MARKET_DESC,'')
-------------------------------------------------------------------------------
 PPPAB PPA PA A
 PPPAB PPB PB B


start with that, not really knowing how the other stuff comes "along"....


also, in 10g, the where becomes "where connect_by_isleaf = 1" instead of a where exists. 
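The same leaf-path idea can be expressed with an ANSI-style recursive CTE, which is what other databases use in place of CONNECT BY / SYS_CONNECT_BY_PATH. A minimal sketch from Python against SQLite, with data mirroring the DW_TBL_DIM_MARKET rows above:

```python
import sqlite3

# Recursive-CTE equivalent of CONNECT BY + SYS_CONNECT_BY_PATH:
# walk down from the root, concatenating the path, then keep only
# rows that are not a parent of anything (the leaves).
con = sqlite3.connect(":memory:")
con.executescript("""
create table m (srl int, market_desc text, market_par_srl int);
insert into m values (1,'PPPAB',null),(6,'PPA',1),(7,'PPB',1),
                     (12,'PA',6),(13,'PB',7),(20,'A',12),(21,'B',13);
""")
paths = con.execute("""
with recursive tree(srl, path) as (
  select srl, market_desc from m where market_desc = 'PPPAB'
  union all
  select m.srl, tree.path || ' ' || m.market_desc
    from m join tree on m.market_par_srl = tree.srl
)
select path from tree t
 where not exists (select 1 from m where m.market_par_srl = t.srl)
 order by path
""").fetchall()
for (p,) in paths:
    print(p)
```

Joining each leaf path back to the product table (as in the follow-up question) is then an ordinary join on the leaf's srl.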

equi join in a query

Vapan, February 08, 2005 - 12:23 pm UTC

Tom,

I wonder if you could find some time for this SQL query; the results required are below:

create table TBL1(ty VARCHAR2(2),ID VARCHAR2(2),MID VARCHAR2(3),REM NUMBER);


insert into TBL1 values('S','C','ABC',1);
insert into TBL1 values('B','C','ABC',1);
insert into TBL1 values('S','L','BCD',1);
insert into TBL1 values('B','C','BCD',1);
insert into TBL1 values('S','D','CDE',1);
insert into TBL1 values('B','C','CDE',1);
insert into TBL1 values('S','C','DEF',1);
insert into TBL1 values('B','C','DEF',1);
insert into TBL1 values('S','L','EFG',1);
insert into TBL1 values('B','C','EFG',1);


SQL> SELECT * FROM TBL1;

TY ID MID        REM
-- -- --- ----------
S  C  ABC          1
B  C  ABC          1
S  L  BCD          1
B  C  BCD          1
S  D  CDE          1
B  C  CDE          1
S  C  DEF          1
B  C  DEF          1
S  L  EFG          1
B  C  EFG          1

RESULT REQUIRED WITH THE FOLLOWING CONDITIONS:

a) The result should not include ID='C' if its TY='B' and its contra ID='D'.
b) The result should also not include rows where ID='C' and the contra ID='L'.

The contra ID is determined by the column MID. Every TY with value 'S' will
have a corresponding TY with value 'B'. MID values are the same for each contra pair.


RESULT REQUIRED FOR THE DATA ABOVE IS:

TY   ID  MID  REM

S     C  ABC    1
B     C  ABC    1
S     L  BCD    1
S     D  CDE    1
S     C  DEF    1
B     C  DEF    1
S     L  EFG    1

My attempt:

I could get only part of this, but when I add the 'C' value for the ID column, the query blows up:

SELECT A1.MID,A1.TY,A1.ID,A1.REM
FROM
(
SELECT MID,TY,ID,REM FROM TBL1 
WHERE 
ID IN('L') AND REM=1 )A1,
(SELECT MID FROM TBL1 WHERE ID IN ('C') AND REM=1) A2
WHERE A1.MID=A2.MID

MID TY ID        REM
--- -- -- ----------
BCD S  L           1
EFG S  L           1
 

Tom Kyte
February 09, 2005 - 1:31 am UTC

looks like a new question some day, I have more questions than answers (reading this three times quick and I still didn't get it)

Date

Andrea Maia Gaziola, February 09, 2005 - 9:05 am UTC

I need to compute (trunc(sysdate)-90) against the field DATA_ABERTURA. How should I proceed?
SELECT DISTINCT DECODE(A.DIA_ABERTURA_DOC, '-1', ' ',
LPAD(A.DIA_ABERTURA_DOC, 2, '00') || '/' || LPAD(A.MES_ABERTURA_DOC, 2, '00') || '/' || A.ANO_ABERTURA_DOC) DATA_ABERTURA
FROM BILHETE_ATIVIDADE A
,SGE_BA_SERV_ASSOC B
,ACIONAMENTO C
,ACIO_TECNICO_MICRO D
WHERE B.NUM_DOCUMENTO = A.NUM_DOCUMENTO
AND C.NUM_DOCUMENTO(+) = B.NUM_DOCUMENTO
AND D.NUM_DOCUMENTO(+) = C.NUM_DOCUMENTO
AND D.NUM_ACIONAMENTO(+) = C.NUM_ACIONAMENTO

Grateful

Vapan, February 09, 2005 - 10:55 am UTC

Tom,

Sorry for not being clear with the test case I gave above. The requirement is :

The query needs to do this:

a. Give all the records where ID='C' and REM=1, and also give the contra record, also ID='C' and REM=1, where the MID for this pair of records is the same.
b. Give all the records where ID='L' and REM=1, but do not give the contra record, that is ID='C' (key is the MID column value).
c. Give all the records where ID='D' and REM=1, but do not give the contra record, that is ID='C' (key is the MID column value).

SQL> SELECT * FROM TBL1;

TY ID MID        REM
-- -- --- ----------
S  C  ABC          1
B  C  ABC          1
S  L  BCD          1
B  C  BCD          1
S  D  CDE          1
B  C  CDE          1
S  C  DEF          1
B  C  DEF          1
S  L  EFG          1
B  C  EFG          1




Expected REsult: (for the data above)

TY   ID  MID  REM

S     C  ABC    1
B     C  ABC    1
S     L  BCD    1
S     D  CDE    1
S     C  DEF    1
B     C  DEF    1
S     L  EFG    1

Thanks again for the time. 

Tom Kyte
February 09, 2005 - 2:49 pm UTC

same answer as before, looks like a new question some day.

reader

A reader, February 10, 2005 - 11:54 pm UTC

I'd like to query v$database for the name of the database and
dba_users to get the username.

If the username exists, the query must return the name
of the database and the username. If the username does
not exist, the query will return no rows

This is to cycle through all the ORACLE_SID from
oratab and find if username exists in any of the
databases in the server

Is it possible to construct a query like this?

Tom Kyte
February 11, 2005 - 7:56 pm UTC

sure, v$database has one row.

so, just join

select * from dba_users, v$database where dba_users.username = :x





SYSDATE

ANDREA MAIA GAZIOLA, February 11, 2005 - 12:16 pm UTC

Tom,

I need to compute (trunc(sysdate)-90) against the field DATA_ABERTURA. How should I proceed when the date (day, month and year) is in separate fields?
SELECT DISTINCT DECODE (A.DAY_DOC, '-1', ' ',
LPAD (A.DAY_DOC, 2, '00') || '/' || LPAD(A.MONTH_DOC, 2, '00') || '/' || A.YEAR_DOC) DATE_OPENING
FROM BILHETE_ATIVIDADE A
,SGE_BA_SERV_ASSOC B
,ACIONAMENTO C
,ACIO_TECNICO_MICRO D
WHERE B.NUM_DOCUMENTO = A.NUM_DOCUMENTO
AND C.NUM_DOCUMENTO(+) = B.NUM_DOCUMENTO
AND D.NUM_DOCUMENTO(+) = C.NUM_DOCUMENTO
AND D.NUM_ACIONAMENTO(+) = C.NUM_ACIONAMENTO

I need your help urgently.
Grateful.


Tom Kyte
February 12, 2005 - 8:01 am UTC

<quote>
as I must proceed
when the date (day, month and year) is in separate fields?
</quote>

if you are asking me what is the most logical set of steps you should undertake when the above is true, the "most truthful" answer I can give you is:

find the person that did this and sign them up for database training. Then fix the grievous error by taking these three (maybe six) fields and putting them into one, the way they belong.


The answer you probably want is, you need to "to_date()" these three fields together and turn them into a date.

to_date( decode( a.day_doc, -1, NULL, a.month_doc||'/'||a.day_doc||'/'||a.year_doc),
'mm/dd/yyyy' )

(i'm praying that they actually used YYYY -- 4 digits, don't know why I hold out hope, because they probably didn't, which means you really have a 2 character string with meaningless data in it... but anyway)

reader

A reader, February 11, 2005 - 2:03 pm UTC

The following query works without syntax error in SQL
select username,
CASE length(username)
when '' then ''
else (select name from v$database)
END
from dba_users where username = 'TEST19'

When used within unix shell script I get the error

1 select username,
2 CASE username
3 when '' then ''
4 else (select name from v$database)
5 END
6* from dba_users where username = 'TEST'
when '' then ''
*
ERROR at line 3:
ORA-00923: FROM keyword not found where expected

when the username does not exist in the database.



Tom Kyte
February 12, 2005 - 8:24 am UTC

() would be shell special characters.

wait till you try <, > or || or ..............

actually, I'd guess that v$database is becoming v (unless you have an environment variable $database that is...)

but you give us nothing to work with, so we cannot suggest a fix, short of "maybe try putting "\" in front of shell special characters...

reader

A reader, February 12, 2005 - 10:45 am UTC

Your suggested query,

select username, (select name from v$database)
from dba_users where username = :x

works

Thanks.

As for my subsequent posting, I DID use \$
for v$database in the shell script. Not sure why
I got the error from the shell script. I'll look
further into it

Thanks

Tom Kyte
February 12, 2005 - 12:48 pm UTC

because you didn't \( and \)



A reader, February 12, 2005 - 12:27 pm UTC

Hi Tom,
I have a requirement where I have to break all the rows in a table into 10 batches. We have a column batch_number in the table.
For example, if I have 10 rows in the table, the 1st row would have batch_number 1, the 2nd row batch_number 2, etc. If I have 1000 rows, then the first 100 rows would be in batch 1, the next 100 rows in batch 2, etc.
How can I do this? Can I do it in a single update statement?

Thanks

Tom Kyte
February 12, 2005 - 1:01 pm UTC

I'd rather break the table into 10 pieces:

set echo off
                                                                                              
set verify off
define TNAME=&1
define CHUNKS=&2
                                                                                              
                                                                                              
select grp,
       dbms_rowid.rowid_create( 1, data_object_id, lo_fno, lo_block, 0 ) min_rid,
       dbms_rowid.rowid_create( 1, data_object_id, hi_fno, hi_block, 10000 ) max_rid
  from (
select distinct grp,
       first_value(relative_fno) over (partition by grp order by relative_fno, block_id
                   rows between unbounded preceding and unbounded following) lo_fno,
       first_value(block_id    ) over (partition by grp order by relative_fno, block_id
                   rows between unbounded preceding and unbounded following) lo_block,
       last_value(relative_fno) over (partition by grp order by relative_fno, block_id
                   rows between unbounded preceding and unbounded following) hi_fno,
       last_value(block_id+blocks-1) over (partition by grp order by relative_fno, block_id
                   rows between unbounded preceding and unbounded following) hi_block,
       sum(blocks) over (partition by grp) sum_blocks
  from (
select relative_fno,
       block_id,
       blocks,
       trunc( (sum(blocks) over (order by relative_fno, block_id)-0.01) /
              (sum(blocks) over ()/&CHUNKS) ) grp
  from dba_extents
 where segment_name = upper('&TNAME')
   and owner = user order by block_id
       )
       ),
       (select data_object_id from user_objects where object_name = upper('&TNAME') )
/

that'll give you 10 non-overlapping rowid ranges that cover the table (feed those into your batch process)


short of that, you'd be using "ntile" and merge.

ops$tkyte@ORA10G> create table t as select * from all_users;
 
Table created.
 
ops$tkyte@ORA10G> alter table t add nt number;
 
Table altered.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> merge into t
  2  using ( select rowid rid, ntile(10) over (order by username) nt from t ) t2
  3  on ( t.rowid = t2.rid )
  4  when matched then update set nt = t2.nt;
 
35 rows merged.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select * from t order by nt;
 
USERNAME                          USER_ID CREATED           NT
------------------------------ ---------- --------- ----------
BI                                     63 19-DEC-04          1
ANONYMOUS                              43 10-AUG-04          1
CTXSYS                                 40 10-AUG-04          1
DBSNMP                                 22 10-AUG-04          1
DEMO                                   75 02-JAN-05          2
DMSYS                                  37 10-AUG-04          2
DIP                                    19 10-AUG-04          2
EXFSYS                                 32 10-AUG-04          2
LEAST_PRIVS                            68 02-JAN-05          3
HR                                     58 19-DEC-04          3
IX                                     60 19-DEC-04          3
MDDATA                                 49 10-AUG-04          3
OE                                     59 19-DEC-04          4
MGMT_VIEW                              56 10-AUG-04          4
MDSYS                                  36 10-AUG-04          4
OLAPSYS                                46 10-AUG-04          4
OPS$TKYTE                              80 17-JAN-05          5
ORDPLUGINS                             34 10-AUG-04          5
OUTLN                                  11 10-AUG-04          5
ORDSYS                                 33 10-AUG-04          5
PERFSTAT                               82 28-JAN-05          6
PM                                     62 19-DEC-04          6
SCOTT                                  57 10-AUG-04          6
SH                                     61 19-DEC-04          7
SYS                                     0 10-AUG-04          7
SI_INFORMTN_SCHEMA                     35 10-AUG-04          7
TEST                                   65 23-DEC-04          8
SYSMAN                                 54 10-AUG-04          8
SYSTEM                                  5 10-AUG-04          8
WKPROXY                                51 10-AUG-04          9
WKSYS                                  50 10-AUG-04          9
WK_TEST                                53 10-AUG-04          9
WSMGMT                                 81 28-JAN-05         10
XDB                                    42 10-AUG-04         10
WMSYS                                  23 10-AUG-04         10
 
35 rows selected.


(in 9i, you'd need to have a "when not matched then insert (nt) values ( null)" but it would basically be a big "no-operation" since all of the rows would in fact match....)
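The NTILE approach can be sketched outside Oracle as well. SQLite has no MERGE, so this illustrative version (invented usernames, run from Python) writes the batch number back with a correlated UPDATE instead; the tile assignment is the same idea.

```python
import sqlite3

# Batch assignment via NTILE(10): split 35 rows into 10 near-equal batches.
con = sqlite3.connect(":memory:")
con.execute("create table t (username text, nt int)")
con.executemany("insert into t(username) values (?)",
                [(f"user{i:02d}",) for i in range(35)])

# rowid stands in for Oracle's ROWID as the join key back to each row.
con.execute("""
update t
   set nt = (select nt from
               (select rowid rid, ntile(10) over (order by username) nt
                  from t) x
              where x.rid = t.rowid)
""")
counts = [n for (n,) in con.execute(
    "select count(*) from t group by nt order by nt")]
print(counts)  # 35 rows over 10 tiles: the first five tiles get 4, the rest 3
```

NTILE hands the remainder rows to the lowest-numbered tiles, which is why the batch sizes differ by at most one.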

 

A reader, February 12, 2005 - 8:09 pm UTC

Excellent..Thanks tom.

A reader, February 12, 2005 - 8:26 pm UTC

Hi Tom,
We are using oracle 9i
You told
"in 9i, you'd need to have a "when not matched then insert (nt) values ( null)"
but it would basically be a big "no-operation" since all of the rows would in
fact match....)"

I didn't get that part. Could you please explain?

Thanks

Tom Kyte
February 13, 2005 - 9:10 am UTC

in 9i, merge MUST have:

when matched then update ....
when not matched then insert ......



in 10g, you do not. So, in 9i there would have to be a "when not matched", but since we are joining the entire table with itself by rowid -- the "when not matched" clause would in fact never "happen", so you can just put a "dummy" insert in there -- that'll never actually happen

A reader, February 13, 2005 - 11:49 am UTC

Hi Tom,
I got it and implemented the MERGE statement to break the table into 10 batches.
Thanks a lot for the help


SYSDATE

Andrea Maia Gaziola, February 15, 2005 - 11:52 am UTC

What I would like to know is how to use (SYSDATE - 90) when the date is composed of different columns (day, month and year).


Tom Kyte
February 15, 2005 - 3:37 pm UTC

huh? did not understand.

OK

Raju, February 17, 2005 - 12:05 pm UTC

Hi Tom,
Please have a look at this query.

SQL> create table t(x int,y int)
  2  /

Table created.

SQL> insert into t select rownum, rownum+1 from cat where rownum <= 5
  2  /

5 rows created.

SQL> commit;

Commit complete.

SQL> select * from t
  2  /

         X          Y
---------- ----------
         1          2
         2          3
         3          4
         4          5
         5          6

SQL> select x,y,x+y from t
  2  union all
  3  select sum(x),sum(y),sum(x+y) from t
  4  /

         X          Y        X+Y
---------- ---------- ----------
         1          2          3
         2          3          5
         3          4          7
         4          5          9
         5          6         11
        15         20         35

6 rows selected.

Can this query be put in any other way?

I would like to eliminate the UNION ALL from this query.
I even tried

SQL> select decode(grouping(..

but it is not working properly. Can you provide some other way?

 

Tom Kyte
February 17, 2005 - 1:58 pm UTC

ops$tkyte@ORA9IR2> select decode( grouping(rowid), 0, null, 1, 'the end' ) label,
  2         sum(x), sum(y), sum(x+y)
  3   from t
  4  group by rollup(rowid);
 
LABEL       SUM(X)     SUM(Y)   SUM(X+Y)
------- ---------- ---------- ----------
                 1          2          3
                 2          3          5
                 3          4          7
                 4          5          9
                 5          6         11
the end         15         20         35
 
6 rows selected.
 

Efficient way to get counts

Thiru, March 14, 2005 - 2:13 pm UTC

Tom,

Excuse me for not replying immediately to your followup above with title "Efficient way to get counts"

Your followup was:

"need more info, is this table modified during the day or is this a warehouse.

is this example cut way down and there are 500 columns
or is this it.

is c1, c2, flag always involved,
or was that just a side effect of your example..

before anyone can suggest the "best way", details about how the data is used
need to be known"

The case example repeated :

drop table temp_del;
create table temp_del (c1 varchar2(3),c2 varchar2(3),c3 varchar2(3),c4
varchar2(3),flag number);
insert into temp_del values('abc','bcd','cde','def',0);
insert into temp_del values('abc','bcd','cde','def',1);
insert into temp_del values('abc','bcd','cde','def',2);

insert into temp_del values('bcd','cde','def','efg',0);
insert into temp_del values('bcd','cde','def','efg',1);
insert into temp_del values('bcd','cde','def','efg',2);

insert into temp_del values('cde','def','efg','fgh',0);
insert into temp_del values('cde','def','efg','fgh',1);
insert into temp_del values('cde','def','efg','fgh',2);
commit;

select count(*) from temp_del where c1='abc' and c2='bcd' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and flag=1;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and
flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and
flag=1;
select count(*) from temp_del where c1='bcd' and c2='cde' and c3='def' and
flag=2;


Here is the data spread asked for:

No. of columns used by the query : 25
Is c1,c2,flag always involved : yes and a combination of the 25 columns.
Is this table modified during the day or is this a warehouse: This table is modified regularly during
the day. But the query that I have asked for is run only for reporting purposes.


I went with the sum(case ..) method after creating bitmap indexes for all the columns used in the query where the number of distinct values was less than 5 or 6, and normal indexes for the other columns where the number of distinct values was close to 60% of the number of rows.

select sum(case when c1='abc' and c2='bcd' and flag=0 then 1 else 0 end) r1,
sum(case when c1='abc' and c2='bcd' and c3='cde' and flag=1 then 1 else 0 end)r2
from temp_del

Kindly advise if this is a preferred solution for a 10 million row table, or please suggest something better.


Tom Kyte
March 14, 2005 - 2:48 pm UTC

you have bitmaps on a table that is modified regularly??????

Thiru, March 14, 2005 - 3:47 pm UTC

Since the columns have very few distinct values, that's what I thought of. Does this affect performance in the normal running of the application and in the queries for reporting purposes?

Thanks for the time.

Tom Kyte
March 14, 2005 - 7:47 pm UTC

do you have bitmaps on columns that are regularly updated?

Thiru, March 15, 2005 - 10:26 am UTC

Yes. A few columns are updated quite frequently.

Tom Kyte
March 15, 2005 - 10:26 am UTC

and you have not found this to be an issue with

a) deadlocks
b) massive bitmap index growth

??

Steve, March 29, 2005 - 4:36 pm UTC

Tom,

I am not sure if the following logic can be done in a single SQL statement.

I have two tables

receipts
store_id, order_id, prod_id, rec_date, rec_qty
-------------------------------------------------
1003 10001 2001, 3/5/2005, 3
1003 10002 2001, 3/8/2005, 1
1004 10003


sales
sales_date, store_id, prod_id, sales_qty, ...
----------------------------------------------
3/8/05, 1003, 2001, 1
3/10/05, 1003, 2001, 2
3/11/05, 1003, 2001, 1
3/12/05, 1003, 2001, 1
.....

I want to create a received_sales match report

store_id, order_id, prod_id, rec_date, rec_qty, sale_qty
1003 10001 2001, 3/5/2005, 2 2
1003 10002 2001, 3/8/2005, 1 1

Matching rule:
Given a receipt of product, if there is a sale for that type of product within 14 days of the receipt date, then we consider it a sale match.

Thanks in advance!

Steve



Tom Kyte
March 29, 2005 - 5:07 pm UTC

no create tables, no inserts....

join by store id, where one_date-other_date <= 14.
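That join-within-a-window idea looks like this in an illustrative SQLite sketch (run from Python, with made-up rows and ISO date strings instead of Oracle DATEs; julianday() turns the date difference into days):

```python
import sqlite3

# Match each receipt to sales of the same store/product within 14 days
# of the receipt date. Data is invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
create table receipts (store_id int, order_id int, prod_id int,
                       rec_date text, rec_qty int);
create table sales (sales_date text, store_id int, prod_id int, sales_qty int);
insert into receipts values (1003, 10001, 2001, '2005-03-05', 3);
insert into sales values ('2005-03-08', 1003, 2001, 1),
                         ('2005-04-30', 1003, 2001, 2);  -- outside the window
""")
rows = con.execute("""
select r.order_id, s.sales_date, s.sales_qty
  from receipts r
  join sales s
    on s.store_id = r.store_id
   and s.prod_id  = r.prod_id
   and julianday(s.sales_date) - julianday(r.rec_date) between 0 and 14
""").fetchall()
print(rows)
```

Only the March 8 sale falls inside the 14-day window; the April sale is dropped by the join predicate.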

Joining three scripts and get values for three columns.

Dawar, April 05, 2005 - 2:28 pm UTC


Tom,
I need to run the script below so that it displays only three columns.

EMPLOYEE_NAME
V.DEPT_NO DEPT_NO
V.PAY_LOCATION PAY_LOCATION

I got those three script from Our current Oracle reports.
emp_no in the first script is the PK, and the other two scripts reference this column (emp_no) as an FK.
Here are the three scripts. How do I join them to get values for only the EMPLOYEE_NAME, V.DEPT_NO DEPT_NO,
and V.PAY_LOCATION PAY_LOCATION columns?


SELECT V.LAST_NAME || ', ' || V.FIRST_NAME || ' ' || V.MIDDLE_INITIAL
EMPLOYEE_NAME
,E.home_phone PHONE
,E.home_street STREET
,E.home_city CITY
,E.home_zip1 ZIP1
,E.home_zip2 ZIP2
,nvl(E.bilingual_code, 'Not Applicable') BC
,nvl(V.specialty, 'Not Applicable') SPECIALTY
,nvl(V.sub_specialty,'Not Applicable') SBSP
,E.BILINGUAL_REF_CD BRC
,V.EMP_NO EMP_NO
,V.DEPT_NO DEPT_NO
,C.DEPT_TITLE DEPT_TITLE
,V.PAY_LOCATION PAY_LOCATION
,V.LAYOFF_SENIORITY_DATE LAYOFF_SENIORITY_DATE
,V.TIME_IN_GRADE TIME_IN_GRADE
,V.PAYROLL_ITEM_NO PAYROLL_ITEM_NO
,V.ITEM_LETTER ITEM_LETTER
,V.CLASS_TITLE CLASS_TITLE
,DECODE(V.REPRESENTATION_STATUS,'Y','REPRESENTED','NON-REPRESENTED')
REPRESENTATION_STATUS
,DECODE(V.PROBATION_STATUS,'1','FIRST','2','SUBSEQUENT','NONE')
PROBATION_STATUS
,DECODE(V.IMPROVEMENT_NEEDED_END_DATE,NULL,'NONE',
TO_CHAR(V.IMPROVEMENT_NEEDED_END_DATE,'MM/DD/YY')) IMPROVEMENT_NEEDED_END_DATE
FROM EMPINFO_VIEW V,
DEPARTMENTS C,
EMPLOYEES E
WHERE V.PRIOR_ITEM_INDICATOR IS NULL
AND V.JOB_CODE = 'A'
AND (V.EMP_NO = :EMP_PARM OR :EMP_PARM IS NULL)
AND (V.DEPT_NO = :DEPT_PARM OR :DEPT_PARM IS NULL)
AND (V.BUDGETED_GROUP_ID = :GROUP_PARM OR :GROUP_PARM IS NULL)
AND (C.DEPT_NO = :DEPT_PARM OR :DEPT_PARM IS NULL)
and (v.payroll_item_no = :ITEM_PARM or :ITEM_PARM is null)
AND C.DEPT_NO = V.DEPT_NO
and V.emp_no = E.emp_no
ORDER BY V.DEPT_NO
,V.PAY_LOCATION
,V.LAST_NAME


***********************************************

SELECT EMPITM.PAYROLL_ITEM_NO PAYROLL_ITEM_NO2
,ITEM.CLASS_TITLE CLASS_TITLE2
,EMPITM.ITEM_LETTER ITEM_LETTER2
,EMPITM.EMP_NO EMP_NO2
FROM EMPLOYEE_ITEMS EMPITM
,ITEMS ITEM
WHERE ITEM.ITEM_NO = EMPITM.PAYROLL_ITEM_NO
AND EMPITM.PRIOR_ITEM_INDICATOR = 'Y'
ORDER BY ITEM.MAXIMUM_ANNUAL_SALARY DESC

****************************************************************

SELECT EMP_NO EMP_NO3,
PRIORITY_GEO_A PGA,
PRIORITY_GEO_B PGB,
PRIORITY_GEO_C PGC
FROM EMPLOYEES

Regards,
Dawar

Tom Kyte
April 05, 2005 - 6:47 pm UTC

no idea what you mean

SQL Query

Reader, April 06, 2005 - 1:12 am UTC

Dear Tom,

The following query takes around 45 seconds... All tables have been analyzed using dbms_stats. Indexes are in place. The view called in this query runs very fast; the response for the view alone is less than a second. Can you please give inputs to improve the performance? Are joins bad here?
Should we rewrite the query?

Regards


=======================
SQL> SELECT  count(1) noofrecs FROM
OPAY.OC_CHEQUE_INTERFACE OCI,
OPAY.FILE_MST_PAY FIM,
OPAY.OC_CHEQUE_DETAIL OCD,
OPMAY.CLIENTALL VW_CLA
WHERE
OCD.COD_TXN_STATUS = 'ISSUED' and
OCI.AMT_CREDIT BETWEEN
0 AND 999999999999.99 and
OCD.ISSUE_DATE BETWEEN
TO_DATE('21-08-2004','DD-MM-YY') AND
TO_DATE ('29-08-2004','DD-MM-YY')
AND oci.cod_cust LIKE
DECODE ('VOLTEST', '', '%', 'VOLTEST')
AND vw_cla.dividend_number
LIKE DECODE ('VOLTESTDIV', '', '%','VOLTESTDIV')
AND
VW_CLA.COD_STOCK LIKE DECODE ('', '', '%', '') AND
TO_DATE (vw_cla.dat_payment, 'DD-MM-YY')
BETWEEN TO_DATE
('08-06-2006','DD-MM-YYYY') AND
TO_DATE ('15-06-2006','DD-MM-YYYY') AND
OCI.COD_FILEID = FIM.COD_FILEID AND
OCI.TXN_ID = OCD.TXN_ID AND
vw_cla.cycle_id = fim.cycle_id
  2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17
SQL> /

Elapsed: 00:00:43.65
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=759 Card=1 Bytes=133
          )

   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS (Cost=759 Card=1 Bytes=133)
   3    2       HASH JOIN (Cost=757 Card=1 Bytes=109)
   4    3         NESTED LOOPS (Cost=11 Card=84 Bytes=5880)
   5    4           VIEW OF 'CLIENTALL' (Cost=8 Card=2 Bytes=112)
   6    5             SORT (UNIQUE) (Cost=8 Card=2 Bytes=122)
   7    6               UNION-ALL
   8    7                 NESTED LOOPS (Cost=2 Card=1 Bytes=57)
   9    8                   TABLE ACCESS (FULL) OF 'DVD_CYCLE_MST_PAY'
           (Cost=1 Card=1 Bytes=29)

  10    8                   TABLE ACCESS (BY INDEX ROWID) OF 'ORBICASH
          _CLIENTMST' (Cost=1 Card=119 Bytes=3332)

  11   10                     INDEX (UNIQUE SCAN) OF 'SYS_C0014447' (U
          NIQUE)
  12    7                 NESTED LOOPS (Cost=2 Card=1 Bytes=65)
  13   12                   TABLE ACCESS (FULL) OF 'DVD_CYCLE_MST_PAY'
           (Cost=1 Card=1 Bytes=29)

  14   12                   TABLE ACCESS (BY INDEX ROWID) OF 'COUNTER_
          MST_PAY' (Cost=1 Card=14 Bytes=504)

  15   14                     INDEX (UNIQUE SCAN) OF 'SYS_C0014648' (U
          NIQUE)

  16    4           INDEX (RANGE SCAN) OF 'PERF2' (NON-UNIQUE) (Cost=2
           Card=4767 Bytes=66738)

  17    3         TABLE ACCESS (BY INDEX ROWID) OF 'OC_CHEQUE_INTERFAC
          E' (Cost=745 Card=30 Bytes=1170)

  18   17           INDEX (RANGE SCAN) OF 'IDX_OC_CHEQUE_INT_CODCUST'
          (NON-UNIQUE) (Cost=36 Card=30)

  19    2       TABLE ACCESS (BY INDEX ROWID) OF 'OC_CHEQUE_DETAIL' (C
          ost=2 Card=49 Bytes=1176)

  20   19         INDEX (UNIQUE SCAN) OF 'SYS_C0015224' (UNIQUE) (Cost
    =1 Card=49)

 Statistics
----------------------------------------------------------
          7  recursive calls
          8  db block gets
    1693075  consistent gets
      47287  physical reads
          0  redo size
        367  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
          1  rows processed


============================================

 

Tom Kyte
April 06, 2005 - 6:40 am UTC

I cannot tune every query..

but "indexes are in place" - that could be the problem, using indexes. indexes are good, they are bad, they are sometimes neither good nor bad.

sorry, without tables and indexing schemes....


TO_DATE (vw_cla.dat_payment, 'DD-MM-YY')
BETWEEN TO_DATE
('08-06-2006','DD-MM-YYYY') AND
TO_DATE ('15-06-2006','DD-MM-YYYY')

I can say that is plain "wrong", as in gets the wrong data.

did you know that the date 14-Jan-1806, which in your query is the string

14-jan-1806

is between the string '08....' and the string '15....'?

convert STRINGS TO DATES to compare to DATES.

Also, applying functions to database columns typically prevents the use of indexes on that column.

why would you to_char a date to have it converted back to a date...

where DATE_COLUMN between to_date( string ) and to_date( another_string )



SQL Query

reader, April 07, 2005 - 12:50 am UTC

Tom,

Thanks for the inputs. 

Please clarify on the following:

1. I am under the belief that when indexes are in place and the tables stand analyzed using dbms_stats, the CBO will decide whether or not to use the index. This being the case, how could the index be bad during select operations?

2. As suggested, the query was modified to compare the date column to to_date of a string. But the consistent gets continue to be the same, as shown below. Is there any other part of the query to look at and modify?

Regards.

==============================================
SQL> SELECT  count(1) noofrecs FROM
OPAY.OC_CHEQUE_INTERFACE OCI,
OPAY.FILE_MST_PAY FIM,
OPAY.OC_CHEQUE_DETAIL OCD,
OPMAY.CLIENTALL VW_CLA
WHERE
OCD.COD_TXN_STATUS = 'ISSUED' and
OCI.AMT_CREDIT BETWEEN
0 AND 999999999999.99 and
OCD.ISSUE_DATE BETWEEN
TO_DATE('21-08-2004','DD-MM-YYYY') AND
TO_DATE ('29-08-2004','DD-MM-YYYY')
AND oci.cod_cust LIKE
DECODE ('VOLTEST', '', '%', 'VOLTEST')
AND vw_cla.dividend_number
LIKE DECODE ('VOLTESTDIV', '', '%','VOLTESTDIV')
AND
VW_CLA.COD_STOCK LIKE DECODE ('', '', '%', '') AND
vw_cla.dat_payment
BETWEEN TO_DATE
('08-06-2006','DD-MM-YYYY') AND
TO_DATE ('15-06-2006','DD-MM-YYYY') AND
OCI.COD_FILEID = FIM.COD_FILEID AND
OCI.TXN_ID = OCD.TXN_ID AND
vw_cla.cycle_id = fim.cycle_id
  2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17
SQL> /

Elapsed: 00:00:27.42

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=766 Card=1 Bytes=133
          )

   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS (Cost=766 Card=1 Bytes=133)
   3    2       HASH JOIN (Cost=764 Card=1 Bytes=109)
   4    3         HASH JOIN (Cost=18 Card=84 Bytes=5880)
   5    4           VIEW OF 'CLIENTALL' (Cost=8 Card=2 Bytes=112)
   6    5             SORT (UNIQUE) (Cost=8 Card=2 Bytes=122)
   7    6               UNION-ALL
   8    7                 NESTED LOOPS (Cost=2 Card=1 Bytes=57)
   9    8                   TABLE ACCESS (FULL) OF 'DVD_CYCLE_MST_PAY'
           (Cost=1 Card=1 Bytes=29)
 10    8                   TABLE ACCESS (BY INDEX ROWID) OF 'ORBICASH
          _CLIENTMST' (Cost=1 Card=119 Bytes=3332)

  11   10                     INDEX (UNIQUE SCAN) OF 'SYS_C0014447' (U
          NIQUE)

  12    7                 NESTED LOOPS (Cost=2 Card=1 Bytes=65)
  13   12                   TABLE ACCESS (FULL) OF 'DVD_CYCLE_MST_PAY'
           (Cost=1 Card=1 Bytes=29)

  14   12                   TABLE ACCESS (BY INDEX ROWID) OF 'COUNTER_
          MST_PAY' (Cost=1 Card=14 Bytes=504)

  15   14                     INDEX (UNIQUE SCAN) OF 'SYS_C0014648' (U
          NIQUE)

  16    4           TABLE ACCESS (FULL) OF 'FILE_MST_PAY' (Cost=9 Card
          =4767 Bytes=66738)

  17    3         TABLE ACCESS (BY INDEX ROWID) OF 'OC_CHEQUE_INTERFAC
          E' (Cost=745 Card=30 Bytes=1170)
18   17           INDEX (RANGE SCAN) OF 'IDX_OC_CHEQUE_INT_CODCUST'
          (NON-UNIQUE) (Cost=36 Card=30)

  19    2       TABLE ACCESS (BY INDEX ROWID) OF 'OC_CHEQUE_DETAIL' (C
          ost=2 Card=49 Bytes=1176)

  20   19         INDEX (UNIQUE SCAN) OF 'SYS_C0015224' (UNIQUE) (Cost
          =1 Card=49)

Statistics
----------------------------------------------------------
          0  recursive calls
         12  db block gets
    1693160  consistent gets
      47557  physical reads
          0  redo size
        367  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed


============================================= 

Tom Kyte
April 07, 2005 - 9:10 am UTC

1) </code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154 <code>

2)

AND oci.cod_cust LIKE
DECODE ('VOLTEST', '', '%', 'VOLTEST')
AND vw_cla.dividend_number
LIKE DECODE ('VOLTESTDIV', '', '%','VOLTESTDIV')
AND
VW_CLA.COD_STOCK LIKE DECODE ('', '', '%', '')

why that, why no

cod_cust = 'constant'
and dividend_number = 'anotherconstant'
and cod_stock is not null


Are the cardinalities in the autotrace even close to those in the tkprof?

Alex, May 24, 2005 - 12:12 pm UTC

Hi Tom,

I have a quick question about a decode I'm trying to write, but I don't know if what I'm doing is allowed or not. I'm doing:

select a.name,
DECODE (r.text,INSTR (r.text, 'XXX') != 0, r.id || SUBSTR (r.text, 4),r.text)

from address a, requirements r
where a.key = r.key

I get ORA-00907: missing right parenthesis, but as you see they match. I think it doesn't like something about that INSTR. Thank you.

Tom Kyte
May 24, 2005 - 1:40 pm UTC

the decode isn't making sense.

what are you trying to do exactly? (decode doesn't do boolean expressions)

are you trying to return r.id || substr() when the instr() is not zero?


decode( instr(r.text,'XXX'), 0, r.text, r.id||substr(r.text) )

perhaps? that is:

case when instr(r.text,'XXX') = 0
then r.text
else r.id||substr(r.text)
end

you might just want to use case actually, more readable in many cases.

mo, May 28, 2005 - 7:34 am UTC

Tom:

I have two tables:

SQL> desc phone_log;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LOG_NO                                    NOT NULL NUMBER(10)
 CALLER_ID                                 NOT NULL NUMBER(10)
 CONTACT_PERSON                                     VARCHAR2(50)
 PHONE                                              VARCHAR2(25)
 DATE_RECEIVED                                      DATE
 DATE_RESOLVED                                      DATE
 PURPOSE                                            VARCHAR2(20)
 TOPIC                                              VARCHAR2(25)
 PRIORITY                                           VARCHAR2(6)
 INITIAL_OUTCOME                                    VARCHAR2(20)
 ISSUE                                              VARCHAR2(500)
 INITIAL_RESPONSE                                   VARCHAR2(500)
 SOURCE                                             VARCHAR2(10)

SQL> desc phone_followup
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LOG_NO                                    NOT NULL NUMBER(10)
 FOLLOWUP_NO                               NOT NULL NUMBER(2)
 FOLLOWUP_DATE                                      DATE
 RESPONSE                                           VARCHAR2(500)
 OUTCOME                                            VARCHAR2(20)

The phone log table is a parent table to phone_followup.  However a parent record 
can exist without any children.  I am trying to query on pending records which are defined as:
Find all records in phone_log without any children where date_resolved is null
or if there are children records in phone followup then find the maximum followup number
where outcome is not resolved.

I wrote this but it does not work.  Is there a way to join both tables on the 
maximum followup number record, and also find records in the parent that do not have children?

  1  select f.log_no,g.outcome from
  2  (
  3  select f.log_no,f.date_received,f.date_resolved from phone_log f,
  4  (
  5   select * from
  6  (
  7  select a.log_no,a.outcome,a.followup_no from phone_followup a,
  8  (select log_no,max(followup_no) followup_no from phone_followup group by log_no) b
  9  where a.log_no = b.log_no and a.followup_no=b.followup_no
 10  ) e
 11  ) g
 12  where f.log_no=g.log_no(+)
 13* )
SQL> /
select f.log_no,g.outcome from
                *
ERROR at line 1:
ORA-00904: "G"."OUTCOME": invalid identifier





  1   select a.log_no,a.date_received,a.date_resolved,b.outcome,b.followup_no from
  2  phone_log a,phone_followup b where
  3  a.log_no=b.log_no(+)
  4  --and (a.date_resolved is null or b.outcome <> 'Resolved') and
  5* --b.followup_no=(select max(followup_no) from phone_followup where log_no=a.log_no)
SQL> /

    LOG_NO DATE_RECE DATE_RESO OUTCOME              FOLLOWUP_NO
---------- --------- --------- -------------------- -----------
        39 22-SEP-04           Referred                       1
        39 22-SEP-04           Follow-up Needed               2
        39 22-SEP-04           Resolved                       3
        39 22-SEP-04                                          4
        39 22-SEP-04           Referred                       5
        41 20-NOV-04           Call Escalated                 1
        41 20-NOV-04           Referred                       2
        41 20-NOV-04           Resolved                       3
        41 20-NOV-04           Follow-up Needed               4
        38 11-NOV-04 11-DEC-04 Follow-up Needed               1
        38 11-NOV-04 11-DEC-04 Resolved                       2

    LOG_NO DATE_RECE DATE_RESO OUTCOME              FOLLOWUP_NO
---------- --------- --------- -------------------- -----------
        38 11-NOV-04 11-DEC-04 Referred                       3
        50           27-MAY-05

13 rows selected.


What I want are the records with max(followup_no) for each log_no, checking that the outcome 
is not 'Resolved'.  For record 50, which does not have children, I want to check only the 
"date resolved" field.


Thank you,
 

Tom Kyte
May 28, 2005 - 9:20 am UTC

no creates, no inserts, no comment.

A reader, June 06, 2005 - 10:00 am UTC

create table orders
(order_id number(10),
address varchar2(100));
insert into orders values(1,'addr1');
insert into orders values(2,'addr2');
insert into orders values(3,'addr3');
insert into orders values(4,'addr4');
insert into orders values(5,'addr5');
insert into orders values(6,'addr6');

insert into order_items values(1,1,100,5);
insert into order_items values(2,1,200,15);
insert into order_items values(3,2,100,5);
insert into order_items values(4,2,200,15);
insert into order_items values(5,3,100,5);
insert into order_items values(6,4,300,4);
insert into order_items values(7,5,400,4);
insert into order_items values(7,6,400,4);


I need to select identical orders with the same set of inventory_id and ordered items.

First set is orders 1 and 2; second set is orders 5 and 6.

order_id address count(order_id) Set#
       1 addr1                 2    1
       2 addr2                 2    1

       5 addr5                 2    2
       6 addr6                 2    2

Thanks in advance.





Tom Kyte
June 06, 2005 - 10:52 am UTC

sort of incomplete example no?

A reader, June 06, 2005 - 11:09 am UTC

We have a lot of customers making the same orders,
and we want to select such customers' orders as a bulk,
then create a file and ship those orders through FedEx as one set.
The number of orders in such a bulk should be more than 3.
Let's say, if more than 2 orders each have 10 of item 1 and
20 of item 2, that is bulk #1. And so on.



A reader, June 06, 2005 - 2:56 pm UTC

Sorry, I need it and will try again.

let's say we have:
order1 contains item1 10 pieces, item2 15 pieces
order2 contains item1 10 pieces, item2 15 pieces
order3 contains item3 30 pieces
order4 contains item4 15 pieces, item1 20 pieces
order5 contains item1 8 pieces, item2 15 pieces
order6 contains item1 3 pieces, item2 15 pieces
order7 contains item3 30 pieces
order8 contains item3 30 pieces
order9 contains item7 10 pieces
In other words:
create table T
(item_id number(10), order_id number(10),
item varchar2(10), itm_ordered number(10));
insert into T values (1,1,'item1',10);
insert into T values (2,1,'item2',15);
insert into T values (3,2,'item1',10);
insert into T values (4,2,'item2',15);
insert into T values (5,3,'item3',30);
insert into T values (6,4,'item4',15);
insert into T values (7,4,'item1',20);
insert into T values (8,5,'item1',8);
insert into T values (9,5,'item2',15);
insert into T values (10,6,'item1',3);
insert into T values (11,6,'item2',15);
insert into T values (12,7,'item3',30);
insert into T values (13,8,'item3',30);
insert into T values (14,9,'item7',10);
and I need:

order   count in bulk
order1              2
order2              2
order3              3
order7              3
order8              3

Orders 4, 5, 6, and 9 aren't interesting because there are no dups for them.

Thanks.


Tom Kyte
June 06, 2005 - 3:18 pm UTC

if you know the maximum items per order (and this maxes out at a varchar2(4000)) this'll do it

ops$tkyte@ORA9IR2> select *
  2    from (
  3  select order_id, data, count(*) over (partition by data) cnt
  4    from (
  5  select order_id,
  6          max(decode(r,1,item||'/'||itm_ordered)) || '|' ||
  7          max(decode(r,2,item||'/'||itm_ordered)) || '|' ||
  8          max(decode(r,3,item||'/'||itm_ordered)) || '|' ||
  9          max(decode(r,4,item||'/'||itm_ordered)) || '|' ||
 10          max(decode(r,5,item||'/'||itm_ordered)) || '|' ||
 11          max(decode(r,6,item||'/'||itm_ordered)) || '|' ||
 12          max(decode(r,7,item||'/'||itm_ordered)) data
 13    from (select a.*, row_number() over (partition by order_id order by item, itm_ordered) r
 14            from t a
 15             )
 16   group by order_id
 17         )
 18         )
 19   where cnt > 1
 20  /
 
  ORDER_ID DATA                                                      CNT
---------- -------------------------------------------------- ----------
         1 item1/10|item2/15|||||                                      2
         2 item1/10|item2/15|||||                                      2
         3 item3/30||||||                                              3
         7 item3/30||||||                                              3
         8 item3/30||||||                                              3
 

If you don't know the number of items on an order....

Sean D Stuber, June 06, 2005 - 4:28 pm UTC

How about this? I've been trying to think of a more efficient or at least more "elegant" way, but so far this is the best I could come up with. I hope it helps.

SELECT *
  FROM (SELECT order_id,
               COUNT(*) OVER (PARTITION BY items) cnt
          FROM (SELECT x.*,
                       ROW_NUMBER() OVER (PARTITION BY order_id
                                          ORDER BY lvl DESC) rn
                  FROM (SELECT y.*, LEVEL lvl,
                               SYS_CONNECT_BY_PATH(item || '/' || itm_ordered,
                                                   '|') items
                          FROM (SELECT t.*,
                                       ROW_NUMBER()
                                         OVER (PARTITION BY order_id
                                               ORDER BY item_id) itm_idx
                                  FROM t) y
                       CONNECT BY order_id = PRIOR order_id
                              AND itm_idx - 1 = PRIOR itm_idx) x)
         WHERE rn = 1)
 WHERE cnt > 1

Tom Kyte
June 06, 2005 - 5:52 pm UTC

has the same 4000 byte limit -- so ultimately there is always a "max"

OK

James, June 10, 2005 - 8:48 am UTC

Hi Tom,
My requirement is
"I would like to get grouped rows from EMP table
where no individual makes more than 3000"

I tried this query but getting errors.

SQL> select deptno,avg(sal)
  2  from emp
  3  group by deptno
  4  having sal <= 3000
  5  /
having sal <= 3000
       *
ERROR at line 4:
ORA-00979: not a GROUP BY expression
 

Tom Kyte
June 10, 2005 - 10:37 am UTC

where sal <= 3000

not having. having works on the grouped columns or the aggregates only. "sal" no longer exists when you get to the having.


but, is the question really:

I want the avg salary for all employees in any dept where the maximum salary in that dept is <= 3000


that would be having max(sal) <= 3000
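
A minimal sketch against the standard SCOTT.EMP table makes the distinction concrete: WHERE filters individual rows before grouping, while HAVING filters whole groups after aggregation.

```sql
-- average salary per dept, counting only employees who earn <= 3000
select deptno, avg(sal)
  from emp
 where sal <= 3000
 group by deptno;

-- average salary only for depts whose highest-paid employee earns <= 3000
select deptno, avg(sal)
  from emp
 group by deptno
having max(sal) <= 3000;
```

The first query can still report a department that contains a 5000-salary employee (that row is merely excluded from the average); the second drops such departments entirely.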

OK

A reader, June 10, 2005 - 12:54 pm UTC

Hi Tom,
My requirement is:
I want to exclude grouped rows when at least one individual
in each group gets a salary of more than 3000.


Tom Kyte
June 10, 2005 - 3:47 pm UTC

that would be having max(sal) <= 3000 then.

Query

Laxman Kondal, June 14, 2005 - 4:33 pm UTC

Hi Tom

I wanted to find a way to fetch from emp table where I pass deptno. And if deptno is not in emp then fetch all.

And this is what I did. Since deptno=50 does not exist, the 'OR' clause 1=1 brings all results.

But when I use deptno=10 or 1=1, it also fetches all deptnos.
scott@ORA9I> select ename, empno from emp where deptno=10;

scott@ORA9I> select ename, empno, deptno from emp where deptno=10;

I think my concept of the 'OR' clause is messed up.

ENAME           EMPNO     DEPTNO
---------- ---------- ----------
CLARK            7782         10
KING             7839         10
MILLER           7934         10

3 rows selected.

scott@ORA9I> select ename, empno, deptno from emp where deptno=50;

no rows selected

scott@ORA9I> select ename, empno, deptno from emp where deptno=50 or 1=1;

ENAME           EMPNO     DEPTNO
---------- ---------- ----------
SMITH            7369         20
ALLEN            7499         30
WARD             7521         30
JONES            7566         20
MARTIN           7654         30
BLAKE            7698         30
CLARK            7782         10
SCOTT            7788         20
KING             7839         10
TURNER           7844         30
ADAMS            7876         20
JAMES            7900         30
FORD             7902         20
MILLER           7934         10

14 rows selected.

scott@ORA9I> select ename, empno, deptno from emp where deptno=10 or 1=1;

ENAME           EMPNO     DEPTNO
---------- ---------- ----------
SMITH            7369         20
ALLEN            7499         30
WARD             7521         30
JONES            7566         20
MARTIN           7654         30
BLAKE            7698         30
CLARK            7782         10
SCOTT            7788         20
KING             7839         10
TURNER           7844         30
ADAMS            7876         20
JAMES            7900         30
FORD             7902         20
MILLER           7934         10

14 rows selected.

scott@ORA9I>

Thanks and regards.

Tom Kyte
June 14, 2005 - 4:46 pm UTC

(some 'requirements' just boggle my mind, I'll never figure out the business case behind this one)

do you really mean this? or can it be more like:

if I pass in null, i want all rows, else I want only the matching rows?

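
If the requirement really is "null means all rows, otherwise only the matching rows", a single predicate usually does it. This is just a sketch, assuming deptno is never null in the table (a nullable column would need different handling, since NULL = NULL is never true):

```sql
select ename, empno, deptno
  from emp
 where deptno = nvl(:x, deptno);
```

With :x bound to null, every row passes; with :x = 10, only deptno 10 rows pass.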
Query

Laxman Kondal, June 15, 2005 - 8:26 am UTC

Hi Tom

You are right, and sometimes I want to find a shorter, correct way to work around it. I could have two blocks in an if statement and either one would work, but that's double coding; if I just add a 'where' clause, will it work?

What I am trying to get is that if I pass deptno which exists then get me that dept and if not exists then from all deptno.

Thanks for the hint and it works

scott@ORA9I> CREATE OR REPLACE PROCEDURE Demo(p_dep IN NUMBER )
  2  AS
  3      v1 NUMBER := p_dep;
  4  BEGIN
  5      FOR i IN (SELECT *
  6                  FROM Emp
  7                 WHERE (CASE WHEN p_dep = 0
  8                             THEN v1
  9                             ELSE deptno
 10                        END ) = p_dep )
 11
 12      LOOP
 13          dbms_output.put_line(i.ename||' - '||i.deptno);
 14      END LOOP;
 15  END Demo;
 16  /

Procedure created.

scott@ORA9I> exec demo(10)
CLARK - 10
KING - 10
MILLER - 10

PL/SQL procedure successfully completed.

scott@ORA9I> exec demo(20)
SMITH - 20
JONES - 20
SCOTT - 20
ADAMS - 20
FORD - 20

PL/SQL procedure successfully completed.

scott@ORA9I> exec demo(30)
ALLEN - 30
WARD - 30
MARTIN - 30
BLAKE - 30
TURNER - 30
JAMES - 30

PL/SQL procedure successfully completed.

scott@ORA9I> exec demo(0)
SMITH - 20
ALLEN - 30
WARD - 30
JONES - 20
MARTIN - 30
BLAKE - 30
CLARK - 10
SCOTT - 20
KING - 10
TURNER - 30
ADAMS - 20
JAMES - 30
FORD - 20
MILLER - 10

PL/SQL procedure successfully completed.

scott@ORA9I>

Thanks and regards.

Tom Kyte
June 15, 2005 - 10:05 am UTC

guess I'll recommend

ops$tkyte@ORA9IR2> variable x number
ops$tkyte@ORA9IR2> exec :x := 50;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> declare
  2      l_cursor sys_refcursor;
  3      l_rec    emp%rowtype;
  4  begin
  5      open l_cursor for select * from emp where deptno = :x;
  6      fetch l_cursor into l_rec;
  7      if ( l_cursor%notfound )
  8      then
  9          close l_cursor;
 10          open l_cursor for select * from emp;
 11          fetch l_cursor into l_rec;
 12      end if;
 13      loop
 14          exit when l_cursor%notfound;
 15          dbms_output.put_line( l_rec.deptno || ', ' || l_rec.ename );
 16          fetch l_cursor into l_rec;
 17      end loop;
 18      close l_cursor;
 19  end;
 20  /
20, SMITH
30, ALLEN
30, WARD
20, JONES
30, MARTIN
30, BLAKE
10, CLARK
20, SCOTT
10, KING
30, TURNER
20, ADAMS
30, JAMES
20, FORD
10, MILLER
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec :x := 10
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> /
10, CLARK
10, KING
10, MILLER
 
PL/SQL procedure successfully completed.
 

 

Query

Laxman Kondal, June 15, 2005 - 10:33 am UTC

Hi Tom

Thanks and you showed one more way to do this.

Thanks and regards

A reader, June 15, 2005 - 4:09 pm UTC

Hi Tom,

I have 2 tables like

create table t1
(
id1 number,
name varchar2(60),
id2 number
);

create table t2
(
id2 number,
xxx char(1)
);

insert into t1 values (1, 'firstname', 1);
insert into t1 values (2, 'secondname', 2);
insert into t2 values (1, 'A');
insert into t2 values (1, 'B');
insert into t2 values (2, 'A');
insert into t2 values (2, 'B');
commit;

I have a query

select t1.id1, t1.name, t1.id2, t2.xxx from t1, t2
where t1.id2 = t2.id2
/

       ID1 NAME                                  ID2 X
---------- ------------------------------ ---------- -
         1 firstname                               1 A
         1 firstname                               1 B
         2 secondname                              2 A
         2 secondname                              2 B


Is it possible to get something like

ID1 NAME         ID2 XA XB
  1 firstname      1 A  B
  2 secondname     2 A  B

The values for columns ID1, NAME, ID2 for all rows for A and B will always be the same.

Thanks.



Tom Kyte
June 16, 2005 - 3:31 am UTC

and what happens if firstname has 3 rows?

A reader, June 16, 2005 - 10:12 am UTC

No, firstname will always have only 2 rows, and so will secondname.

Thanks.

Tom Kyte
June 16, 2005 - 1:09 pm UTC

ops$tkyte@ORA9IR2> Select id1, name, id2, max( decode(rn,1,xxx) ) c1, max(decode(rn,2,xxx)) c2
  2    from (
  3  select t1.id1, t1.name, t1.id2, t2.xxx ,
  4         row_number() over (partition by t1.id1, t1.name, t1.id2 order by t2.xxx) rn
  5    from t1, t2
  6   where t1.id2 = t2.id2
  7         )
  8   group by id1, name, id2
  9  /
 
       ID1 NAME                                  ID2 C C
---------- ------------------------------ ---------- - -
         1 firstname                               1 A B
         2 secondname                              2 A B
 


there is of course an easier answer if xxx is always A/B and nothing but nothing else. 

A reader, June 16, 2005 - 2:49 pm UTC

Yes xxx will not have anything else other than A/B. Is there an easier way to do this in that case?

Thanks a lot for your help Tom.


Tom Kyte
June 16, 2005 - 3:14 pm UTC

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select t1.id1, t1.name, t1.id2, max(decode(t2.xxx,'A','A')) xa, max(decode(t2.xxx,'B','B')) xb
  2    from t1, t2
  3   where t1.id2 = t2.id2
  4   group by t1.id1, t1.name, t1.id2
  5  /
 
       ID1 NAME                                  ID2 X X
---------- ------------------------------ ---------- - -
         1 firstname                               1 A B
         2 secondname                              2 A B
 

A reader, June 16, 2005 - 4:07 pm UTC

I have a similar requirement but I am selecting around 20 fields and so my "group by" clause also has 20 columns. If I try a similar query with the 20 fields, it is taking a very long time, do I have to do anything special for that? Please let me know.

Thanks.

Tom Kyte
June 16, 2005 - 9:51 pm UTC

well, very little data to go on.

but, grouping by 20 columns probably leads to "lots more records" than grouping by say 2 right.

so, bigger result set, more temp space, more work, more stuff.


tkprof it, preferably with a 10046 level 12 trace and see what you are waiting on (assuming software written this century like 9ir2 so the waits show up nicely in the tkprof report)

A reader, June 17, 2005 - 3:55 pm UTC

Our database is 10g. I set up the session for a 10046 level 12 trace and I am attaching below the last part of the tkprof output. Is this what I should be looking at for the wait times? Please let me know, and if it is, tell me what I should look for to determine how to make my query faster.

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        8      0.00       0.00          0          0          0          0
Execute     10      0.00       0.02          0         11         10         10
Fetch        5     39.51     427.26     566787     575769          0         10
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       23     39.51     427.30     566787     575780         10         20

Misses in library cache during parse: 5
Misses in library cache during execute: 4

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                      12        0.00          0.00
SQL*Net message from client                    11       13.07         13.14
direct path write temp                      10800        0.02          3.35
db file sequential read                       732        0.03          4.49
db file scattered read                      38827        0.06        389.03
direct path read temp                          22        0.04          0.33


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        7      0.00       0.00          0          0          0          0
Execute     14      0.01       0.01          0          2          8          8
Fetch        6      0.00       0.00          0         15          0          6
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       27      0.01       0.01          0         17          8         14

Misses in library cache during parse: 6
Misses in library cache during execute: 5

15 user SQL statements in session.
1 internal SQL statements in session.
16 SQL statements in session.
********************************************************************************
Trace file: wicis_ora_1116.trc
Trace file compatibility: 9.00.01
Sort options: default

3 sessions in tracefile.
18 user SQL statements in trace file.
1 internal SQL statements in trace file.
16 SQL statements in trace file.
13 unique SQL statements in trace file.
51339 lines in trace file.

Thanks.


Tom Kyte
June 17, 2005 - 4:17 pm UTC

ops$tkyte@ORA9IR2> select 38827/389.03 from dual;
 
38827/389.03
------------
  99.8046423
 
ops$tkyte@ORA9IR2> select 389.03/38827 from dual;
 
389.03/38827
------------
  .010019574


You are doing 100 multiblock IOs per second, or about 0.01 seconds per IO.  Does that seem "fast or correct" given your hardware?

but therein lies your low hanging fruit: look for the doers of IO and see what you can do to have them do less of it. 

A reader, June 20, 2005 - 10:50 am UTC

I am sorry, but I am not able to understand much from your response. Can you elaborate on it a little, please?

Thanks.

Tom Kyte
June 20, 2005 - 12:43 pm UTC

I was saying "you are doing a lot of physical IO, they are taking about 1/100th of a second per IO, is that reasonable for your hardware and have you looked at tuning SQL in order to reduce the number of physical IO's you have to do"



A reader, June 20, 2005 - 11:30 am UTC

Regd the following query from previous followup

select t1.id1, t1.name, t1.id2,
       max(decode(t2.xxx,'A','A')) xa,
       max(decode(t2.xxx,'B','B')) xb
  from t1, t2
 where t1.id2 = t2.id2
 group by t1.id1, t1.name, t1.id2
/

I have about 10 mil rows in t2. I have an index on t2(xxx). But it is still doing a table scan and the query results are very slow. Should I create some sort of functional index or something ?

Thanks.


Tom Kyte
June 20, 2005 - 12:53 pm UTC

I would be seriously disappointed if it touched an index, if you want slow, make it use an index.


please do this:

a) say out loud "full scans are not evil"
b) say out loud "indexes are not all goodness"
c) goto (a) until you believe what you are saying out loud

read this too:
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154 <code>


Now, perhaps you are hashing to disk and would benefit from a larger hash_area_size/pga_aggregate_target

A reader, June 21, 2005 - 9:04 am UTC

Hi Tom,

Will you please help me with this. I have a table that looks like below:

col1 col2
---- ----
   1    1
   2    2
   3    3
   4    4

I would like a select statement that displays:

col1 col2
---- ----
   1    1
   2    1
   3    1
   4    1

In other words, I'd like the value in col2 to show the smallest value of all rows.

Thank you very much.

Tom Kyte
June 21, 2005 - 5:01 pm UTC

select col1, min(col2) over () from t;

A reader, June 21, 2005 - 2:49 pm UTC

Hi Tom,

I have the foll requirement:

create table t1
(
datecol date,
idcol1 number,
idcol2 number,
col1 varchar2(20),
charcol char(1),
opcol varchar2(10)
);

I am doing the following queries in a single procedure all based on t1.

select max(datecol), max(idcol1), max(idcol2)
into tdatecol, tidcol1, tidcol2
from t1
where col1 = '1234'
and charcol = 'A';

select max(datecol), max(idcol2)
into tdatecol1, tidcol21
from t1
where idcol1 = tidcol1
and charcol = 'B';

select opcol into topcol
from t1
where idcol2 = tidcol2;

if (tidcol21 is not null) then
select opcol into topcol1
from t1
where idcol2 = tidcol21;
end if;

Is it possible for me to put this all in a single query? Please help.

Thanks.


Tom Kyte
June 23, 2005 - 12:03 pm UTC

yes, but I think it would be just layers of scalar subqueries, so not necessarily any more efficient.


select tdatecol, tidcol1, tidcol2,
       tdatecol1, tidcol21,
       (select opcol from t1 c where c.idcol2 = tidcol2) topcol,
       case when tidcol21 is not null then
            (select opcol from t1 c where c.idcol2 = tidcol21)
       end topcol1
  from (
select tdatecol, tidcol1, tidcol2,
       to_date(substr(trim(data),1,14),'yyyymmddhh24miss') tdatecol1,
       to_number(substr(data,15)) tidcol21
  from (
select tdatecol, tidcol1, tidcol2,
       (select nvl(to_char( max(datecol), 'yyyymmddhh24miss' ),rpad(' ',14)) || max(idcol2)
          from t1 b
         where b.idcol1 = a.tidcol1
           and b.charcol = 'B') data
  from (
select max(datecol) tdatecol, max(idcol1) tidcol1, max(idcol2) tidcol2
  from t1
 where col1 = '1234'
   and charcol = 'A'
       ) A
       )
       )
/


select col1, min(col2) over () from t;

A reader, June 22, 2005 - 11:22 am UTC

Thank you for your suggestion, but how can we do that on a non-Enterprise version? Below is what I have:

SQL> select banner from v$version;

BANNER
----------------------------------------------------------
Oracle8i Release 8.1.7.4.1 - Production
PL/SQL Release 8.1.7.4.0 - Production
CORE    8.1.7.2.1       Production
TNS for 32-bit Windows: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

and

SQL> select * from v$option
PARAMETER                        VALUE
---------------------------      -----------
...                              ...
OLAP Window Functions            FALSE

I appreciate your help. 

Tom Kyte
June 23, 2005 - 1:19 pm UTC

in 9i you have analytics with standard edition

in 8i without them, you are stuck doing things the old fashioned way


select col1, min_col2
from t, (select min(col2) min_col2 from t)


or

select col1, (select min(col2) from t)
from t;



new query

Sudha Bhagavatula, June 22, 2005 - 1:32 pm UTC

I have a table this way:

eff_date term_date contr_type
01/01/2001 01/31/2001 12
01/01/2001 01/31/2001 13

I want the output to be like this:

01/01/2001 01/31/2001 12,13

Can this be done through sql ? I use Oracle 9i

Tom Kyte
June 23, 2005 - 1:40 pm UTC

search site for

stragg

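For readers on later releases: 11gR2 and up ship LISTAGG, which does this kind of string aggregation without a user-defined stragg function. A sketch, assuming the two-date table above is named t:

```sql
select eff_date, term_date,
       listagg(contr_type, ',') within group (order by contr_type) contr_types
  from t
 group by eff_date, term_date;
```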

Look up Pivot Table

DHP, June 23, 2005 - 1:08 pm UTC

Search for Pivot Table or check out this thread:
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:766825833740 <code>

select col1, min(col2) over () from t;

A reader, June 23, 2005 - 4:52 pm UTC

That works great! Thanks again for your help.

SQL Anomaly

reader, June 23, 2005 - 5:37 pm UTC

Tom,

I am running this query and getting a different answer every time. Is it a bug or a feature? Mind you, I am the only one logged in.

2. Does DISTINCT apply only to the first following column or to all columns in the list? 

Thank you in advance.

  1  SELECT DISTINCT loan_id, program_name
  2  FROM   loan_history
  3* WHERE ROWNUM < 30

14:16:31 SQL> /

LOAN_ID              PROGRAM_NAME
-------------------- ------------------------------
0000000000           FR 15 Yr CF
1111111111           FR 15 Yr CF
2222222222           FR 15 Yr CF
3333333333           FR 15 Yr CF
4444444444           FR 25-30 Yr CF
5555555555           FR CF 20-30 Yr
6666666666           FR 15 Yr CF
7777777777           CMT ARM 3/1 CF (salable)
8888888888           FR 15 Yr CF

9 rows selected.

14:16:34 SQL> l
  1  SELECT DISTINCT loan_id, program_name
  2  FROM   loan_history
  3* WHERE ROWNUM < 30
14:17:22 SQL> /

LOAN_ID              PROGRAM_NAME
-------------------- ------------------------------
5555555555           FR CF 20-30 Yr
4444444444           FR 25-30 Yr CF
0000000000           FR 15 Yr CF
4141414141           30 Yr Fixed FHA/VA - Salable
4444444444           FR 25-30 Yr CF
4141414141           FR 15 Yr CF
5353535353           FR 15 Yr CF
5454545454           FR 25-30 Yr CF
6565656565           FR 25-30 Yr CF
6767676767           FR 15 Yr CF
8787878787           FR 25-30 Yr CF

11 rows selected.

14:17:23 SQL> l
  1  SELECT DISTINCT loan_id, program_name
  2  FROM   loan_history
  3* WHERE ROWNUM < 30
14:17:25 SQL> /

LOAN_ID              PROGRAM_NAME
-------------------- ------------------------------
3232323232           FR 15 Yr CF
3434343434           FR 15 Yr CF
4545454545           FR 15 Yr CF
5656565656           FR 15 Yr CF
6767676767           FR CF 20-30 Yr
7878787878           FR 15 Yr CF
8989898989           FR CF 20-30 Yr

7 rows selected.

14:17:26 SQL> 

Tom Kyte
June 23, 2005 - 7:41 pm UTC

no bug, they are all reasonable.

"get 30 rows"
"distinct them"

or better yet:

"get ANY 30 rows"
"distinct them"


what are you trying to do exactly?

Distinct Question

reader, June 24, 2005 - 7:02 pm UTC

Tom,

Essentially the query had to answer a very simple question, "show me all distinct loan ids and programs", but I started testing and could not understand why the same query returns a different number of rows.
I always thought that a ROWNUM predicate returns a sequential number of rows. Why would Oracle select a random set of rows when I asked to show me all 29 or fewer?

2. It is still not clear whether DISTINCT/UNIQUE applies only to the first column that follows, or to all columns in the select list following DISTINCT/UNIQUE.

Thank you,

Tom Kyte
June 24, 2005 - 9:51 pm UTC

you asked for 29 rows

and then distincted them. It is free to get any 29 rows it wanted to get, and distinct them. It could return anywhere between 0 and 29 rows at any time.


distinct applies to the entire result set, all columns.
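
If the intent was "the first 29 distinct combinations" rather than "the distinct values among some 29 rows", push the DISTINCT into an inline view and apply ROWNUM to its result; a sketch, with an ORDER BY added so which 29 rows you get is deterministic:

```sql
select *
  from (select distinct loan_id, program_name
          from loan_history
         order by loan_id)
 where rownum < 30;
```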

RE: Distinct and Group By

reader, June 27, 2005 - 11:57 am UTC

Tom,

1. Would it be safe to say that DISTINCT is a sort of GROUP BY clause? If so, how is it internally different or similar?
2. If Oracle is free to give me any number of rows below 30, how would I find out the exact count of unique last names of occupants in a 100-story building below the 30th floor?

Thanks,

Tom Kyte
June 27, 2005 - 1:52 pm UTC

1) grouping by all columns in the select list and using distinct would have the same net effect, yes.

select distinct <100 columns> from table

is a lot easier to type though :)

2) oracle is giving you 30 rows (assuming the table has more than 30 rows). exactly thirty rows. then you asked for the distinct rows from that. that could be any number of rows less than or equal to 30.

if you had a table with last names and a floor attribute, it would be

select count(distinct last_name)
from t
where floor <= 30

OK

Raju, June 28, 2005 - 1:03 pm UTC

Hi Tom,
My requirement is
"Retrieve all employees from emp table
who have joined before their manager".

I used this query.
Is this correct??
Do you have a better way to put this query??

SQL> select * from emp e
  2  where hiredate <= all(
  3  select hiredate from emp where empno in(select distinct mgr from emp)
  4  and deptno = e.deptno)
  5  /

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        880                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10

6 rows selected.

 

Tom Kyte
June 28, 2005 - 1:20 pm UTC

no.

you are getting all of the employees hired before ALL managers in the deptno they work in. Given the data, it might appear to work -- but there is no rule that says the manager has to work in the same deptno.


I would have joined

select *
from (
select e1.*, e2.hiredate hd2
from emp e1, emp e2
where e1.mgr = e2.empno(+)
)
where hiredate < hd2 or hd2 is null

probably.
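The requirement (employees hired before their manager, plus those with no manager row to compare against) can be sanity-checked with a minimal outer-join sketch. SQLite stands in for Oracle, so the (+) syntax becomes LEFT JOIN; the employees and dates are invented, stored as ISO strings so they compare correctly as text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno int, ename text, mgr int, hiredate text)")
conn.executemany("insert into emp values (?, ?, ?, ?)", [
    (7839, "KING",  None, "1981-11-17"),  # no manager at all
    (7566, "JONES", 7839, "1981-04-02"),  # hired before KING
    (7902, "FORD",  7566, "1981-12-03"),  # hired after JONES
    (7369, "SMITH", 7902, "1980-12-17"),  # hired before FORD
])

# Outer join each employee to his manager's row, keep the ones hired
# before the manager (or with no manager row to compare against).
names = [r[0] for r in conn.execute("""
    select e1.ename
    from emp e1 left join emp e2 on e1.mgr = e2.empno
    where e1.hiredate < e2.hiredate or e2.empno is null
    order by e1.ename
""")]
```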


query

mo, July 06, 2005 - 11:46 am UTC

Tom:

I have two tables ORG and CONTACT with one to many relationship between them. One organization can have multiple contacts. They are linked using ORGCD.

ORG
-----
orgcd varchar2(10)
name varchar2(100)

Contact
-------
contactid varchar2(20)
orgcd varchar2(10)
contact_type varchar2(1)

Contact type can be either "M" for Main or "A" for additional.

I am trying to get a query to get the organization record with the main contact.

select a.orgcd,a.name,b.contactid from org a, contact b
where a.orgcd=b.orgcd(+) and b.contact_Type='M' and a.orgcd='122';

The problem is that if i have an organization with no main contact I will not get the organzation data.

IF i used contact_Type(+)='M' it gives me all the contacts.

How can you solve this problem?


Tom Kyte
July 06, 2005 - 12:33 pm UTC

you already did????

other ways could be

select ...
from org a, ( select * from contact where contact_type = 'M' )b
where a.orgcd = b.orgcd(+) and a.orgcd = '122';

or

select ..
from org a, contact b
where a.orgcd=b.orgcd(+) and (b.contact_type='M' or b.contact_type is null) and a.orgcd='122'


but what you did is correct and proper.
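The first alternative (filtering the contacts down *before* the outer join) is the one that preserves organizations with no main contact; a minimal SQLite sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table org (orgcd text, name text)")
conn.execute("create table contact (contactid text, orgcd text, contact_type text)")
conn.execute("insert into org values ('122', 'Acme')")
conn.execute("insert into contact values ('c1', '122', 'A')")  # no 'M' contact

# Filter contact down to main contacts before the outer join, so an
# organization with no main contact still comes back, with a NULL contactid.
row = conn.execute("""
    select a.orgcd, a.name, b.contactid
    from org a
    left join (select * from contact where contact_type = 'M') b
           on a.orgcd = b.orgcd
    where a.orgcd = '122'
""").fetchone()
```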

query

mo, July 06, 2005 - 1:29 pm UTC

Tom:

My query did not give me what I wanted.

When I added (contact_type = 'M' or contact_type is null) it did give me what I want.

Thanks a lot,

the two queries are not equivalent

Daniell, July 06, 2005 - 5:05 pm UTC

16:56:30 SCOTT@ora10g > select * from org;

ORGCD
----------
122

Elapsed: 00:00:00.01
16:57:10 SCOTT@ora10g > select * from contact;

ORGCD C
---------- -
122 A

Elapsed: 00:00:00.01
16:57:17 SCOTT@ora10g > select *
16:57:33 2 from org a, ( select * from contact where contact_type = 'M' ) b
16:57:33 3 where a.orgcd = b.orgcd(+) and a.orgcd = '122'
16:57:34 4 /

ORGCD ORGCD C
---------- ---------- -
122

Elapsed: 00:00:00.01
16:57:35 SCOTT@ora10g > select *
16:58:01 2 from org a, contact b
16:58:01 3 where a.orgcd=b.orgcd(+) and (b.contact_Type='M' or b.contact_type
is null) and a.orgcd = '122';

no rows selected

Elapsed: 00:00:00.01

Tom Kyte
July 07, 2005 - 8:57 am UTC

you are correct, went too fast.

re-reading, it would appear that

16:57:17 SCOTT@ora10g > select *
16:57:33 2 from org a, ( select * from contact where contact_type = 'M' ) b
16:57:33 3 where a.orgcd = b.orgcd(+) and a.orgcd = '122'
16:57:34 4 /

ORGCD ORGCD C
---------- ---------- -
122

is the most likely thing he wants.

Insert from another table with more number of columns

Thiru, July 08, 2005 - 12:03 pm UTC

Tom,

I am not sure whether my question fits this thread, but if you could reply I would greatly appreciate it.

I have a table :
create table temp(a number, b varchar2(10));
Another table :
create table temp_mv
as
select rowid temp_rid,t.* from temp t where 1=2;

This table temp_mv is populated with data from the temp table, based on varied conditions, and then moved to another database where there is a temp table exactly like the one above.
I would like to insert data into the remote temp table from temp_mv, but without the temp_rid column. Is there a way to do this without writing out all the columns? The tables I am actually working with have lots of columns (200+).

insert into remote.temp
select a,b from temp_mv;
(if I use one like the above, I will have to specify all 200 columns just to avoid the column temp_rid).

Thanks so much for your time.


Tom Kyte
July 08, 2005 - 12:59 pm UTC

when faced with "lots of columns", why not just write a piece of code to write the sql from the dictionary -- I do it all of the time.

Other alternative would be to create a one time view without the columns you don't want.

insert into t select * from another_t -- that is considered a relatively "bad practice". You would be best served with

insert into t ( c1, c2, c3, .... c200 ) select c1, c2, c3, .... c200 from another_t;

That way you are protected from someone reordering the columns, adding a column, etc. over time.

Thiru, July 08, 2005 - 1:02 pm UTC

Thanks. The sql from the dictionary looks comfortable. The column_id is what I have to order by, right?


Tom Kyte
July 08, 2005 - 3:02 pm UTC

yes
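The same generate-the-SQL-from-the-dictionary idea, sketched against SQLite: PRAGMA table_info plays the role of Oracle's USER_TAB_COLUMNS, and its cid column the role of COLUMN_ID (table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table temp_mv (temp_rid text, a integer, b text)")

# Rows come back ordered by cid, the dictionary's column position;
# drop the one column we do not want and glue the rest together.
cols = [r[1] for r in conn.execute("pragma table_info(temp_mv)")
        if r[1] != "temp_rid"]
sql = "insert into temp ({0}) select {0} from temp_mv".format(", ".join(cols))
```

With 200+ columns the generated statement is then executed (or pasted into a script) instead of being typed by hand.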

A different sql question!!

Eray, July 12, 2005 - 5:33 am UTC

Tom;
I have a question about SQL -- customizing the sort order. Let me give you an example. I am selecting data that starts with A, B, C ... Z. I want to sort my data A to Z, except that data starting with K should come at the end. Can I do that, and if so, how?

Tom Kyte
July 12, 2005 - 4:59 pm UTC

order by decode( substr(column,1,1), 'K', 2, 1 ), column
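The same custom-ordering trick in Python terms, for comparison: sort on a composite key whose first component is 2 for names starting with 'K' and 1 for everything else (names invented):

```python
names = ["Kim", "Adams", "Lee", "King", "Baker"]

# Non-K names sort first (group 1, then alphabetically); K names come last.
ordered = sorted(names, key=lambda s: (2 if s.startswith("K") else 1, s))
```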



query efficiency

J, July 13, 2005 - 1:10 pm UTC

regarding query efficiency:

is there any resource saving during query execution when using subqueries that pre-select columns, versus not using subqueries?

for instance:

1. select a.col1, b.col2, b.col3
from a, b, c
where join-criteria

2. select a1.col2,a1.col3, b1.col1
from (select a.col2, a.col3 from a ...) a1,
(select b.col1, b.col2 from b ...) b1
where join on a1 and b1

For what it is worth, we get very complex queries from our analysts, joining more than 10 tables, where 2, 3, or even 4 of them are big tables (more than a GB in size, 100+ columns, a billion records).

Thanks for any suggestion.


Tom Kyte
July 13, 2005 - 1:32 pm UTC

no, those two queries are identical to the optimizer.

How to fine tune this query ?

Parag Jayant Patankar, July 14, 2005 - 6:07 am UTC

Hi Tom,

I have a very large transaction table ( v8spd800 ) consisting of

branchcd, client no, client sub no, currency, transaction date and transaction amount. I have to give the sum of transaction amounts, bucketwise, for a particular month

For e.g.
1. sum of transactions where amt is less than 500000
2. sum of transactions where amt is >= 500000 and less than 1000000

For this I have written following SQL

select
branchname,
branchaddr1,
branchaddr2,
...
..
( select nvl(sum(v08203),0)
from v8spd800, tdf31
where v09000 between substr(c65401,1,6)||'01' and substr(c65401,1,6)||'30'
and v00090 = branchcd
and v09060 < 50000000 ) x1,
( select sum(v08203)
from v8spd800, tdf31
where v09000 between substr(c65401,1,6)||'01' and substr(c65401,1,6)||'30'
and v00090 = branchcd
and v09060 >= 50000000 and v09060 < 100000000) x2,
( select nvl(sum(v08203),0)
from v8spd800, tdf31
where v09000 between substr(c65401,1,6)||'01' and substr(c65401,1,6)||'30'
and v00090 = branchcd
and v09060 >= 100000000 and v09060 < 250000000) x3
from branchmst
/

But I am sure this kind of SQL is not "good" SQL, because it accesses the same table again and again.

Can you show me how to write efficient SQL to get these bucketwise totals?

regards & thanks
pjp


Tom Kyte
July 14, 2005 - 10:45 am UTC

i'm missing something -- cartesian joins in the scalar subqueries? I cannot tell what comes from where? are the scalar subqueries correlated or standalone?

use correlation names on all columns
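Pending that clarification, one common rewrite is worth sketching: conditional aggregation folds the three scalar subqueries into a single pass over the transaction table. A SQLite sketch with invented branch data and smaller thresholds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table txn (branchcd text, amt integer)")
conn.executemany("insert into txn values (?, ?)", [
    ("B1", 100), ("B1", 600000), ("B1", 900000), ("B2", 400000),
])

# One scan of the table; each CASE expression picks out its bucket's rows.
rows = conn.execute("""
    select branchcd,
           coalesce(sum(case when amt <  500000 then amt end), 0) as x1,
           coalesce(sum(case when amt >= 500000 and amt < 1000000
                        then amt end), 0) as x2
    from txn
    group by branchcd
    order by branchcd
""").fetchall()
```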

SQL Query

Parag Jayant Patankar, July 14, 2005 - 11:08 am UTC

Hi Tom,

Regarding your answer "i'm missing something -- cartesian joins in the scalar subqueries? I cannot tell what comes from where? are the scalar subqueries correlated or standalone?"

Sorry, I was not clear. In my query the transaction table is v8spd600, which is joined to tdf31. Table tdf31 holds only dates ( Current Processing Date, Last Processing Date, Last Processed Month, etc. ) and has only one record. As this query will be generated for the previous month, I have not hardcoded a date in my scalar subquery.

Sorry, I have not understood your question about the queries being "correlated or standalone".

Basically I want the "amount bucketwise number of transactions" from the transaction table, which has many transaction records.

regards & thanks
pjp

Tom Kyte
July 14, 2005 - 11:26 am UTC

Please correlate the column names for us. In the query. (get used to doing that on a regular basis! )

Like and between

arjun, August 02, 2005 - 5:36 pm UTC

hi:

I need help with framing a Query.

I have two input values, 'B' and 'L'.

I have to find usernames starting with 'B' or 'L', and also those that sort between usernames starting with 'B' and usernames starting with 'L'


Result should have Ben, David , Elina , Henry and Luca
( assuming these names are there in a table )

select * from dba_users where username like 'B%'
.....

Thanks

Tom Kyte
August 02, 2005 - 7:22 pm UTC

where username >= 'B' and username < 'M'

Like and between

arjun, August 02, 2005 - 5:44 pm UTC

I tried this : There might be an easier and better approach.

select du.username
from dba_users du ,
( select username from dba_users where username like 'A%' and rownum=1 ) start_tab ,
( select username from dba_users where username like 'S%' and rownum=1 ) end_tab
where du.username between start_tab.username and end_tab.username

II ]

Why doesn't this query return names starting with 'H'? I have records starting with 'H'.

select *
from dba_users
where username >= 'A%'
and username <= 'H%'
order by username

Thanks

Tom Kyte
August 02, 2005 - 7:28 pm UTC

% is only meaningful in "like" as a wildcard.

you want where username >= 'A' and username < 'I'

to get A...H names (in ascii)
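The difference is easy to see in plain string comparisons (names invented): '%' is a literal character outside of LIKE, and since '%' sorts before the letters, "H%" sorts before almost every real H-name:

```python
names = ["ADAMS", "BEN", "HENRY", "IVAN", "LUCA"]

# Broken: '%' compared literally, so "HENRY" > "H%" and drops out.
wrong = [n for n in names if "A%" <= n <= "H%"]

# Half-open range on the leading letter: all A... through H... names.
right = [n for n in names if "A" <= n < "I"]
```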

Maybe wont hit the first A-user

Lolle, August 03, 2005 - 5:42 am UTC

If you use user like 'A%' and rownum = 1, you don't know which user you will get -- maybe not the first A in alphabetical order. With that approach you must order by user inside an inline view and apply rownum outside it, because ORDER BY is applied after ROWNUM.

A question for you

Murali, August 11, 2005 - 1:08 am UTC

Tom,

I got a table with two columns

Customer Category
C1 1
C2 2
C1 2
C1 3
C2 3
C2 5
C3 4
C4 3
C4 4
C5 1
C5 5
C3 3
C6 1
C6 5

i.e.,
C1 - 1,2,3
C2 - 2,3,5
C3 - 3,4
C4 - 3,4
C5 - 1,5
C6 - 1,5

When I pass category as 1,5
My output should be

C5 and C6 - 1,5
C1 - 1
C2 - 5

If selected categories are 1,3,4,5
The output should be
C1 - 1,3
C2 - 3
C3, C4 - 3,4
C5,C6 - 1,5

Please help me out in this regard.



Tom Kyte
August 11, 2005 - 9:32 am UTC

i would, if I had create tables and insert intos.




Random guess

Bob B, August 11, 2005 - 10:02 am UTC

CREATE TABLE VALS(
p1 VARCHAR2(2),
p2 NUMBER(1)
)
/

INSERT INTO VALS VALUES('C1', 1 );
INSERT INTO VALS VALUES('C2', 2 );
INSERT INTO VALS VALUES('C1', 2 );
INSERT INTO VALS VALUES('C1', 3 );
INSERT INTO VALS VALUES('C2', 3 );
INSERT INTO VALS VALUES('C2', 5 );
INSERT INTO VALS VALUES('C3', 4 );
INSERT INTO VALS VALUES('C4', 3 );
INSERT INTO VALS VALUES('C4', 4 );
INSERT INTO VALS VALUES('C5', 1 );
INSERT INTO VALS VALUES('C5', 5 );
INSERT INTO VALS VALUES('C3', 3 );
INSERT INTO VALS VALUES('C6', 1 );
INSERT INTO VALS VALUES('C6', 5 );

SELECT
DISTINCT
STRAGG( P1 ) OVER ( PARTITION BY P2 ORDER BY P1 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) P1,
P2
FROM (
SELECT DISTINCT P1, STRAGG( P2 ) OVER ( PARTITION BY P1 ORDER BY P2 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) P2
FROM VALS
WHERE INSTR( ',' || :p_categories || ',', ',' || p2 || ',' ) > 0
)
ORDER BY P1

Works in 10gR1. STRAGG as defined here:
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2196162600402 <code>

Don't know if the datatypes are correct or if this will perform nicely. The format of the output wasn't clear either:

C1 and C2 ...
C1,C2 ...
C1, C2 ...

I'm not going to even try to figure that one out.

Tom Kyte
August 11, 2005 - 6:10 pm UTC

I would have used stragg and str2tbl, something like:

select customer, stragg( category )
from ( select customer, category
from t
where category in (select * from TABLE(str2tbl(:x)))
)
group by customer


with :x set to '1, 2, 3, 4' or whatever.

Two different answers to the same question

Bob B, August 11, 2005 - 10:31 pm UTC

Depends on whether the customers need to be grouped by their category list or not.

One query returns each customer and a list of their categories; the other returns each list of categories and the list of customers that match that list.
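Both groupings can be sketched in a few lines of Python over Murali's data (STRAGG's string concatenation replaced by plain lists; categories 1,5 requested):

```python
from collections import defaultdict

rows = [("C1", 1), ("C1", 2), ("C1", 3), ("C2", 2), ("C2", 3), ("C2", 5),
        ("C5", 1), ("C5", 5), ("C6", 1), ("C6", 5)]
wanted = {1, 5}

# Grouping 1: each customer with its list of matching categories.
per_customer = defaultdict(list)
for cust, cat in sorted(rows):
    if cat in wanted:
        per_customer[cust].append(cat)

# Grouping 2: each distinct category list with the customers sharing it.
per_list = defaultdict(list)
for cust, cats in sorted(per_customer.items()):
    per_list[tuple(cats)].append(cust)
```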

One Query

Murali, August 12, 2005 - 12:01 am UTC

Hello Tom,
First of all, thanks Bob for creating those insert statements for me -- very nice of you. Secondly, I need a generic function where I pass in the categories and get the output mentioned earlier. I want to use this function in Sybase, SQL Server and MySQL without making many changes. Please help me out in this regard.




Tom Kyte
August 12, 2005 - 8:37 am UTC

rolling on the floor laughing. (also known as ROTFL)....


sorry, don't see that happening.

On the topic of database independence ...

Bob B, August 12, 2005 - 11:28 am UTC

Why even pick a programming language for application development? I say we should not only be database neutral, but programming language neutral! What happens if Java is no longer supported or PHP developers become too expensive or ASP.net can no longer deliver the performance we need (etc)? Instead, let's write (lots of) code that will compile in any programming language. *dripping sarcasm*

Being database neutral means you have to write an application in several different languages that needs to do the same thing in about the same time in all of them. I doubt anyone would try to write an app that works in more than one programming language (maybe for fun, but not for a living). You'll see much of the same language and syntax in many languages:

for, while, try/catch/finally, switch, function, (), {}, ;, etc

Same thing for databases:
select, update, delete, from, where, and, or, group by, order by, etc

Query

mo, August 15, 2005 - 2:59 pm UTC

Tom:

I have this query that counts records based on a field value of null or "Open". However, I found that the query does not work for nulls; it needs "where field is null" instead of the equal sign.

Do I need to create 2 queries and write an IF statement, or is there a way to do this in one query?

IF (i_link_id = 12) THEN
v_request_status := null;
elsif (i_link_id = 13) THEN
v_request_status := 'Open';
END IF;

SELECT count(*) into re_count
FROM parts_request a
WHERE a.user_id = i_user_id and
a.request_status = v_request_status and
not exists (select request_id from parts_shipment b where a.request_id = b.request_id);

Tom Kyte
August 15, 2005 - 10:33 pm UTC

in the following, pretend y is user_id and z is request_status. You can union all the "two" queries and then count. Only one part really executes at run time, based on the bind variable sent in


ops$tkyte@ORA9IR2> create table t ( x int primary key, y int, z int );

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t
  2  select rownum, trunc(rownum/10), case when mod(rownum,10) = 0 then null else 1 end
  3  from all_objects;

27991 rows created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index t_idx on t(y,z);

Index created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable y number
ops$tkyte@ORA9IR2> variable z number
ops$tkyte@ORA9IR2> @plan "select count(*) from (select x from t where y = :y and z = :z union all select x from t where y = :y and z is null and :z is null)"
ops$tkyte@ORA9IR2> delete from plan_table;

7 rows deleted.

ops$tkyte@ORA9IR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select count(*) from (select x from t where y = :y and z = :z union all select x from t where y = :y and z is null and :z is null)

Explained.

ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     1 |       |     3 |
|   1 |  SORT AGGREGATE      |             |     1 |       |       |
|   2 |   VIEW               |             |    10 |       |     3 |
|   3 |    UNION-ALL         |             |       |       |       |
|*  4 |     INDEX RANGE SCAN | T_IDX       |     9 |    63 |     1 |
|*  5 |     FILTER           |             |       |       |       |
|*  6 |      INDEX RANGE SCAN| T_IDX       |     1 |     7 |     2 |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("T"."Y"=TO_NUMBER(:Z) AND "T"."Z"=TO_NUMBER(:Z))
   5 - filter(:Z IS NULL)
   6 - access("T"."Y"=TO_NUMBER(:Z) AND "T"."Z" IS NULL)

Note: cpu costing is off

21 rows selected.
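The shape of the UNION ALL trick, reduced to its essentials in SQLite (table and statuses invented). NULL never matches '=', so the second branch exists solely for the NULL case, and only one branch can produce rows for a given bind value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table req (user_id int, request_status text)")
conn.executemany("insert into req values (?, ?)",
                 [(1, "Open"), (1, None), (1, None), (1, "Closed")])

def count_for(status):
    # Branch 1 matches non-NULL statuses; branch 2 fires only when the
    # bind itself is NULL, counting the rows whose status is NULL.
    return conn.execute("""
        select count(*) from (
            select user_id from req
             where user_id = ? and request_status = ?
            union all
            select user_id from req
             where user_id = ? and request_status is null and ? is null
        )
    """, (1, status, 1, status)).fetchone()[0]
```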
 

query

mo, August 17, 2005 - 6:29 pm UTC

Tom:

You are right. I was going to use a ref cursor for the query but your method here is easier.

FOR y in (
select x from t where user_id = i_user_id and request_status = v_request_status
union all
select x from t where user_id = i_user_id and request_status is null and v_request_status is null )
LOOP
...
END LOOP;

error

khaterali, September 19, 2005 - 8:31 am UTC

VERSION INFORMATION:
TNS for Solaris: Version 8.1.7.0.0 - Production
Oracle Bequeath NT Protocol Adapter for Solaris: Version 8.1.7.0.0 - Production
Time: 17-SEP-2005 12:07:36
Tracing not turned on.
Tns error struct:
nr err code: 0
ns main err code: 12547
TNS-12547: TNS:lost contact
ns secondary err code: 12560
nt main err code: 517
TNS-00517: Lost contact
nt secondary err code: 32
nt OS err code: 0


***********************************************************************
Fatal NI connect error 12547, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product/8.1.7/bin/oracle)(ARGV0=oracleehrd
ev)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(DETACH=NO))(CONNECT_DATA=(CID=(PROGRAM=)(H
OST=sparc)(USER=oracle))))

VERSION INFORMATION:
TNS for Solaris: Version 8.1.7.0.0 - Production
Oracle Bequeath NT Protocol Adapter for Solaris: Version 8.1.7.0.0 - Production
Time: 17-SEP-2005 12:07:42
Tracing not turned on.
hsperfdata_root Tns error struct:
nr err code: 0
ns main err code: 12547
TNS-12547: TNS:lost contact
ns secondary err code: 12560
nt main err code: 517
TNS-00517: Lost contact
nt secondary err code: 32
nt OS err code: 0




Tom Kyte
September 19, 2005 - 11:49 am UTC

please open an itar - I see the other two postings below as well -- an itar is what you want, not a forum discussion question

errror in exp -00084

khater ali, September 19, 2005 - 9:02 am UTC

Export fails with EXP-00084: Unexpected DbmsJava error -1031 at step 6661

Running FULL export on Oracle 8.1.5 database on Sun Solaris 8. Export starts OK, dumps tables under all schemas, then fails with the following errors:

...
. exporting referential integrity constraints
. exporting synonyms
. exporting views
. exporting stored procedures
EXP-00084: Unexpected DbmsJava error -1031 at step 6661
EXP-00008: ORACLE error 1031 encountered
ORA-01031: insufficient privileges
EXP-00000: Export terminated unsuccessfully
How do I solve the above problem? I also already posted this question on metalink.oracle.com but there has been no response. Please help me.

errror in exp -00084

khater ali, September 19, 2005 - 9:03 am UTC

Export fails with EXP-00084: Unexpected DbmsJava error -1031 at step 6661

Running FULL export on Oracle 8.1.7 database on Sun Solaris 8. Export starts OK, dumps tables under all schemas, then fails with the following errors:

...
. exporting referential integrity constraints
. exporting synonyms
. exporting views
. exporting stored procedures
EXP-00084: Unexpected DbmsJava error -1031 at step 6661
EXP-00008: ORACLE error 1031 encountered
ORA-01031: insufficient privileges
EXP-00000: Export terminated unsuccessfully
How do I solve the above problem? I also already posted this question on metalink.oracle.com but there has been no response. Please help me.

sqlnet

khater ali, September 20, 2005 - 12:13 am UTC

Fatal NI connect error 12547, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product/8.1.7/bin/oracle)(ARGV0=oracleehrd
ev)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(DETACH=NO))(CONNECT_DATA=(CID=(PROGRAM=)(H
OST=sparc)(USER=oracle))))

VERSION INFORMATION:
TNS for Solaris: Version 8.1.7.0.0 - Production
Oracle Bequeath NT Protocol Adapter for Solaris: Version 8.1.7.0.0 - Production
Time: 17-SEP-2005 12:07:42
Tracing not turned on.
hsperfdata_root Tns error struct:
nr err code: 0
ns main err code: 12547
TNS-12547: TNS:lost contact
ns secondary err code: 12560
nt main err code: 517
TNS-00517: Lost contact
nt secondary err code: 32
nt OS err code: 0

Please give some suggestions to solve the above problem. Our version is 8.1.7 and our OS is Sun Solaris 8.

Tom Kyte
September 20, 2005 - 12:20 am UTC

please see my original comment.

Table value

Tony, September 21, 2005 - 3:55 am UTC

Hi Tom,

I am inserting values into a table from one big SQL statement.

The SQL statement can change in the future, so I stored the SQL in a table; I read the value from the table, open a ref cursor on it and do the inserts. The problem is that the query takes some parameters -- how can I pass the parameters dynamically?

Below is the procedure; please guide me on the same.

pmonth is the parameter which I am passing to the SQL.

CREATE OR REPLACE PROCEDURE CMPR_LOAD_test(CNTRYCDE IN VARCHAR2,
PMONTH IN VARCHAR2,
ERR_CD IN OUT NUMBER,
ERR_TEXT IN OUT VARCHAR2) IS
LN_ROWS NUMBER(7) := 500;
TYPE CMPREXT IS TABLE OF DRI_276_CMPR%ROWTYPE;
CMPRRECS CMPREXT;
TYPE CMPR_REF_CUR IS REF CURSOR;

CMPRCUR CMPR_REF_CUR;
sql_text long;

BEGIN
select sql_value into sql_text from cmpr_sql_text;

OPEN CMPRCUR for sql_text;
LOOP
FETCH CMPRCUR BULK COLLECT
INTO CMPRRECS LIMIT LN_ROWS;
BEGIN
FORALL I IN CMPRRECS.FIRST .. CMPRRECS.LAST
INSERT INTO DRI_276_CMPR VALUES CMPRRECS (I);
EXCEPTION
WHEN OTHERS THEN
ERR_CD := SQLCODE;
ERR_TEXT := 'Sp_Ins_Ext_Mthly_test : Error IN INSERTING INTO TABLE DRI_276_ECS_MISEXT' ||SQLERRM;
RETURN;
END;
EXIT WHEN CMPRCUR%NOTFOUND;
COMMIT;
END LOOP;
CLOSE CMPRCUR;
COMMIT;
END CMPR_LOAD_test;

/

Tom Kyte
September 21, 2005 - 7:01 pm UTC

why is this procedure a procedure and not just an INSERT into table SELECT from another table???????

Your logic scares me to death.


You load some records (say the first 2 500 row batches)....

You hit an error on row 1023

You catch the error and basically ignore it (leaving the first two batches sort of hanging out???)

and return.....


Please, just make this an insert into select whatever from and lose all procedural code and don't use "when others" unless you follow it by RAISE;

characterset

khater ali, September 23, 2005 - 8:37 am UTC

How do I change a character set? For example, I have a database called hsusdev whose character set is US7ASCII, and now I want to change it to UTF8. One of my friends told me to recreate the control file. My question is: is there any other option to change the database character set without recreating the control file? We are now inserting XML tables into this database.

Tom Kyte
September 23, 2005 - 9:43 am UTC

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96529/ch2.htm#101203 <code>

it is not a controlfile rebuild operation.

You'll want to check out that document in its entirety

You are the best and I know this is the only place in the whole world where I will get the answer..

A reader, September 23, 2005 - 3:29 pm UTC

I have a similar rows-to-columns question (like the original post). Please help.

create table temp_fluids
(code varchar2(13), name varchar2(10), percent number)
/
SQL> desc temp_fluids
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CODE                                               VARCHAR2(13)
 NAME                                               VARCHAR2(10)
 PERCENT                                            NUMBER


insert into temp_fluids
values ('CODE1','NAME1',10)
/
insert into temp_fluids
values ('CODE1','NAME2',20)
/
insert into temp_fluids
values ('CODE1','NAME3',30)
/
insert into temp_fluids
values ('CODE2','NAME1',10)
/
insert into temp_fluids
values ('CODE2','NAME2',20)
/
insert into temp_fluids
values ('CODE2','NAME3',30)
/
insert into temp_fluids
values ('CODE2','NAME4',40)
/
insert into temp_fluids
values ('CODE3','NAME1',10)
/


Want output something like :

       code1    code2  code3

name1   10      10      10
 
name2   20      20

name3   30      30

name4           40 

Where number of columns (code1, code2 etc) may go up to more than 100 depending on the number of records.

Thanks! 

Tom Kyte
September 23, 2005 - 8:51 pm UTC

search this site for PIVOT to see examples of pivots

and remember that a sql query has a fixed number of columns in it -- you'll have 100 "max(decode( ... ))" expressions
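The shape of such a pivot, on the temp_fluids data above but trimmed to two codes (SQLite, so MAX(CASE ...) stands in for Oracle's MAX(DECODE(...))):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table temp_fluids (code text, name text, percent int)")
conn.executemany("insert into temp_fluids values (?, ?, ?)", [
    ("CODE1", "NAME1", 10), ("CODE1", "NAME2", 20),
    ("CODE2", "NAME1", 10), ("CODE2", "NAME4", 40),
])

# One MAX(CASE ...) per pivoted column; the column list is fixed at parse
# time, which is why 100 codes means 100 such expressions (generated by code).
rows = conn.execute("""
    select name,
           max(case when code = 'CODE1' then percent end) as code1,
           max(case when code = 'CODE2' then percent end) as code2
    from temp_fluids
    group by name
    order by name
""").fetchall()
```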

SQL QUERY problem

Rmais, September 24, 2005 - 4:28 pm UTC

Hi Tom,
I have a problem writing a difficult SQL query; please help me.
I have a table t in which there are 50000 records 
the table has columns like 

create table t 
(MATCH_ID    NUMBER(4) NOT NULL,
TEAM_ID         NUMBER(4), 
PLAYER_ID    NUMBER(4),
RUNS    NUMBER(3))

here match_id, player_id and team_id jointly form the primary key

SQL> SELECT * FROM T WHERE MATCH_ID < 4

/

 MATCH_ID    TEAM_ID      PL_ID RUNS
--------- ---------- ---------- -------------------
        1          2       1228 8
        1          2       1203 82
        1          2       1316 24
        1          1       1150 27
        1          1       1278 13
        1          1       1243 60
        2          1       1278 37
        2          1       1291 0
        2          1       1243 53
        2          2       1228 25
        2          2       1285 103
        2          2       1316 60
        3          2       1228 8
        3          2       1285 25
        3          2        858 43
        3          1       1278 52
        3          1       1394 6
        3          1       1243 31
        4          1       1278 61
        4          1       1394 6
        4          1       1243 3
        4          2       1228 41
        4          2       1285 40
        4          2        858 5
        6          2       1228 20
        6          2       1285 100
        6          2       1408 0
        7          2       1228 15
        7          2       1285 34
        7          2       1408 44
        8          2       1228 0
        8          2       1420 31
        8          2       1340 66
        9          2       1420 19
        9          2       1385 28
        9          2       1340 0

.....so on upto 50000 records..

the problem is that I want to extract how many times each player_id in each match exists in the table prior to that match_id (the current match_id), and, in another column, the sum of RUNS for each player_id prior to that match_id (the current match_id)



my disired output is:


 MATCH_ID    TEAM_ID   player_ID RUNS   NO_OF_OCCURENCES    SUM(RUNS)
                                       BEFORE_THIS_MATCH   BEFORE_THIS_MATCH
                                       FOR_THIS_PLAYER_ID  FOR_THIS_PLAYER_ID
--------- ---------- ---------- -------------------
        1          2       1228 8      0                   0
        1          2       1203 82     0                   0
        1          2       1316 24     0                   0
        1          1       1150 27     0                   0
        1          1       1278 13     0                   0
        1          1       1243 60     0                   0
        2          1       1278 37     1                   13
        2          1       1291 0      0                   0
        2          1       1243 53     1                   60 
        2          2       1228 25     1                   8
        2          2       1285 103    0                   0 
        2          2       1316 60     1                   24
        3          2       1228 8      2                   33
        3          2       1285 25     1                   103
        3          2        858 43     0                   0
        3          1       1278 52     2                   50
        3          1       1394 6      0                   0
        3          1       1243 31     2                   113
        4          1       1278 61     3                   102 
        4          1       1394 6      1                   6
        4          1       1243 3      3                   144
        4          2       1228 41     3                   41
        4          2       1285 40     2                   128
        4          2        858 5      1                   43
        6          2       1228 20     4                   82
        6          2       1285 100    3                   168
        6          2       1408 0      0                   0
        7          2       1228 15     5                   102
        7          2       1285 34     4                   268
        7          2       1408 44     1                   0
        8          2       1228 0      6                   117
        8          2       1420 31     0                   0
        8          2       1340 66     0                   0
        9          2       1420 19     1                   31
        9          2       1385 28     0                   0
        9          2       1340 0      1                   66


as you can see from the above data (5th column), I have shown the number of occurrences of each player_id in matches prior to the current match_id.

since match_id = 1 is the 1st match in the table, no player_id appears in the table before match number 1.

in match number 2, player_id = 1278 was also present in match_id = 1, and that is why NO_OF_OCCURENCES = 1 for player_id = 1278 in match_id = 2, and so on.

the same is the case with the RUNS column, but there the value is the SUM of each player_id's RUNS before the current match.

Note: if some player_id does not exist in the table before the current match_id, then the query should return zero for that player_id (as in the 4th and 5th columns, NO_OF_OCCURENCES and SUM(RUNS) respectively).

for example: in above data

MATCH_ID    TEAM_ID  PLayer_ID RUNS   NO_OF_OCCURENCES    SUM(RUNS)
                                      BEFORE_THIS_MATCH   BEFORE_THIS_MATCH
                                      FOR_THIS_PLAYER_ID  FOR_THIS_PLAYER_ID
       9          2       1385 28     0                   0


I hope this makes my problem clear.
I would be extremely grateful if you could help me out.




here is sample ddl of the above data

create table t 
(MATCH_ID    NUMBER(4) NOT NULL,
TEAM_ID         NUMBER(4), 
PLAYER_ID    NUMBER(4),
RUNS    NUMBER(3));


insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (1, 2, 1228, 8);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (1, 2, 1203, 82);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (1, 2, 1316, 24);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (1, 1, 1150, 27);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (1, 1, 1278, 13);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (1, 1, 1243, 60);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (2, 1, 1278, 37);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (2, 1, 1291, 0);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (2, 1, 1243, 53);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (2, 2, 1228, 25);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (2, 2, 1285, 103);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (2, 2, 1316, 60);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (3, 2, 1228, 8);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (3, 2, 1285, 25);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (3, 2, 858, 43);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (3, 1, 1278, 52);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (3, 1, 1394, 6);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (3, 1, 1243, 31);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (4, 1, 1278, 61);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (4, 1, 1394, 6);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (4, 1, 1243, 3);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (4, 2, 1228, 41);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (4, 2, 1285, 40);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (4, 2, 858, 5);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (6, 2, 1228, 20);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (6, 2, 1285, 100);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (6, 2, 1408, 0);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (7, 2, 1228, 15);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (7, 2, 1285, 34);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (7, 2, 1408, 44);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (8, 2, 1228, 0);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (8, 2, 1420, 31);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (8, 2, 1340, 66);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (9, 2, 1420, 19);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values  (9, 2, 1385, 28);
insert into t (MATCH_ID, TEAM_ID, PLAYER_ID, RUNS) values (9, 2, 1340, 0);


regards
ramis.

               

Tom Kyte
September 24, 2005 - 8:26 pm UTC

It was a bit hard to follow, but I believe you want this:

ops$tkyte@ORA10G> select match_id, team_id, player_id, runs,
  2         count(*) over (partition by player_id order by match_id)-1 cnt,
  3         sum(runs) over (partition by player_id order by match_id)-runs runs_tot
  4    from t
  5   order by match_id, player_id
  6  /
 
  MATCH_ID    TEAM_ID  PLAYER_ID       RUNS        CNT   RUNS_TOT
---------- ---------- ---------- ---------- ---------- ----------
         1          1       1150         27          0          0
         1          2       1203         82          0          0
         1          2       1228          8          0          0
         1          1       1243         60          0          0
         1          1       1278         13          0          0
         1          2       1316         24          0          0
         2          2       1228         25          1          8
         2          1       1243         53          1         60
         2          1       1278         37          1         13
         2          2       1285        103          0          0
         2          1       1291          0          0          0
         2          2       1316         60          1         24
         3          2        858         43          0          0
         3          2       1228          8          2         33
         3          1       1243         31          2        113
         3          1       1278         52          2         50
         3          2       1285         25          1        103
         3          1       1394          6          0          0
         4          2        858          5          1         43
         4          2       1228         41          3         41
         4          1       1243          3          3        144
         4          1       1278         61          3        102
         4          2       1285         40          2        128
         4          1       1394          6          1          6
         6          2       1228         20          4         82
         6          2       1285        100          3        168
         6          2       1408          0          0          0
         7          2       1228         15          5        102
         7          2       1285         34          4        268
         7          2       1408         44          1          0
         8          2       1228          0          6        117
         8          2       1340         66          0          0
         8          2       1420         31          0          0
         9          2       1340          0          1         66
         9          2       1385         28          0          0
         9          2       1420         19          1         31
 
36 rows selected.
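As a side note (this variation is not from the original answer): the same result can be produced without the trailing `-1` and `-runs` subtractions by giving the analytics an explicit window frame that stops one row before the current row - assuming, as in this data, that each player appears at most once per match:

```sql
select match_id, team_id, player_id, runs,
       -- count of this player's rows strictly before the current match
       count(*) over (partition by player_id
                      order by match_id
                      rows between unbounded preceding and 1 preceding) cnt,
       -- sum of this player's runs strictly before the current match;
       -- the frame is empty for a first appearance, so NVL the NULL to 0
       nvl( sum(runs) over (partition by player_id
                            order by match_id
                            rows between unbounded preceding and 1 preceding), 0 ) runs_tot
  from t
 order by match_id, player_id;
```

Here the frame itself excludes the current row, so a player's first match naturally yields 0 for both derived columns.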
 
 

compare two tables

Bhavesh Ghodasara, September 25, 2005 - 3:09 am UTC

Hi tom,
I have two tables with same structure but different data..
say : t and t_log
when user make changes in t the old data stored in t_log..
now I want to make a report which displays only the columns of t that changed,
and I don't want a hard-coded comparison like t.a = t_log.a
because I have 96 columns.
So how can I get only the columns which changed?
for example:
column_name oldvalue newvalue
a 0 1
Is it possible by just sql query or i have to do it by pl/sql..
thanks in advance
BHavesh

Tom Kyte
September 25, 2005 - 9:32 am UTC

well, I will not promise this technique scales up, but I've used this many a time to compare rows column/column down the page:


ops$tkyte@ORA10G> create or replace type myScalarType as object
  2  ( rnum number, cname varchar2(30), val varchar2(4000) )
  3  /
 
Type created.
 
ops$tkyte@ORA10G> create or replace type myTableType as table of myScalarType
  2  /
 
Type created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> create or replace
  2  function cols_as_rows( p_query in varchar2 ) return myTableType
  3  -- this function is designed to be installed ONCE per database, and
  4  -- it is nice to have ROLES active for the dynamic sql, hence the
  5  -- AUTHID CURRENT_USER
  6  authid current_user
  7  -- this function is a pipelined function -- meaning, it'll send
  8  -- rows back to the client before getting the last row itself
  9  -- in 8i, we cannot do this
 10  PIPELINED
 11  as
 12      l_theCursor     integer default dbms_sql.open_cursor;
 13      l_columnValue   varchar2(4000);
 14      l_status        integer;
 15      l_colCnt        number default 0;
 16      l_descTbl       dbms_sql.desc_tab;
 17      l_rnum          number := 1;
 18  begin
 19          -- parse, describe and define the query.  Note, unlike print_table
 20          -- i am not altering the session in this routine.  the
 21          -- caller would use TO_CHAR() on dates to format and if they
 22          -- want, they would set cursor_sharing.  This routine would
 23          -- be called rather infrequently, I did not see the need
 24          -- to set cursor sharing therefore.
 25      dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
 26      dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
 27      for i in 1 .. l_colCnt loop
 28          dbms_sql.define_column( l_theCursor, i, l_columnValue, 4000 );
 29      end loop;
 30
 31          -- Now, execute the query and fetch the rows.  Iterate over
 32          -- the columns and "pipe" each column out as a separate row
 33          -- in the loop.  increment the row counter after each
 34          -- dbms_sql row
 35      l_status := dbms_sql.execute(l_theCursor);
 36      while ( dbms_sql.fetch_rows(l_theCursor) > 0 )
 37      loop
 38          for i in 1 .. l_colCnt
 39          loop
 40              dbms_sql.column_value( l_theCursor, i, l_columnValue );
 41              pipe row
 42              (myScalarType( l_rnum, l_descTbl(i).col_name, l_columnValue ));
 43          end loop;
 44          l_rnum := l_rnum+1;
 45      end loop;
 46
 47          -- clean up and return...
 48      dbms_sql.close_cursor(l_theCursor);
 49      return;
 50  end cols_as_rows;
 51  /
 
Function created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select *
  2    from TABLE( cols_as_rows('select *
  3                                from emp
  4                               where rownum = 1') );
 
      RNUM CNAME           VAL
---------- --------------- --------------------
         1 EMPNO           7369
         1 ENAME           SMITH
         1 JOB             CLERK
         1 MGR             7902
         1 HIREDATE        17-dec-1980 00:00:00
         1 SAL             800
         1 COMM
         1 DEPTNO          20
 
8 rows selected.


<b>Now, to see how you can use it:</b>


ops$tkyte@ORA10G> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA10G> create table emp2 as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA10G> update emp2 set ename = lower(ename) where mod(empno,3) = 0;
 
7 rows updated.
 
ops$tkyte@ORA10G> commit;
 
Commit complete.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> alter session set nls_date_format ='dd-mon-yyyy hh24:mi:ss';
 
Session altered.
 
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select a.pk, a.cname, a.val, b.val
  2    from (
  3  select cname, val,
  4         max(case when cname='EMPNO' then val end) over (partition by rnum) pk
  5  from table( cols_as_rows( 'select * from emp' ) ) x
  6         ) A,
  7             (
  8  select cname, val,
  9         max(case when cname='EMPNO' then val end) over (partition by rnum) pk
 10  from table( cols_as_rows( 'select * from emp2' ) ) x
 11         ) B
 12   where a.pk = b.pk
 13     and a.cname = b.cname
 14     and decode( a.val, b.val, 0, 1 ) = 1
 15  /
 
PK                   CNAME           VAL                  VAL
-------------------- --------------- -------------------- --------------------
7521                 ENAME           WARD                 ward
7566                 ENAME           JONES                jones
7698                 ENAME           BLAKE                blake
7782                 ENAME           CLARK                clark
7788                 ENAME           SCOTT                scott
7839                 ENAME           KING                 king
7902                 ENAME           FORD                 ford
 
7 rows selected.
 
ops$tkyte@ORA10G>

<b>We had to carry the primary key down by RNUM (by original row) and join primary key+cname to primary key+cname - decode is an easy way to compare columns, even when null.


You can see an obvious enhancement you can make for your specific case: the cols_as_rows function can easily output the primary key (modify the type, add an attribute and output it) so you can skip the analytic.

Yes, there are other ways to transpose a table - using a cartesian join to a table with 96 rows and decode - but this is something I had sitting around</b>
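That cartesian-join/decode transpose can be sketched as follows (a hypothetical three-column version against the demo EMP table; a real 96-column table would simply carry a 96-way decode and `rownum <= 96`):

```sql
-- each EMP row is joined to 3 generated rows, one per column to unpivot;
-- the decode picks out the column name and value for that position
select e.empno,
       decode(c.rn, 1, 'ENAME', 2, 'JOB', 3, 'SAL') cname,
       decode(c.rn, 1, e.ename, 2, e.job, 3, to_char(e.sal)) val
  from emp e,
       (select rownum rn from all_objects where rownum <= 3) c
 order by e.empno, c.rn;
```

The two result sets built this way can then be joined on (pk, cname) and filtered with the same `decode(a.val, b.val, 0, 1) = 1` trick shown above.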
 

thanks

Bhavesh Ghodasara, September 25, 2005 - 10:03 am UTC

hi tom,
thanks a lot... you are simply the Oracle GOD.
I spent my whole Sunday on this question. I tried to do it with PL/SQL - create two cursors, compare the values and insert into an intermediate table -
but your solution looks a lot better, and it is also generalized.
I don't know much about pipelined functions yet, but I am going to read up on them tonight.
I have problem when run this query :
SELECT *
FROM TABLE( cols_as_rows('select *
FROM emp
WHERE ROWNUM = 1') );

ORA-22905: cannot access rows from a non-nested table item

what is TABLE()?
by the way, we are using 10g here, version 10.1.0.2.0

Bhavesh.

Tom Kyte
September 25, 2005 - 10:46 am UTC

No one has that title - really.



Table is builtin - part of SQL - but I ran the example in 10.1, did you use my code "as is"?

Sql query

Jagadeesh Tata, September 26, 2005 - 4:43 am UTC

Hi Tom,

I have the following requirement. I have a query as

SELECT A.RESULT_CODE
FROM JTF_IH_RESULTS_TL A,JTF_IH_WRAP_UPS B
WHERE A.RESULT_ID = B.RESULT_ID
AND A.RESULT_CODE IN (
'Billing',
'Voice Service/ Feature Related',
'WAP Corporate Directory')

I want to get the value that is not in the table.

Say for example 'Billing' is not available in table. I want to get the data using a sql statement.

Thanks and Regards,
Jagadeesh Tata



Tom Kyte
September 26, 2005 - 9:15 am UTC

does not make sense to me yet......
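If the intent is to list which of a fixed set of codes has no match in the table, one possible sketch (not from the thread, and assuming a release where SYS.ODCIVARCHAR2LIST is available) is an anti-join of an inline collection against the table:

```sql
-- turn the literal list into rows, then MINUS away the codes
-- that do exist in the joined tables
select column_value result_code
  from table( sys.odcivarchar2list(
         'Billing',
         'Voice Service/ Feature Related',
         'WAP Corporate Directory') )
minus
select a.result_code
  from jtf_ih_results_tl a, jtf_ih_wrap_ups b
 where a.result_id = b.result_id;
```

Whatever remains after the MINUS is, by construction, the values from the list that the query's IN-list would never have matched.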

SQL - HOW TO WRITE

VIKAS SANGAR, September 26, 2005 - 6:42 am UTC

Dear Mr. Kyte

Can you please help me out with a query that I am trying to work out, to get the following output...

Suppose in my database I have a table T1 with columns c1, c2 and c3, and a few rows of records - for example, Row1 has c1=a, c2=b, c3=c. Now I want to write a query based on this table, the output of which comes out as:-

Column_name dummy Records
------------- ------- --------
c1 := a
c2 := b
c3 := c

3 Rows selected.

Can you please suggest a way to achieve the above output.

Take care, regards...
Vikas.


Tom Kyte
September 26, 2005 - 9:23 am UTC

don't get it, i don't know what is really in the table in the first place.

you need a create table and insert into for me to play with.

Bhavesh Ghodasara, September 26, 2005 - 8:24 am UTC

hi tom,
yes, I used your code as is,
with no changes.
Bhavesh.

Tom Kyte
September 26, 2005 - 9:37 am UTC

demonstrate for me, create your table with as few columns as possible, insert as little data as you can to simulate the issue and show me what you mean.

A reader, September 26, 2005 - 9:24 am UTC

Thanks Tom! In our real case, we don't know the values of CODE yet (they can be anything), so how can we write the different decodes?


SQL> ed
Wrote file afiedt.buf

  1  select name, max(decode(code,'CODE1',percent,null)) code1,
  2   max(decode(code,'CODE2',percent,null)) code2 ,
  3   max(decode(code,'CODE3',percent,null)) code3,
  4   max(decode(code,'CODE4',percent,null)) code4
  5  from temp_fluids
  6* group by name
SQL> /

NAME            CODE1      CODE2      CODE3      CODE4
---------- ---------- ---------- ---------- ----------
NAME1              10         10         10
NAME2              20         20
NAME3              30         30
NAME4                         40
 

Tom Kyte
September 26, 2005 - 9:38 am UTC

you have to run a query then, to get the codes, and then construct the query based on that output and execute it.

which rights I required??

Bhavesh Ghodasara, September 26, 2005 - 9:58 am UTC

hi tom,

SQL>  CREATE TABLE t
  2   (
  3        empno NUMBER PRIMARY KEY,
  4       ename VARCHAR2(10)
  5   );

Table created.

SQL> insert into t values(1,'Bhavesh');

1 row created.

SQL> insert into t values(2,'Tom');

1 row created.

SQL> insert into t values(3,'Rahul');

1 row created.

SQL> create table t_log
  2  as select * from t;

Table created.



SQL> create or replace type myTableType as table of myScalarType;
  2  /

Type created.

SQL> CREATE OR REPLACE
  2      FUNCTION Cols_As_Rows( p_query IN VARCHAR2 ) RETURN myTableType
  3      -- this function is designed to be installed ONCE per database, and
  4      -- it is nice to have ROLES active for the dynamic sql, hence the
  5      -- AUTHID CURRENT_USER
  6      authid current_user
  7      -- this function is a pipelined function -- meaning, it'll send
  8      -- rows back to the client before getting the last row itself
  9      -- in 8i, we cannot do this
 10     PIPELINED
 11     AS
 12         l_theCursor     INTEGER DEFAULT dbms_sql.open_cursor;
 13         l_columnValue   VARCHAR2(4000);
 14         l_status        INTEGER;
 15         l_colCnt        NUMBER DEFAULT 0;
 16         l_descTbl       dbms_sql.desc_tab;
 17         l_rnum          NUMBER := 1;
 18     BEGIN
 19             -- parse, describe and define the query.  Note, unlike print_table
 20             -- i am not altering the session in this routine.  the
 21             -- caller would use TO_CHAR() on dates to format and if they
 22             -- want, they would set cursor_sharing.  This routine would
 23             -- be called rather infrequently, I did not see the need
 24             -- to set cursor sharing therefore.
 25         dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
 26         dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
 27         FOR i IN 1 .. l_colCnt LOOP
 28             dbms_sql.define_column( l_theCursor, i, l_columnValue, 4000 );
 29         END LOOP;
 30   
 31             -- Now, execute the query and fetch the rows.  Iterate over
 32             -- the columns and "pipe" each column out as a separate row
 33             -- in the loop.  increment the row counter after each
 34             -- dbms_sql row
 35         l_status := dbms_sql.EXECUTE(l_theCursor);
 36         WHILE ( dbms_sql.fetch_rows(l_theCursor) > 0 )
 37         LOOP
 38             FOR i IN 1 .. l_colCnt
 39             LOOP
 40                dbms_sql.column_value( l_theCursor, i, l_columnValue );
 41                 pipe ROW
 42                 (myScalarType( l_rnum, l_descTbl(i).col_name, l_columnValue ));
 43             END LOOP;
 44             l_rnum := l_rnum+1;
 45         END LOOP;
 46   
 47             -- clean up and return...
 48         dbms_sql.close_cursor(l_theCursor);
 49         RETURN;
 50     END cols_as_rows;
 51  /

Function created.

  1  select *
  2      from TABLE( cols_as_rows('select *
  3                                   from t
  4*                                 where rownum = 1') )
SQL> /
    from TABLE( cols_as_rows('select *
         *
ERROR at line 2:
ORA-22905: cannot access rows from a non-nested table item

-------------------------------------
but 

SQL> select cols_as_rows('select * from t where rownum=1') from dual;

COLS_AS_ROWS('SELECT*FROMT(RNUM, CNAME, VAL)
---------------------------------------------------------------------------------------------------
MYTABLETYPE(MYSCALARTYPE(1, 'EMPNO', '1'), MYSCALARTYPE(1, 'ENAME', 'Bhavesh'))
What's that?
What rights are required to execute these queries?
I think the problem is with rights.
Bhavesh 

Tom Kyte
September 26, 2005 - 10:57 am UTC

what is your cursor_sharing set to?


you can use


cast( cols_as_rows( .... ) as mytableType )


instead of just cols_as_rows - I think you might have cursor_sharing set :(

A reader, September 26, 2005 - 10:22 am UTC

Thanks Tom!

Created a table to hold the distinct codes and I am able to get the values from there, as shown below. Now the issue is
how to change the labels of the columns depending on the values of the codes?


SQL> select * from hold_codes;

CODES                  SEQ
--------------- ----------
CODE1                    1
CODE2                    2
CODE3                    3



SQL> ed
Wrote file afiedt.buf

  1   select name, max(decode(seq,1,percent,null)) code1,
  2    max(decode(seq,2,percent,null)) code2,
  3    max(decode(seq,3,percent,null)) code3
  4   from temp_fluids a ,hold_Codes b
  5   where a.code = b.codes
  6*  group by name
SQL> /

NAME            CODE1      CODE2      CODE3
---------- ---------- ---------- ----------
NAME1              10         10         10
NAME2              20         20
NAME3              30         30
NAME4                         40 

Tom Kyte
September 26, 2005 - 11:00 am UTC

you control the name of the label -- you named them code1, code2, code3 here but you can call them a, b, c if you like - you control that entirely.

A reader, September 26, 2005 - 11:18 am UTC

I think I was not clear. I want the output column names to be the same as the data in the HOLD_CODES table, dynamically.

Let's say my data set looks like below. Instead of hard-coding the names of the output columns as code1, code2 etc., I want to display the actual values from the HOLD_CODES table dynamically. How to do that?


SQL> select * from hold_codes;

CODES                  SEQ
--------------- ----------
apple                   1
orange                  2
some other label        3



SQL> ed
Wrote file afiedt.buf

  1   select name, max(decode(seq,1,percent,null)) code1,
  2    max(decode(seq,2,percent,null)) code2,
  3    max(decode(seq,3,percent,null)) code3
  4   from temp_fluids a ,hold_Codes b
  5   where a.code = b.codes
  6*  group by name
SQL> /

NAME            CODE1      CODE2      CODE3
---------- ---------- ---------- ----------
NAME1              10         10         10
NAME2              20         20
NAME3              30         30
NAME4                         40
 

Tom Kyte
September 27, 2005 - 9:25 am UTC

but as said -- you have to "know" what the number of columns and their names are when the query is parsed.

Therefore, you

a) query codes table in order to
b) construct query with specific information from codes table.

A reader, September 26, 2005 - 3:54 pm UTC

In sqlplus one way is...

SQL> column a new_value b ;
SQL> select codes a from hold_codes where seq =1;

A
---------------
CODE1

SQL> 
SQL> column a1 new_value b1 ;
SQL> select codes a1 from hold_codes where seq =2;

A1
---------------
CODE2

SQL> 
SQL> column a2 new_value b2 ;
SQL> select codes a2 from hold_codes where seq =3;

A2
---------------
CODE3

SQL> 
SQL> select name, max(decode(seq,1,percent,null)) &b,
  2  max(decode(seq,2,percent,null)) &b1,
  3  max(decode(seq,3,percent,null)) &b2
  4  from temp_fluids a ,hold_Codes b
  5  where a.code = b.codes
  6  group by name
  7  /
old   1: select name, max(decode(seq,1,percent,null)) &b,
new   1: select name, max(decode(seq,1,percent,null)) CODE1,
old   2: max(decode(seq,2,percent,null)) &b1,
new   2: max(decode(seq,2,percent,null)) CODE2,
old   3: max(decode(seq,3,percent,null)) &b2
new   3: max(decode(seq,3,percent,null)) CODE3

NAME            CODE1      CODE2      CODE3
---------- ---------- ---------- ----------
NAME1              10         10         10
NAME2              20         20
NAME3              30         30
NAME4                         40 

A reader, September 26, 2005 - 4:06 pm UTC

Can we do a similar thing in PL/SQL?

Tom Kyte
September 27, 2005 - 9:50 am UTC

yes, you write some SQL, that SQL queries the code table.

Based on what you find in the code table you build your query.

then, you execute it.




A reader, September 26, 2005 - 4:54 pm UTC

I meant to say: can we do a similar thing in one SQL statement without using SQL*Plus, as SQL*Plus is not an option in our case. Thanks!

Tom Kyte
September 27, 2005 - 9:53 am UTC

you can do anything in code you want - just query the codes, build the query and execute it.
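That build-then-execute step might be sketched like this (assuming the HOLD_CODES and TEMP_FLUIDS tables from this thread; here the generated statement is only printed, but it could just as well be opened as a ref cursor or passed to DBMS_SQL):

```sql
declare
    l_sql varchar2(4000) := 'select name';
begin
    -- one pivot column per code, labeled with the code's actual value
    for r in ( select codes, seq from hold_codes order by seq )
    loop
        l_sql := l_sql || ', max(decode(seq,' || r.seq ||
                 ',percent)) "' || r.codes || '"';
    end loop;
    l_sql := l_sql || ' from temp_fluids a, hold_codes b' ||
             ' where a.code = b.codes group by name';
    dbms_output.put_line( l_sql );
end;
/
```

Because the column list is baked into the statement text before it is parsed, the labels track whatever is in HOLD_CODES at run time - which is exactly the "query the codes, build the query, execute it" approach described above.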

Query help ....

A reader, September 26, 2005 - 6:16 pm UTC

I have table "login"

create table login (user_code varchar2(10),user_session varchar2(30),user_action varchar2(10),user_time date);

insert into login values ('c1','asdfadsfasdfasdf1','LOGIN',sysdate -10);
insert into login values ('c2','asdfadsfasdfasdf2','LOGIN',sysdate -10);
insert into login values ('c3','asdfadsfasdfasdf3','LOGOUT',sysdate -10);
insert into login values ('c4','asdfadsfasdfasdf4','LOGIN',sysdate -10);
insert into login values ('c5','asdfadsfasdfasdf5','LOGIN',sysdate -10);
insert into login values ('c6','asdfadsfasdfasdf6','LOGIN',sysdate -10);
insert into login values ('c7','asdfadsfasdfasdf7','LOGOUT',sysdate -10);
insert into login values ('c8','asdfadsfasdfasdf8','LOGIN',sysdate -10);
insert into login values ('c9','asdfadsfasdfasdf9','LOGIN',sysdate -10);
insert into login values ('c10','asdfadsfasdfasdf10','LOGIN',sysdate -10);

insert into login values ('c1','asdfadsfasdfasdf1','LOGOUT',sysdate -5);
insert into login values ('c3','asdfadsfasdfasdf31','LOGIN',sysdate -10);
insert into login values ('c4','asdfadsfasdfasdf4','LOGOUT',sysdate -5);
insert into login values ('c5','asdfadsfasdfasdf5','RECONNECT',sysdate -5);
insert into login values ('c6','asdfadsfasdfasdf6','LOGOUT',sysdate -5);
insert into login values ('c7','asdfadsfasdfasdf71','LOGIN',sysdate -5);
insert into login values ('c8','asdfadsfasdfasdf8','LOGOUT',sysdate -5);
insert into login values ('c10','asdfadsfasdfasdf10','LOGOUT',sysdate -5);

insert into login values ('c1','asdfadsfasdfasdf101','LOGIN',sysdate -3);
insert into login values ('c3','asdfadsfasdfasdf3','RECONNECT',sysdate -3);
insert into login values ('c4','asdfadsfasdfasdf41','LOGIN',sysdate -3);
insert into login values ('c5','asdfadsfasdfasdf5','LOGOUT',sysdate -3);
insert into login values ('c6','asdfadsfasdfasdf61','LOGIN',sysdate -3);
insert into login values ('c7','asdfadsfasdfasdf71','RECONNECT',sysdate -3);
insert into login values ('c8','asdfadsfasdfasdf81','LOGIN',sysdate -3);
insert into login values ('c9','asdfadsfasdfasdf9','RECONNECT',sysdate -3);
insert into login values ('c10','asdfadsfasdfasdf100','LOGIN',sysdate -3);

insert into login values ('c1','asdfadsfasdfasdf101','LOGOUT',sysdate -2);
insert into login values ('c4','asdfadsfasdfasdf41','RECONNECT',sysdate -2);
insert into login values ('c5','asdfadsfasdfasdf51','LOGIN',sysdate -2);
insert into login values ('c7','asdfadsfasdfasdf71','RECONNECT',sysdate -2);
insert into login values ('c8','asdfadsfasdfasdf81','LOGOUT',sysdate -2);
insert into login values ('c9','asdfadsfasdfasdf9', 'LOGOUT',sysdate -2);
insert into login values ('c10','asdfadsfasdfasdf100','RECONNECT',sysdate -2);


insert into login values ('c1','asdfadsfasdfasdf1010','LOGIN',sysdate -1);
insert into login values ('c4','asdfadsfasdfasdf41','LOGOUT',sysdate -1);
insert into login values ('c5','asdfadsfasdfasdf51','RECONNECT',sysdate -1);
insert into login values ('c7','asdfadsfasdfasdf71','LOGOUT',sysdate -1);
insert into login values ('c8','asdfadsfasdfasdf811','LOGIN',sysdate -1);
insert into login values ('c9','asdfadsfasdfasdf91', 'LOGIN',sysdate -1);
insert into login values ('c10','asdfadsfasdfasdf100','LOGOUT',sysdate -1);


insert into login values ('c1','asdfadsfasdfasdf1010','RECONNECT',sysdate);
insert into login values ('c4','asdfadsfasdfasdf411','LOGIN',sysdate );
insert into login values ('c5','asdfadsfasdfasdf51','LOGOUT',sysdate);
insert into login values ('c8','asdfadsfasdfasdf811','RECONNECT',sysdate);
insert into login values ('c9','asdfadsfasdfasdf91', 'RECONNECT',sysdate);
insert into login values ('c10','asdfadsfasdfasdf1001','LOGIN',sysdate );




Now I want to know how many users (c1, c2, ...) are "logged in" at any given time.
This is sample data; the real login table contains 500k rows.
The report must also count logins from previous days if the
user did not log out.

q1) Is it possible? Can you help me solve this issue?

q2) How would you design this kind of table?


Query help

A reader, September 26, 2005 - 6:18 pm UTC


In the above question I want to know:
1) how many and which users are logged in at a given time

TIA

Tom Kyte
September 27, 2005 - 10:05 am UTC

select * from v$session?
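Against the LOGIN table itself, one possible sketch (an assumption about the rules, not from the thread: a user counts as logged in at time :t if their most recent row at or before :t is a LOGIN or RECONNECT; analytics are available on 8.1.7):

```sql
-- keep only each user's latest action at or before :t (rn = 1),
-- then keep the users whose latest action was not a LOGOUT
select user_code
  from ( select user_code, user_action,
                row_number() over (partition by user_code
                                   order by user_time desc) rn
           from login
          where user_time <= :t )
 where rn = 1
   and user_action in ('LOGIN','RECONNECT');
```

Wrapping this in `select count(*) from (...)` gives the "how many" figure; because it looks at the latest action regardless of day, a user who logged in yesterday and never logged out is still counted.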

Query help

A reader, September 26, 2005 - 6:21 pm UTC

the database is : oracle 8.1.7.4
sun sol 2.8


FEEDBACK -> SQL - HOW TO WRITE?

VIKAS, September 27, 2005 - 1:48 am UTC

Dear Mr. Kyte,

As desired by you, here is the above mentioned table, created for your consideration with some changes and additions.

sql> create table T1(ID number(2), Initials varchar2(4),
JoinDate date);
TABLE CREATED.

Now, Insert a row,

sql> insert into T1 values(01, 'VKS', '01-MAR-04');
1 ROW CREATED.
sql> Commit;

Now, Select records to check,

sql> select * from T1;

ID Initials JoinDate
------ -------- ----------
1 VKS 1-MAR-04

Here, after this, all that I want is to write a query or PL/SQL procedure, which gives the following output from above Table T1.

column Dummy Records
-------- ------- -----------
ID := 1;
Initials := VKS;
JoinDate := 1-MAR-04;

Or may something like this..

column Dummy
-------- ---------------
ID := 1;
Initials := VKS;
JoinDate := 1-MAR-04;

Or even this...

Dummy
-----------------------
ID := 1;
Initials := VKS;
JoinDate := 1-MAR-04;


I hope it is now clear enough for you to get my point. Can you please help me out with this.

Take care, regards.
Vikas.


done it

Bhavesh Ghodasara, September 27, 2005 - 8:28 am UTC

hi tom,
I solved the problem like this:
FROM TABLE (CAST (cols_as_rows ('select * FROM mst_personal') AS mytabletype)) x) a

thanks - I had just done it before your answer.
But why do you sometimes have to CAST and sometimes not?
Bhavesh

Tom Kyte
September 27, 2005 - 11:35 am UTC

you have cursor sharing set on - it is an issue, cursor sharing <> exact indicates you have a serious bug in your developed applications!



Re:

Jagjeet Singh malhi, September 27, 2005 - 8:32 am UTC

Hi Vikas,

I used this code for that. It is very useful for day-to-day
activities.


SQL> create or replace procedure p ( p_str   varchar2)
  2  authid current_user
  3  as
  4  v_cur     int := dbms_sql.open_cursor;
  5  v_exe     int ;
  6  v_tot_cols int;
  7  v_tab_desc  dbms_sql.desc_tab;
  8  v_col_value varchar2(4000);
  9  begin
 10  dbms_sql.parse(v_cur,p_str,dbms_sql.native);
 11  dbms_sql.describe_columns(v_cur,v_tot_cols,v_tab_desc);
 12  for i in 1..v_tot_cols loop
 13  dbms_sql.define_column(v_cur,i,v_col_value,4000);
 14  end loop;
 15  --
 16  v_exe := dbms_sql.execute(v_cur);
 17  --------
 18  loop
 19  exit when  ( dbms_sql.fetch_rows(v_cur) <=  0 ) ;
 20  --
 21  for i in 1..v_tot_cols loop
 22  dbms_sql.column_Value(v_cur,i,v_col_value);
 23  dbms_output.put_line(rpad(v_tab_desc(i).col_name,30,' ')||' : '||v_col_value);
 24  end loop;
 25  --
 26  end loop;
 27  -------
 28  dbms_sql.close_cursor(v_cur);
 29* end;

Procedure created.

SQL> set serveroutput on size 100000

SQL> exec p ( ' Select * from v$database ' );

DBID                           : 2903310348                                     
NAME                           : JS                                             
CREATED                        : 01-JAN-99                                      
RESETLOGS_CHANGE#              : 836266                                         
RESETLOGS_TIME                 : 01-JAN-99                                      
PRIOR_RESETLOGS_CHANGE#        : 1                                              
PRIOR_RESETLOGS_TIME           : 01-JAN-99                                      
LOG_MODE                       : ARCHIVELOG                                     
CHECKPOINT_CHANGE#             : 856397                                         
ARCHIVE_CHANGE#                : 0                                              
CONTROLFILE_TYPE               : CURRENT                                        
CONTROLFILE_CREATED            : 01-JAN-99                                      
CONTROLFILE_SEQUENCE#          : 712                                            
CONTROLFILE_CHANGE#            : 856399                                         
CONTROLFILE_TIME               : 01-JAN-99                                      
OPEN_RESETLOGS                 : NOT ALLOWED                                    
VERSION_TIME                   : 01-JAN-99                                      
OPEN_MODE                      : READ WRITE                                     
PROTECTION_MODE                : MAXIMUM PERFORMANCE                            
PROTECTION_LEVEL               : MAXIMUM PERFORMANCE                            
REMOTE_ARCHIVE                 : ENABLED                                        
ACTIVATION#                    : 2903250804                                     
DATABASE_ROLE                  : PRIMARY                                        
ARCHIVELOG_CHANGE#             : 856396                                         
SWITCHOVER_STATUS              : SESSIONS ACTIVE                                
DATAGUARD_BROKER               : DISABLED                                       
GUARD_STATUS                   : NONE                                           
SUPPLEMENTAL_LOG_DATA_MIN      : NO                                             
SUPPLEMENTAL_LOG_DATA_PK       : NO                                             
SUPPLEMENTAL_LOG_DATA_UI       : NO                                             
FORCE_LOGGING                  : NO                                             

PL/SQL procedure successfully completed.


SQL> exec p ( ' select * from dba_tables where table_name = ''T'' ');

OWNER                          : OPS$ORA9                                       
TABLE_NAME                     : T                                              
TABLESPACE_NAME                : TEST                                           
CLUSTER_NAME                   :                                                
IOT_NAME                       :                                                
PCT_FREE                       : 10                                             
PCT_USED                       :                                                
INI_TRANS                      : 1                                              
MAX_TRANS                      : 255                                            
INITIAL_EXTENT                 : 65536                                          
NEXT_EXTENT                    :                                                
MIN_EXTENTS                    : 1                                              
MAX_EXTENTS                    : 2147483645                                     
PCT_INCREASE                   :                                                
FREELISTS                      :                                                
FREELIST_GROUPS                :                                                
LOGGING                        : YES                                            
BACKED_UP                      : N                                              
NUM_ROWS                       : 397                                            
BLOCKS                         : 19629                                          
EMPTY_BLOCKS                   : 339                                            
AVG_SPACE                      : 1871                                           
CHAIN_CNT                      : 397                                            
AVG_ROW_LEN                    : 2019                                           
AVG_SPACE_FREELIST_BLOCKS      : 0                                              
NUM_FREELIST_BLOCKS            : 0                                              
DEGREE                         :          1                                     
INSTANCES                      :          1                                     
CACHE                          :     N                                          
TABLE_LOCK                     : ENABLED                                        
SAMPLE_SIZE                    : 397                                            
LAST_ANALYZED                  : 01-JAN-99                                      
PARTITIONED                    : NO                                             
IOT_TYPE                       :                                                
TEMPORARY                      : N                                              
SECONDARY                      : N                                              
NESTED                         : NO                                             
BUFFER_POOL                    : DEFAULT                                        
ROW_MOVEMENT                   : DISABLED                                       
GLOBAL_STATS                   : NO                                             
USER_STATS                     : NO                                             
DURATION                       :                                                
SKIP_CORRUPT                   : DISABLED                                       
MONITORING                     : NO                                             
CLUSTER_OWNER                  :                                                
DEPENDENCIES                   : DISABLED                                       
COMPRESSION                    : DISABLED                                       
OWNER                          : access                                         
TABLE_NAME                     : T                                              
TABLESPACE_NAME                : SYSTEM                                         
CLUSTER_NAME                   :                                                
IOT_NAME                       :                                                
PCT_FREE                       : 10                                             
PCT_USED                       : 40                                             
INI_TRANS                      : 1                                              
MAX_TRANS                      : 255                                            
INITIAL_EXTENT                 : 10240                                          
NEXT_EXTENT                    : 10240                                          
MIN_EXTENTS                    : 1                                              
MAX_EXTENTS                    : 121                                            
PCT_INCREASE                   : 50                                             
FREELISTS                      : 1                                              
FREELIST_GROUPS                : 1                                              
LOGGING                        : YES                                            
BACKED_UP                      : N                                              
NUM_ROWS                       :                                                
BLOCKS                         :                                                
EMPTY_BLOCKS                   :                                                
AVG_SPACE                      :                                                
CHAIN_CNT                      :                                                
AVG_ROW_LEN                    :                                                
AVG_SPACE_FREELIST_BLOCKS      :                                                
NUM_FREELIST_BLOCKS            :                                                
DEGREE                         :          1                                     
INSTANCES                      :          1                                     
CACHE                          :     N                                          
TABLE_LOCK                     : ENABLED                                        
SAMPLE_SIZE                    :                                                
LAST_ANALYZED                  :                                                
PARTITIONED                    : NO                                             
IOT_TYPE                       :                                                
TEMPORARY                      : N                                              
SECONDARY                      : N                                              
NESTED                         : NO                                             
BUFFER_POOL                    : DEFAULT                                        
ROW_MOVEMENT                   : DISABLED                                       
GLOBAL_STATS                   : NO                                             
USER_STATS                     : NO                                             
DURATION                       :                                                
SKIP_CORRUPT                   : DISABLED                                       
MONITORING                     : NO                                             
CLUSTER_OWNER                  :                                                
DEPENDENCIES                   : DISABLED                                       
COMPRESSION                    : DISABLED                                       

PL/SQL procedure successfully completed.


In 9i - it can print up to the serveroutput buffer size set (100000 bytes above).
In 10g - it's unlimited: "set serveroutput on size unlimited"

Need diff. type for each table.

Jagjeet Singh, September 27, 2005 - 8:36 am UTC

Hi Bhavesh,

I think for that we need diff. type for each diff. table.

Thanks,
Js

Re:

Jagjeet Singh, September 27, 2005 - 8:50 am UTC

Bhavesh,

I thought you had answered Vikas's query, but you have a different question.

My apologies.

Js

A reader, September 27, 2005 - 10:35 am UTC

Thanks Tom! The PL/SQL works okay.

Can I do this job in one SQL statement instead?


SQL>  create or replace procedure return_matrix(p_cursor out get_matrix_fluids.refcursor_type ) as
  2   a1 varchar2(20);
  3   a2 varchar2(20);
  4   a3 varchar2(20);
  5   v_query varchar2(1000);
  6   begin
  7    select codes into a1 from hold_codes where seq =1;
  8    select codes into a2 from hold_codes where seq =2;
  9    select codes into a3 from hold_codes where seq =3;
 10    v_query := 'select name, max(decode(seq,1,percent,null)) '|| a1
 11      || ', max(decode(seq,2,percent,null)) '|| a2
 12      ||' , max(decode(seq,3,percent,null)) '|| a3
 13   ||'  from temp_fluids a ,hold_Codes b
 14    where a.code = b.codes
 15    group by name';
 16    open p_cursor for v_query;
 17   end;
 18  /

Procedure created.

SQL> execute return_matrix(:a);

PL/SQL procedure successfully completed.

SQL>  print a

NAME            CODE1      CODE2      CODE3
---------- ---------- ---------- ----------
NAME1              10         10         10
NAME2              20         20
NAME3              30         30
NAME4                         40

SQL>  

Tom Kyte
September 27, 2005 - 12:08 pm UTC

I would have thought of code like this:


v_query := 'select name';
for x in ( select * from hold_codes order by seq )
loop
  v_query := v_query || ', max(decode(seq,' || x.seq || ',percent,null)) "' ||
             x.codes || '"';
end loop;
v_query := v_query || ' from ......';



Query help ....

A reader, September 27, 2005 - 11:56 am UTC

Tom,

this is a custom login table, "login".
Whenever a user logs in to OUR application (not an Oracle user account, hence not v$session), we insert a row recording whether he logged in, reconnected or logged out. I have provided
create table and insert statements to generate test data;
the session there represents a Java session to OUR custom application.

Please help me find which users are logged in at a given time, from the "login" table.


TIA

Tom Kyte
September 27, 2005 - 1:36 pm UTC

I've not a clue what "YOUR" login table which presumably YOU designed to answer the questions you need to ask of it looks like.

Thanks Tom! You are the best!

A reader, September 27, 2005 - 2:14 pm UTC

"I would have thought of code like this:"

Thanks for this tip. I think it's a great idea! Will write the code accordingly.



A reader, September 27, 2005 - 2:30 pm UTC

Thanks Tom! 

SQL> ed
Wrote file afiedt.buf

  1  create or replace procedure return_matrix(p_cursor out
  2    get_matrix_fluids.refcursor_type ) as
  3    a1 varchar2(20);
  4    a2 varchar2(20);
  5    a3 varchar2(20);
  6    v_query varchar2(1000);
  7  begin
  8   v_query := 'select name';
  9   for x in ( select * from hold_codes order by seq )
 10   loop
 11     v_query := v_query || ', max(decode(seq,' || x.seq || ',percent,null)) "' ||
 12                x.codes || '"';
 13   end loop;
 14   v_query := v_query || 'from temp_fluids a ,hold_Codes b
 15         where a.code = b.codes
 16         group by name';
 17   open p_cursor for v_query;
 18*  end;
SQL> /

Procedure created.

SQL> execute return_matrix(:a);

PL/SQL procedure successfully completed.

SQL> print a

NAME            CODE1      CODE2      CODE3
---------- ---------- ---------- ----------
NAME1              10         10         10
NAME2              20         20
NAME3              30         30
NAME4                         40
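The build-the-column-list-from-the-codes-table technique shown above is portable beyond Oracle. A minimal sketch in Python against SQLite (with `max(case when ...)` standing in for Oracle's `max(decode(...))`; the table names mirror the post, but the data here is invented):

```python
# Build a pivot query dynamically from the codes stored in hold_codes,
# as Tom sketches above, using CASE instead of Oracle's DECODE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table hold_codes  (seq int, codes text);
create table temp_fluids (name text, code text, percent int);
insert into hold_codes values (1,'CODE1'),(2,'CODE2'),(3,'CODE3');
insert into temp_fluids values
    ('NAME1','CODE1',10),('NAME1','CODE2',10),('NAME1','CODE3',10),
    ('NAME2','CODE1',20),('NAME2','CODE2',20);
""")

# one pivot column per row of hold_codes, in seq order
cols = []
for seq, code in conn.execute("select seq, codes from hold_codes order by seq"):
    cols.append(f'max(case when seq = {seq} then percent end) "{code}"')

v_query = ("select name, " + ", ".join(cols) +
           " from temp_fluids a join hold_codes b on a.code = b.codes"
           " group by name order by name")

rows = conn.execute(v_query).fetchall()
print(rows)   # -> [('NAME1', 10, 10, 10), ('NAME2', 20, 20, None)]
```

The shape is the same as the PL/SQL version later in the thread: loop over `hold_codes`, append one `max(...)` expression per code, then run the assembled string.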
 

Query help....

A reader, September 27, 2005 - 3:01 pm UTC


Oracle 8.1.7.4
os : sun sol 2.8

I have table "login"

create table login
(user_code varchar2(10),
user_session varchar2(30), -- "Application Java session, not an Oracle session"
user_action varchar2(10),
user_time date);

insert into login values ('c1','asdfadsfasdfasdf1','LOGIN',sysdate -10);
insert into login values ('c2','asdfadsfasdfasdf2','LOGIN',sysdate -10);
insert into login values ('c3','asdfadsfasdfasdf3','LOGOUT',sysdate -10);
insert into login values ('c4','asdfadsfasdfasdf4','LOGIN',sysdate -10);
insert into login values ('c5','asdfadsfasdfasdf5','LOGIN',sysdate -10);
insert into login values ('c6','asdfadsfasdfasdf6','LOGIN',sysdate -10);
insert into login values ('c7','asdfadsfasdfasdf7','LOGOUT',sysdate -10);
insert into login values ('c8','asdfadsfasdfasdf8','LOGIN',sysdate -10);
insert into login values ('c9','asdfadsfasdfasdf9','LOGIN',sysdate -10);
insert into login values ('c10','asdfadsfasdfasdf10','LOGIN',sysdate -10);

insert into login values ('c1','asdfadsfasdfasdf1','LOGOUT',sysdate -5);
insert into login values ('c3','asdfadsfasdfasdf31','LOGIN',sysdate -10);
insert into login values ('c4','asdfadsfasdfasdf4','LOGOUT',sysdate -5);
insert into login values ('c5','asdfadsfasdfasdf5','RECONNECT',sysdate -5);
insert into login values ('c6','asdfadsfasdfasdf6','LOGOUT',sysdate -5);
insert into login values ('c7','asdfadsfasdfasdf71','LOGIN',sysdate -5);
insert into login values ('c8','asdfadsfasdfasdf8','LOGOUT',sysdate -5);
insert into login values ('c10','asdfadsfasdfasdf10','LOGOUT',sysdate -5);

insert into login values ('c1','asdfadsfasdfasdf101','LOGIN',sysdate -3);
insert into login values ('c3','asdfadsfasdfasdf3','RECONNECT',sysdate -3);
insert into login values ('c4','asdfadsfasdfasdf41','LOGIN',sysdate -3);
insert into login values ('c5','asdfadsfasdfasdf5','LOGOUT',sysdate -3);
insert into login values ('c6','asdfadsfasdfasdf61','LOGIN',sysdate -3);
insert into login values ('c7','asdfadsfasdfasdf71','RECONNECT',sysdate -3);
insert into login values ('c8','asdfadsfasdfasdf81','LOGIN',sysdate -3);
insert into login values ('c9','asdfadsfasdfasdf9','RECONNECT',sysdate -3);
insert into login values ('c10','asdfadsfasdfasdf100','LOGIN',sysdate -3);

insert into login values ('c1','asdfadsfasdfasdf101','LOGOUT',sysdate -2);
insert into login values ('c4','asdfadsfasdfasdf41','RECONNECT',sysdate -2);
insert into login values ('c5','asdfadsfasdfasdf51','LOGIN',sysdate -2);
insert into login values ('c7','asdfadsfasdfasdf71','RECONNECT',sysdate -2);
insert into login values ('c8','asdfadsfasdfasdf81','LOGOUT',sysdate -2);
insert into login values ('c9','asdfadsfasdfasdf9', 'LOGOUT',sysdate -2);
insert into login values ('c10','asdfadsfasdfasdf100','RECONNECT',sysdate -2);


insert into login values ('c1','asdfadsfasdfasdf1010','LOGIN',sysdate -1);
insert into login values ('c4','asdfadsfasdfasdf41','LOGOUT',sysdate -1);
insert into login values ('c5','asdfadsfasdfasdf51','RECONNECT',sysdate -1);
insert into login values ('c7','asdfadsfasdfasdf71','LOGOUT',sysdate -1);
insert into login values ('c8','asdfadsfasdfasdf811','LOGIN',sysdate -1);
insert into login values ('c9','asdfadsfasdfasdf91', 'LOGIN',sysdate -1);
insert into login values ('c10','asdfadsfasdfasdf100','LOGOUT',sysdate -1);


insert into login values ('c1','asdfadsfasdfasdf1010','RECONNECT',sysdate);
insert into login values ('c4','asdfadsfasdfasdf411','LOGIN',sysdate );
insert into login values ('c5','asdfadsfasdfasdf51','LOGOUT',sysdate);
insert into login values ('c8','asdfadsfasdfasdf811','RECONNECT',sysdate);
insert into login values ('c9','asdfadsfasdfasdf91', 'RECONNECT',sysdate);
insert into login values ('c10','asdfadsfasdfasdf1001','LOGIN',sysdate );




Now I want to know which and how many users (c1, c2, ..) are "logged in" at any given time.


This is sample data; the real login table contains 500k rows,
and this login report must also count logins from previous days if the user did not log out..

q1.) how many and which users are logged in at a given time?

q2.) how would you design this kind of table?


TIA



Tom Kyte
September 27, 2005 - 3:14 pm UTC

there would be a single record per "session".

those that are logged in - they don't have a logout time, done.

are you sure you always have a logout record?  if a session times out, do you have records inserted in there accordingly?

question: can we just look for LOGIN records such that there is NO subsequent LOGOUT record?


Query Help....

A reader, September 27, 2005 - 5:12 pm UTC


Yes, that is what I want:

the # of logins (and/or reconnects) that do not have a logout so far.

Yes, there is no timeout; a user can be logged into the system for as many DAYS as he/she wants..
This should give me how many users are connected to the system.


Tom Kyte
September 27, 2005 - 8:23 pm UTC

ops$tkyte@ORA10G> select *
  2    from (
  3  select user_code,
  4         user_action,
  5         user_time,
  6         lead(user_action) over (partition by user_code order by user_time) next_action
  7    from login
  8   where user_action in ( 'LOGIN', 'LOGOUT' )
  9         )
 10   where user_action = 'LOGIN'
 11     and next_action is null
 12  /
 
USER_CODE  USER_ACTIO USER_TIME NEXT_ACTIO
---------- ---------- --------- ----------
c1         LOGIN      26-SEP-05
c10        LOGIN      27-SEP-05
c2         LOGIN      17-SEP-05
c3         LOGIN      17-SEP-05
c4         LOGIN      27-SEP-05
c6         LOGIN      24-SEP-05
c8         LOGIN      26-SEP-05
c9         LOGIN      26-SEP-05
 
8 rows selected.
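The LEAD pattern in Tom's query is plain ANSI window-function SQL, so it can be exercised outside Oracle as well. A minimal sketch using SQLite (3.25+) through Python, with a few invented rows rather than the poster's data:

```python
# "Logged in" = a LOGIN row whose next LOGIN/LOGOUT row (per user, in time
# order) does not exist.  Times are simplified to an increasing integer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table login (
    user_code    text,
    user_session text,
    user_action  text,
    user_time    integer
);
insert into login values
    ('c1', 's1', 'LOGIN',  1),
    ('c1', 's1', 'LOGOUT', 2),
    ('c1', 's2', 'LOGIN',  3),   -- c1 logged back in, never logged out
    ('c2', 's3', 'LOGIN',  1),   -- c2 never logged out
    ('c3', 's4', 'LOGIN',  1),
    ('c3', 's4', 'LOGOUT', 2);   -- c3 is gone
""")

rows = conn.execute("""
select user_code
  from (select user_code,
               user_action,
               lead(user_action) over
                   (partition by user_code order by user_time) next_action
          from login
         where user_action in ('LOGIN', 'LOGOUT'))
 where user_action = 'LOGIN'
   and next_action is null
 order by user_code
""").fetchall()

print(rows)   # -> [('c1',), ('c2',)]
```

c3's LOGIN is followed by a LOGOUT, so only c1 and c2 are reported as currently logged in.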
 

SQL QUERY HELP???

ramis, September 27, 2005 - 6:29 pm UTC

Hi,

i have a student quiz scores table in which data is stored in the following format

 QUIZ_ID   STUDENT_ID   CLASS_ID    SCORES
--------- ---------- ---------- ----------
        1       1150          1         27
        1       1278          1         13
        1       1243          1         60
        1       1277          1         41
        1       1215          1         12
        1       1364          2         22
        1       1361          2         10
        2       1278          1         13
        2       1243          1         60
        2       1215          1         12
        2       1364          2         22
        2       1361          2         10
        2       1960          6         54

WELL, the problem is that I want the output in a format based on some conditions:


STUDENT_ID    CLASS_ID    SUM(STUDENT    TOTAL_CLASS     TOTAL_QUIZ    
                             _SCORES)         _SCORES        SCORES

here
      1. STUDENT_ID
      2. CLASS_ID is the class_id of the respective student
      3. sum(student_scores) is the sum of SCORES of each
         student in all quizes he participated
      4. TOTAL_CLASS_scores is the SUM of all scores of all
         students belonging to the student's CLASS_ID, ONLY in
         those quizes in which the student also participated  
      5. total_quiz_scores is the sum of all scores of all
         students of all classes ONLY in quizes in which that
         student also participated


hope this will be clear enough for my requirement

now here is my desired output



STUDENT_ID   CLASS_ID  SUM(SCORES)   TOTAL_CLASS    TOTAL_QUIZ 
                                        SCORES      SCORES   
      1150          1          27       153         185
      1215          1          24       238         356
      1243          1         120       238         356
      1277          1          41       153         185 
      1278          1          26       238         356
      1361          2          20       64          356
      1364          2          44       64          356
      1960          6          54       54          171 

I can easily get the first three columns by this query

SQL> SELECT STUDENT_ID, CLASS_ID, SUM(SCORES)
     FROM T
     GROUP BY STUDENT_ID, CLASS_ID
/

but I am unable to get the last two columns as desired. I would prefer the shortest possible query, yet easy and fast, to achieve this for my further calculations.


CREATE TABLE T
(QUIZ_ID NUMBER(4),
STUDENT_ID NUMBER(4),
CLASS_ID  NUMBER(2),
SCORES  NUMBER(3));


INSERT INTO T VALUES (1,1150,1,27);
INSERT INTO T VALUES (1,1278,1,13);
INSERT INTO T VALUES (1,1243,1,60);
INSERT INTO T VALUES (1,1277,1,41);
INSERT INTO T VALUES (1,1215,1,12);
INSERT INTO T VALUES (1,1364,2,22);
INSERT INTO T VALUES (1,1361,2,10);
INSERT INTO T VALUES (2,1278,1,13);
INSERT INTO T VALUES (2,1243,1,60);
INSERT INTO T VALUES (2,1215,1,12);
INSERT INTO T VALUES (2,1364,2,22);
INSERT INTO T VALUES (2,1361,2,10);
INSERT INTO T VALUES (2,1960,6,54);
 

Tom Kyte
September 27, 2005 - 8:33 pm UTC

assuming a student may only take a quiz ONCE.

we can assign to each row

o the sum(scores) for that same class_id and quiz_id easily, since the assumption 
  is that a student may take a quiz ONCE, we can sum this again safely.

o the sum(scores) for that same quiz_id, same assumption.


then aggregate...


ops$tkyte@ORA9IR2> select student_id,
  2         class_id,
  3             sum(scores),
  4             sum(t1),
  5             sum(t2)
  6    from (
  7  select student_id,
  8         class_id,
  9             scores,
 10             sum(scores) over (partition by class_id, quiz_id) t1,
 11             sum(scores) over (partition by quiz_id) t2
 12    from t
 13         )
 14   group by student_id, class_id
 15  /

STUDENT_ID   CLASS_ID SUM(SCORES)    SUM(T1)    SUM(T2)
---------- ---------- ----------- ---------- ----------
      1150          1          27        153        185
      1215          1          24        238        356
      1243          1         120        238        356
      1277          1          41        153        185
      1278          1          26        238        356
      1361          2          20         64        356
      1364          2          44         64        356
      1960          6          54         54        171

8 rows selected.


I believe that is what you were looking for - make sure you understand what it does before just using it!  Run the inline view by itself to see what is being assigned at each step. 
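One way to do exactly that "run the inline view by itself" exercise without an Oracle instance: the window-sum-then-aggregate technique is standard SQL, so it runs unchanged on SQLite (3.25+). A sketch using the test data from the question:

```python
# Assign per-(class, quiz) and per-quiz score sums to every row with
# analytic SUMs, then aggregate per student - the same two-step shape
# as Tom's answer above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (quiz_id int, student_id int, class_id int, scores int)")
conn.executemany("insert into t values (?,?,?,?)", [
    (1,1150,1,27),(1,1278,1,13),(1,1243,1,60),(1,1277,1,41),(1,1215,1,12),
    (1,1364,2,22),(1,1361,2,10),(2,1278,1,13),(2,1243,1,60),(2,1215,1,12),
    (2,1364,2,22),(2,1361,2,10),(2,1960,6,54),
])

rows = conn.execute("""
select student_id, class_id, sum(scores), sum(t1), sum(t2)
  from (select student_id, class_id, scores,
               sum(scores) over (partition by class_id, quiz_id) t1,
               sum(scores) over (partition by quiz_id) t2
          from t)
 group by student_id, class_id
 order by student_id
""").fetchall()

print(rows[0])    # -> (1150, 1, 27, 153, 185)
print(rows[-1])   # -> (1960, 6, 54, 54, 171)
```

The eight result rows match the output shown above, including the 356 quiz totals for the students who sat both quizzes.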

Thanx - Mr. Jagjeet Singh malhi

VIKAS SANGAR, September 28, 2005 - 3:31 am UTC

Mr. Jagjeet Singh Malhi,

Thanx a ton for the code (procedure) supplied by you. It really works, but there is a small problem: how to demarcate the output of the columns based on their respective column datatypes. It treats even the NUMBER datatypes as string/varchar. Do you have any way to fix this - I mean, to put columns with char/varchar/varchar2/date datatypes between quotes and leave the columns with number datatypes as they are? This would further increase the utility and effectiveness of the procedure by minimising manual edits/changes, reducing errors and saving time.

I think the user_tab_cols view may prove to be handy, to achieve this.

By the way, I was Happy and Surprised to have a person from my Home Town, replying to my query.

Take care, regards..
Vikas

Re:

Jagjeet Singh malhi, September 28, 2005 - 10:02 am UTC

Hello Sir,

Good Point.

May be Mr. Tom can tell us better.

My assumptions are:
================

-- we can achieve this by declaring 3 diff. datatype variables,
-- check each column's type using desc_tab, and store the value in the
-- matching datatype. But I do not see any use of this if it is just for printing ..

I see two diff. things in this case.

-- Considering the client is SQL*Plus ----

o One is building the recordset and pushing it to the client -- at the database level
o The other is the client formatting and printing the data in lines -- at the client level

Whenever we issue any SQL that manipulates a column's
datatype or uses any functions, like ..
Select .. to_number(format..), trunc(date), sum(col), min(column) ..

Oracle does all the manipulation, builds the recordset
at the database level, and just sends the result to its client.
And SQL*Plus prints it for us ..

At the time of printing -- it is out of any datatype zone ..
it is just printing lines ...

the same thing is happening here ..

you can issue at the SQL prompt

" Select min(number), trunc(date_column) ... from .. "
or
exec p (' Select min(number), trunc(date_column) from ... ');

I think it is the same ..

again waiting for Mr. Tom's comment ..

Thanks,
Js

Query Help...,

A reader, September 28, 2005 - 10:51 am UTC

Thanks, Tom.

This works, but the explain plan is not good and it takes too much time. I cannot run this in prod!

Tom Kyte
September 28, 2005 - 11:21 am UTC

that is why we typically model our physical schemas to be able to rapidly answer the questions we need to ask of them.


this model doesn't support your question very nicely.


You need to look at all of the login/logout records, sort them and find login records that don't have a subsequent logout record.

we could write that in different ways - but it isn't going to be "fast" in any case.

Outputting the result in a single row

Vikas Khanna, September 30, 2005 - 3:06 am UTC

Hi Tom,

Please help and show us how to write a combined query to get the output as

Count(A) Count(B) Count(C) Count(D) Count(E) Count(F)
107164 138381 98008 98248 96968 84028

How can these different queries produce the result in a single row?

Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate-6)
  group by request_terms having count(*) > 1);
Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate-5)
  group by request_terms having count(*) > 2);
Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate-4)
  group by request_terms having count(*) > 3);
Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate-3)
  group by request_terms having count(*) > 4);
Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate-2)
  group by request_terms having count(*) > 5);
Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate-1)
  group by request_terms having count(*) > 6);
Select Count(request_terms) from (Select request_terms, Count(*) from request_data
  where request_date between trunc(sysdate-7) and trunc(sysdate)
  group by request_terms having count(*) > 7);

admin@DWADM> @@try

COUNT(REQUEST_TERMS)
--------------------
107164


COUNT(REQUEST_TERMS)
--------------------
138381


COUNT(REQUEST_TERMS)
--------------------
98008


COUNT(REQUEST_TERMS)
--------------------
98248


COUNT(REQUEST_TERMS)
--------------------
96968


COUNT(REQUEST_TERMS)
--------------------
84028


COUNT(REQUEST_TERMS)
--------------------
83946

Thanks
Vikas

Tom Kyte
September 30, 2005 - 9:27 am UTC

if i had a simple create table and some inserts to test with...
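Absent the poster's data, one possible shape (a sketch, not verified against the real request_data table) is to run each aggregate as a scalar subquery inside a single SELECT, so all the counts come back in one row. The toy SQLite data below stands in for request_data, with an invented integer day column replacing the trunc(sysdate-n) arithmetic:

```python
# Fold several independent counts into one row via scalar subqueries.
# Two of the seven windows are shown; the remaining five follow the
# same pattern with their own day range and HAVING threshold.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table request_data (request_terms text, request_day int);
insert into request_data values
    ('a',1),('a',2),('a',3),   -- 'a' appears on 3 days
    ('b',1),('b',2),           -- 'b' on 2 days
    ('c',1);                   -- 'c' on 1 day
""")

row = conn.execute("""
select (select count(*) from (select request_terms from request_data
          where request_day <= 2
          group by request_terms having count(*) > 1)) d2,
       (select count(*) from (select request_terms from request_data
          where request_day <= 3
          group by request_terms having count(*) > 2)) d3
""").fetchone()

print(row)   # -> (2, 1): 'a' and 'b' repeat within 2 days; only 'a' within 3
```

Each scalar subquery is exactly one of the original standalone queries, so the seven separate executions collapse into a single statement returning a single row.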


Query Help...,

A reader, September 30, 2005 - 12:55 pm UTC

Thanks, Tom,

Can you suggest a different model/structure for the "login"
table? how would you design it, provided the objectives are:
1.) only to log all logins, logouts and reconnects from the system
2.) report how many users are logged into the system right now
3.) how many times a particular user disconnected
4.) how often a user logs out of the system
5.) how long a user stays logged into the system.

thanks,


Tom Kyte
September 30, 2005 - 2:19 pm UTC

i don't know what a reconnect is, but a login/logout record would be the same record with a login time and a logout time (a null logout time means "not logged out")


the queries to answer the questions are then "trivial"
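A sketch of that one-row-per-session design (table and column names invented; SQLite via Python stands in for Oracle): a null logout time marks an active session, and each reporting question becomes a simple predicate rather than a window-function scan.

```python
# One row per session: logout_time is null while the user is logged in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table login_session (
    user_code    text,
    user_session text primary key,
    login_time   integer not null,
    logout_time  integer            -- null => still logged in
);
insert into login_session values
    ('c1','s1',1,2),
    ('c1','s2',3,null),
    ('c2','s3',1,null),
    ('c3','s4',1,2);
""")

# who is logged in right now
active = conn.execute("""
    select user_code from login_session
     where logout_time is null order by user_code
""").fetchall()

# how long each finished session lasted
durations = conn.execute("""
    select user_session, logout_time - login_time
      from login_session
     where logout_time is not null order by user_session
""").fetchall()

print(active)      # -> [('c1',), ('c2',)]
print(durations)   # -> [('s1', 1), ('s4', 1)]
```

Counting disconnects per user or logout frequency falls out of the same table with ordinary GROUP BY queries, which is why the model answers these questions so much more cheaply than the action-log layout.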



sql query of sum before a particular value??

ramis, October 01, 2005 - 5:04 pm UTC

Hi,

I am facing a complex problem; hope you can help.
I have a table which has four columns:

CREATE TABLE T
(MATCH_ID   NUMBER(4),
TEAM_ID NUMBER(2),
PLAYER_ID NUMBER(4),
SCORE  NUMBER(3));

here match_id, team_id and player_id are jointly the primary key..


SQL> select * from t

  MATCH_ID    TEAM_ID      PLAYER_ID      SCORE
---------- ---------- ---------- ----------
         2          2       1061          8
        12          2       1061          0
        13          2       1061         18
        14          2       1061         14
        15          2       1061         
        17          2       1061         12
        18          2       1061         33
        19          2       1061         10
        20          2       1061          0
        21          2       1061        100
        22          2       1061         41
         1          1       1361          0
         3          1       1361         11
         4          1       1361         10
         6          1       1361         10
         7          1       1361         99
         8          1       1361         91
         9          1       1361        100
        10          1       1361         76
        15          1       1361         51
        21          1       1361         22
        34          1       1361          0
         1          4       1661          0
         2          4       1661          0
         3          4       1661         70
         4          4       1661         99
         5          4       1661         12
         6          4       1661          0
        10          4       1960         10
        15          4       1960         68
        16          4       1960         14
        17          4       1960         89
        18          4       1960         10
        19          4       1960         45
        21          4       1960         63
        22          4       1960         44
        23          4       1960         86
        24          4       1960          5
        25          4       1960          3
        26          4       1960          8
        27          4       1960         27
        28          4       1960         28
        29          4       1960        141
        30          4       1960          0
        31          4       1960          7
        32          4       1960         37
         1          4       2361        100
         2          4       2361          7
         3          4       2361         10
         4          4       2361         49
         5          4       2361         12
         6          4       2361          0

my requirement is to get the aggregate and max(score) for each player before he made a particular score for the first time. For example, take player_id = 1061. I want his total aggregate and max(score) before he made a score >= 100 (100 or more) for the first time.

His scores in order of match_id are:


SQL> select * from t where player_id = 1061 order by 1

  MATCH_ID    TEAM_ID      PLAYER_ID      SCORE
---------- ---------- ---------- ----------
         2          2       1061          8
        12          2       1061          0
        13          2       1061         18
        14          2       1061         14
        15          2       1061         
        17          2       1061         12
        18          2       1061         33
        19          2       1061         10
        20          2       1061          0
        21          2       1061        100
        22          2       1061         41

here the match_id shows the actual match number in which the player played. For example, the above player played in match_id = 2, which was actually his match no. 1, and then in match_id = 12, which was actually his second match; he missed the matches in between. The match_id is the key thing here because my desired output is based on it. The query would first sort the data by player_id and match_id in ascending order, and then loop through each match of the respective player to check in which match the player scored 100 or more. When it finds such a match for the respective player, it should aggregate the runs and extract max(score) among all matches before the match in which he made a score of 100 or more.

now for this player my desired output is 

player_id  team_id  sum(scores)  Max(scores)  min(match_id)  max(match_id)
1061             2          95           33              2              20


here 
Player_id: is player_id.
Team_id is player's team_id.

Sum(scores): is the sum of all scores of the player before he made a score of 100 or more for the first time. 

Max(scores): is the maximum score of the player before he made a score of 100 or more for the first time.

Min(match_id): is the minimum match_id of the player 'in or before' which he did not make any score of 100 or more.

Max(match_id): is the maximum match_id of the player in which he played before he made a score of 100 or more in his next match.



thus for all players grouped by player_id and team_id I want the final output 

in the following format


player_id  team_id  sum(scores)  Max(scores)  min(match_id)  max(match_id)
1061             2          95           33              2              20
1361             1         221           99              1              8 
1960             4         500           89              10             28


notice that if a player has a score of 100 or more in his first match, then he would not appear in the desired output. Similarly, if a player never reached that score in any match he played, he would not appear either. For example, two of the players in the data I provided at the top have these issues, respectively:


SQL> select * from t where player_id = 2361 order by 1

  MATCH_ID    TEAM_ID      PLAYER_ID    SCORE
         1          4       2361        100
         2          4       2361          7
         3          4       2361         10
         4          4       2361         49
         5          4       2361         12
         6          4       2361          0

as you can see he has a score of 100 or more in the very first match he played, so he would not appear in my desired output, as he has no previous matches with a score below 100.

SQL> select * from t where player_id = 1661 order by 1

MATCH_ID    TEAM_ID      PL_ID      SCORE
-------- ---------- ---------- ----------
       1          4       1661          0
       2          4       1661          0
       3          4       1661         70
       4          4       1661         99
       5          4       1661         12
       6          4       1661          0


no score of 100 or more, so he is not considered for the output.

I hope this is clear enough to solve the problem.


here is the sample ddl data
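(no CREATE TABLE was posted with these inserts - a table shaped like this is assumed, with the types inferred from the sample data:)

```sql
create table t
( match_id  number,
  team_id   number,
  player_id number,
  score     number
);
```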

INSERT INTO T VALUES (2,2,1061,8);
INSERT INTO T VALUES (12,2,1061,0);
INSERT INTO T VALUES (13,2,1061,18);
INSERT INTO T VALUES (14,2,1061,14);
INSERT INTO T VALUES (15,2,1061,null);
INSERT INTO T VALUES (17,2,1061,12);
INSERT INTO T VALUES (18,2,1061,33);
INSERT INTO T VALUES (19,2,1061,10);
INSERT INTO T VALUES (20,2,1061,0);
INSERT INTO T VALUES (21,2,1061,100);
INSERT INTO T VALUES (22,2,1061,41);
INSERT INTO T VALUES (1,1,1361,0);
INSERT INTO T VALUES (3,1,1361,11);
INSERT INTO T VALUES (4,1,1361,10);
INSERT INTO T VALUES (6,1,1361,10);
INSERT INTO T VALUES (7,1,1361,99);
INSERT INTO T VALUES (8,1,1361,91);
INSERT INTO T VALUES (9,1,1361,100);
INSERT INTO T VALUES (10,1,1361,76);
INSERT INTO T VALUES (15,1,1361,51);
INSERT INTO T VALUES (21,1,1361,22);
INSERT INTO T VALUES (34,1,1361,0);
INSERT INTO T VALUES (10,4,1960,10);
INSERT INTO T VALUES (15,4,1960,68);
INSERT INTO T VALUES (16,4,1960,14);
INSERT INTO T VALUES (17,4,1960,89);
INSERT INTO T VALUES (18,4,1960,10);
INSERT INTO T VALUES (19,4,1960,45);
INSERT INTO T VALUES (21,4,1960,63);
INSERT INTO T VALUES (22,4,1960,44);
INSERT INTO T VALUES (23,4,1960,86);
INSERT INTO T VALUES (24,4,1960,5);
INSERT INTO T VALUES (25,4,1960,3);
INSERT INTO T VALUES (26,4,1960,8);
INSERT INTO T VALUES (27,4,1960,27);
INSERT INTO T VALUES (28,4,1960,28);
INSERT INTO T VALUES (29,4,1960,141);
INSERT INTO T VALUES (30,4,1960,0);
INSERT INTO T VALUES (31,4,1960,7);
INSERT INTO T VALUES (32,4,1960,37);
INSERT INTO T VALUES (1,4,2361,100);
INSERT INTO T VALUES (2,4,2361,7);
INSERT INTO T VALUES (3,4,2361,10);
INSERT INTO T VALUES (4,4,2361,49);
INSERT INTO T VALUES (5,4,2361,12);
INSERT INTO T VALUES (6,4,2361,0);
INSERT INTO T VALUES (1,4,1661,0);
INSERT INTO T VALUES (2,4,1661,0);
INSERT INTO T VALUES (3,4,1661,70);
INSERT INTO T VALUES (4,4,1661,99);
INSERT INTO T VALUES (5,4,1661,12);
INSERT INTO T VALUES (6,4,1661,0);

thanks in advance,
regards,
 

Query for gap "n days"

Vikas Khanna, October 03, 2005 - 1:57 am UTC

Hi Tom,

I have written the query myself to obtain the desired results for a given 7 days.

Can you please help me formulate the query for a given n days?

Select a.cnt AS "Gap 1 Day",b.cnt "Gap 2 Day",c.cnt "Gap 3 Day",d.cnt "Gap 4 Day",e.cnt "Gap 5 Day",f.cnt "Gap 6 Day",g.cnt "Gap 7 Day" from
(Select Count(request_terms) cnt from (Select request_terms ,count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate-6) group by request_terms having count(*) > 1)) a ,
(Select Count(request_terms) cnt from (Select request_terms, count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate-5) group by request_terms having count(*) > 2)) b,
(Select Count(request_terms) cnt from (Select request_terms, count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate-4) group by request_terms having count(*) > 3)) c,
(Select Count(request_terms) cnt from (Select request_terms, count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate-3) group by request_terms having count(*) > 4)) d,
(Select Count(request_terms) cnt from (Select request_terms, count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate-2) group by request_terms having count(*) > 5)) e,
(Select Count(request_terms) cnt from (Select request_terms, count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate-1) group by request_terms having count(*) > 6)) f,
(Select Count(request_terms) cnt from (Select request_terms, count(*) from request_data where request_date between trunc(sysdate-7) and trunc(sysdate) group by request_terms having count(*) > 7)) g
/

 Gap 1 Day  Gap 2 Day  Gap 3 Day  Gap 4 Day  Gap 5 Day  Gap 6 Day  Gap 7 Day
----------  ---------  ---------  ---------  ---------  ---------  ---------
    107164     138381      98008      98248      96968      84028      83946

The create DDL script is:

CREATE TABLE CLICK_DATA(
CLICK_TAG VARCHAR(16) NOT NULL,
REQUEST_TERMS VARCHAR(16) NOT NULL,
REQUEST_DATE DATE,
CONSTRAINT CLICK_DATA_PK PRIMARY KEY (CLICK_TAG)
USING INDEX TABLESPACE TS_INDEX
)
TABLESPACE TS_DATA;

Looking forward to the solution.


Regards
Vikas

Tom Kyte
October 03, 2005 - 7:32 am UTC

You can do this without running N queries in inline views - e.g. you can simplify this.

my question not answered

ramis, October 03, 2005 - 8:41 am UTC

Hi Tom,
you have not answered my question yet, two posts above, i.e.
the SQL query of "sum before a particular value"?

I am waiting for your response!
regards,

Tom Kyte
October 03, 2005 - 11:13 am UTC

I do not see every posting, I do not answer every single request.


In this case, I saw something that required me to page down three or four times and said simply "don't have time for this"

In other words - too big, I answer these "quickly" or not at all.

To Ramis; your query

A reader, October 03, 2005 - 1:28 pm UTC

Select t.player_id, t.team_id, sum(t.score), max(t.score), min(match_id), max(match_id)
From t
where match_id < ( Select min(t1.match_id) from t t1 where t1.score >= 100 and t1.player_id = t.player_id)
group by t.player_id, t.team_id

This assumes match_id is in the "correct" sequence in your data. If it isn't, try to use datetime stamps for the scores. We were bitten by sequences not guaranteeing the right order for transactions.
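A single-pass analytic alternative (a sketch against the posted T data, not a verified answer): compute each player's first qualifying match with MIN(CASE ...) OVER, then aggregate the rows before it. Players who never score 100+, or who do so in their very first match, drop out naturally because the filter rejects all of their rows.

```sql
select player_id, team_id,
       sum(score)    sum_scores,
       max(score)    max_scores,
       min(match_id) min_match_id,
       max(match_id) max_match_id
  from (select t.*,
               min(case when score >= 100 then match_id end)
                   over (partition by player_id) first_100
          from t)
 where match_id < first_100           -- false when first_100 is null
 group by player_id, team_id;
```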



Please help

Vikas Khanna, October 04, 2005 - 12:50 am UTC

Hi Tom,

That's exactly the help I was looking for. Can you please simplify this for me, so it generates the result for a given "n" number of days?

Would appreciate it if you could let me know the query for this.

Regards
Vikas

Tom Kyte
October 04, 2005 - 1:41 pm UTC

it is hard when I don't have any simple test data to play with.


I believe this:

(Select Count(request_terms) cnt
from (Select request_terms ,count(*)
from request_data
where request_date between trunc(sysdate-7) and trunc(sysdate-6)
group by request_terms
having count(*) > 1)) a ,
(Select Count(request_terms) cnt
from (Select request_terms, count(*)
from request_data
where request_date between trunc(sysdate-7) and trunc(sysdate-5)
group by request_terms
having count(*) > 2)) b,



can be phrased as this:


select count( case when rt1 > 1 then request_terms end ) c1,
       count( case when rt2 > 2 then request_terms end ) c2,
....
from (
select request_terms,
       count( case when request_date between trunc(sysdate-7) and trunc(sysdate-6)
                   then 1
              end ) rt1,
       count( case when request_date between trunc(sysdate-7) and trunc(sysdate-5)
                   then 1
              end ) rt2,
....
  from request_data
 where request_date >= trunc(sysdate-7)
 group by request_terms
)



but having no data to test with....

Question on Select Query

Newbie, October 04, 2005 - 8:50 am UTC

I would like to know if there is a select query which will return true if and only if all of a given set of rows are present in the table.

For ex:
In table ORDER_TYPE
order_id order_type
1 SHIP
2 PLANE


The select query should return true if and only if the ORDER_TYPE table contains both values, i.e. SHIP and PLANE.


Tom Kyte
October 04, 2005 - 4:28 pm UTC

queries don't really return "true" or "false", but rather a set of data.

select 1
from t
where order_type in ( 'SHIP', 'PLANE' )
having count(distinct order_type) = 2;


will return 1 or "nothing"

RE: NEWBIE

David Eads, October 04, 2005 - 3:57 pm UTC

How about:

select count(*) from dual
where exists(select * from order_type where order_type='SHIP')
and exists(select * from order_type where order_type='PLANE')

Tom Kyte
October 04, 2005 - 8:08 pm UTC

that works as well - there are many ways to do this.

Sql Query

ian gallacher, October 04, 2005 - 7:23 pm UTC

Creating a view using where ( select In )
consider this example :-
drop table t1;
create table t1
( t1_a varchar2(1),
t1_b varchar2(1));

drop table t2;
create table t2
( t2_a varchar2(1),
t2_b varchar2(1));

drop view v1;
create view v1 as
select * from t1 where t1_a in
( select distinct t1_b from t2);

Rem t1_b not in table T2! and view created without errors

View is created without errors
why does select distinct t1_b from t2 give no errors
since t1_b is not a valid column on table t2 !

Running
select distinct t1_b from t2 alone gives error
invalid column name as expected

Any comments would be appreciated

Ian

Tom Kyte
October 04, 2005 - 9:03 pm UTC

create view v1 as
select * from t1 where t1_a in
( select distinct t1.t1_b from t2);
^^^

is what you coded, perfectly 100% valid SQL and part of the way it is supposed to work.

it is called a correlated subquery, sort of like:

select * from dept
where exists ( select null from emp where emp.deptno = dept.deptno )


ALL columns are "passed" into the subquery as correlation variables.

(correlation names are good.... avoid this problem all together.)

Not a bug, but rather the way the SQL standard says to do it.
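Qualifying the columns with correlation names makes the slip visible at parse time - a sketch against the same tables:

```sql
-- fails with ORA-00904 at create/parse time, since t2 has no t1_b column:
select * from t1 where t1_a in ( select t2.t1_b from t2 );

-- presumably what was intended:
select * from t1 where t1_a in ( select t2.t2_b from t2 );
```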

Thanks

NewBie, October 04, 2005 - 11:29 pm UTC

Thank you all. Tom, this is a very useful site. Good work, keep it up. And one question: how can I also become an expert like you?



Tom Kyte
October 05, 2005 - 7:00 am UTC

sql

tony, October 05, 2005 - 2:36 am UTC

Hi tom,

I have a query here.
I have a table

V_FILE_NAM D_MIS_DATE D_SYS_DATE
ED_MISPRI 12/8/2003 1/9/2004 8:08:33 PM
ED_MISEXT 12/10/2003 1/9/2004 8:18:09 PM
ED_MISPRI 12/9/2003 1/9/2004 8:19:05 PM
ED_MISPRI 12/10/2003 1/14/2004 2:50:54 PM
ED_MISSTAT_020 12/10/2003 1/14/2004 4:05:52 PM
ED_MISSTAT_098 12/10/2003 1/14/2004 4:07:40 PM
ED_MISSTAT_101 12/10/2003 1/14/2004 4:07:51 PM
ED_MISSTAT_103 12/10/2003 1/14/2004 4:08:03 PM
ED_MISSTAT_104 12/10/2003 1/14/2004 4:08:13 PM
ED_MISSTAT_107 12/10/2003 1/14/2004 4:08:32 PM
ED_MISEXT 12/11/2003 1/15/2004 2:35:08 PM
ED_MISEXT 12/12/2003 1/15/2004 2:41:27 PM
ED_MISPRI 12/11/2003 1/15/2004 2:45:03 PM
ED_MISPRI 12/12/2003 1/15/2004 2:49:31 PM
ED_MISSTAT_020 12/11/2003 1/15/2004 2:56:49 PM
ED_MISSTAT_101 12/11/2003 1/15/2004 2:57:33 PM
ED_MISSTAT_103 12/11/2003 1/15/2004 2:57:41 PM
ED_MISSTAT_104 12/11/2003 1/15/2004 2:57:48 PM
ED_MISSTAT_107 12/11/2003 1/15/2004 2:57:56 PM
ED_MISSTAT_020 12/12/2003 1/15/2004 3:03:05 PM
ED_MISSTAT_101 12/12/2003 1/15/2004 3:03:26 PM
ED_MISSTAT_103 12/12/2003 1/15/2004 3:03:39 PM

here the D_SYS_DATE is the process time...

I want to get the monthly audit statement, where I need the min start time, the end time, and the avg time for a month. These are loading jobs; d_mis_date is the date on which the file came, and d_sys_date is the process date.

Tom Kyte
October 05, 2005 - 7:22 am UTC

well, given your data and date format - I'm not sure what the dates represent (dd/mm/yyyy or mm/dd/yyyy).


I don't know what MONTH a given record should fall into (the month of the START (d_mis_date) or the month of the END (d_sys_date))


but, once you figure that out, you just


select trunc(dt,'mm'),
min(d_mis_date),
max(d_sys_date),
avg(d_sys_date-d_mis_date)
from t
group by trunc(dt,'mm')


where DT is the column you picked to be "the month"

sql query using where/in

ian gallacher, October 05, 2005 - 12:22 pm UTC

Hi Tom,

Still a bit perplexed by the behaviour of the query, but live and learn!

Am I using the where in construct correctly or would you suggest a different approach ?

Real life example is

T1 is view of patient contact details conditioned by a user supplied date

T2 are sets of views of other patient information for patients who are in the T1 view

I have used the where/distinct/in T1 construct so that the conditioning for the selected patients is controlled by the one view; if the conditioning changes, only the T1 view needs to be amended.

I hope I have explained myself, and I look forward to hearing your comments.

Thanks

Ian


Tom Kyte
October 05, 2005 - 1:42 pm UTC

Yes, I would use IN


select * from t1 where some_key in ( select t2.some_other_key from t2 )


no need for distinct, the semantics of "in" don't necessitate that.

sql query using where/in

ian gallacher, October 05, 2005 - 4:10 pm UTC

Hi Tom

Thanks - will use In with confidence and remove distinct

Ian


query

mo, October 11, 2005 - 10:29 pm UTC

Tom:

Is there a way to count numbers in a varchar2 field?

ORG_CODE varchar2(10)
------------
ABC
123
456
DEF

For the above data the count should = 2.

THanks,

Tom Kyte
October 12, 2005 - 7:10 am UTC

create a plsql function that tries to convert a string to a number.

the function returns 1 when the conversion is successful
the function returns 0 when it fails

then you can select count(*) from t where is_number(string) = 1;


query

mo, October 12, 2005 - 7:33 am UTC

Tom:

Do you have this function anywhere on the website? Would you use to_number(x) and, if it works, return 1, else return 0?



Tom Kyte
October 12, 2005 - 7:45 am UTC

it is a pretty simple thing - try to convert - return 1, exception block - return 0.

ops$tkyte@ORA10GR1> select is_number( 'a' ) from dual;
 
IS_NUMBER('A')
--------------
             0
 
ops$tkyte@ORA10GR1> select is_number( '1' ) from dual;
 
IS_NUMBER('1')
--------------
             1
 
ops$tkyte@ORA10GR1> select is_number( '1e10' ) from dual;
 
IS_NUMBER('1E10')
-----------------
                1
 


but - if you know your numbers are only to be digits say 0..9 - you can of course simplify this greatly using builtin functions like translate.   
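A sketch of that translate approach, assuming the ORG_CODE values contain only the digits 0..9 when they are numbers at all:

```sql
-- translate maps every digit to nothing; the leading 'x' is needed only
-- because translate treats an empty "to" string as null.
-- a row counts when nothing but digits was present.
select count(*)
  from t
 where org_code is not null
   and translate(org_code, 'x0123456789', 'x') is null;
```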

query

mo, October 12, 2005 - 10:13 am UTC

Tom:

This is_number function does not work in 8i and 9i. The database is 8i.


SQL> select is_number( 'a' ) from dual;
select is_number( 'a' ) from dual
       *
ERROR at line 1:

ORA-00904: invalid column name

Also the number value is always numbers 0..9.

What would you do here.


 

Tom Kyte
October 12, 2005 - 1:58 pm UTC

I wanted you to write it - it is a very simple thing.

create or replace function is_number( p_str in varchar2 ) return number
as
    l_num number;
begin
    l_num := to_number(p_str);
    return 1;
exception
    when others then return 0;
end;
/



Missing data found!

Robert Simpson, October 12, 2005 - 11:51 am UTC

As for (3) in the original question, the answer to (2) works as long as the "missing" data is added to the table. In this case, the data should be:

AC006 10 DC001 2/1/2002
AC006 20 DC002 2/1/2002

AC006 0 DC001 5/1/2002
AC006 0 DC002 5/1/2002

AC006 100 DC003 5/1/2002
AC006 50 DC004 5/2/2002


SQl QUERY

Totu, October 13, 2005 - 12:40 am UTC

Dear All.

I have below tables:

[Docs]
DocID DocName
1 Doc1
2 Doc2
3 Doc3
....

[DocTasks]
DocID TaskID
1 1
1 2
2 1
2 3
2 2
3 2
3 1
4 5
4 2
4 1
4 3
5 3
5 2
....

[Tasks]
TaskID TaskName
1 Task1
2 Task2
3 Task3
4 Task4
...

Now I want to get the TaskIDs from the DocTasks table where each such TaskID belongs to all DocIDs in DocTasks. For example:
TaskID 1 belongs to DocIDs(1, 2, 3, 4) -> This TaskID is not needed
TaskID 2 belongs to ALL DocIDs(1, 2, 3, 4, 5) -> This TaskID is needed
TaskID 3 belongs to DocIDs(2, 4, 5) -> This TaskID is not needed. So on..
So, Query must return only TaskID=2, because only this TaskID belongs to All DocIDs.

Thanks in advance.

Tom Kyte
October 13, 2005 - 10:32 am UTC

i don't really even look at these without create tables and inserts - no promises that when they are placed here I can answer it - but......
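Pending that test case, the requirement reads like classic relational division - a sketch against the posted column names (untested, since no DDL was supplied): keep the tasks whose distinct document count equals the total distinct document count.

```sql
select task_id
  from doctasks
 group by task_id
having count(distinct doc_id) = (select count(distinct doc_id) from doctasks);
```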

Create table

Bala, October 13, 2005 - 3:10 am UTC

Hi Tom,
I need to create a table the same as a table that exists on another database. For example: I am connected to X and want to create a table on X which already exists on database Y, like
create table abc as select abc@Y .... etc.
Is there a way to do this?


Tom Kyte
October 13, 2005 - 10:34 am UTC

you just gave the entire syntax for it pretty much?

create table abc as select * from abc@y;



generic sql query

ramis, October 17, 2005 - 7:44 am UTC

Hi Tom,

would you please help this out for me

I have a query

select t.*,
case when col2 in (1, 17)
then rank() over (order by case when col2 in (1, 17) then 0 else null end, id)
else 0
end rank
from t
order by id
/

ID COL2 RANK
1 1 1
2 17 2
3 1 3
4 1 4
5 11 0
6 1 5


well that is perfectly fine,
can we make this query a generic one, that means given any set of two col2 values it shows the output


some things like this...

select t.*,
value1 = 1,
value2 = 17,
case when col2 in (value1, value2)
then rank() over (order by case when col2 in (value1, value2) then 0 else null end, id)
else 0
end rank
from t
order by id
/

output

ID COL2 RANK value1 value2
1 1 1 1 17
2 17 2 1 17
3 1 3 1 17
4 1 4 1 17
5 11 0 1 17
6 1 5 1 17

[/pre]

I mean, in such a query we would not need to change the col2 values
inside the CASE statement, but just pass the two values at the top,
which would give the same desired output.

also value1 and value 2 should act as columns

best regards,

create table t
(id number,
col2 number)
/


insert into t values ( 1, 1)
insert into t values (2, 17)
insert into t values (3 , 1)
insert into t values (4 , 1)
insert into t values (5 , 11)
insert into t values (6 , 1)




Tom Kyte
October 17, 2005 - 8:08 am UTC

sql is like a program - you cannot change the columns/tables whatever in a given query. You would have to write a procedure that dynamically BUILDS a given query based on inputs you send to it.

how to write a dynamic procedure

ramis, October 17, 2005 - 9:52 am UTC

Hi, tom,
thanks for your response,

would you please help me with writing a procedure, as you mentioned above, that dynamically builds a given query based on the inputs we send to it.

regards,
Ramis.

Tom Kyte
October 17, 2005 - 10:08 am UTC

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279 <code>

has an example, there are tons of them - it is just dynamic sql
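For the specific query above, note that only the two *values* change - so a stored function can return a ref cursor over a static query with the inputs as plain bind values; dynamic SQL is only needed when table or column names themselves vary. A sketch (hypothetical name rank_for, untested):

```sql
create or replace function rank_for( p_value1 in number, p_value2 in number )
    return sys_refcursor
as
    l_rc sys_refcursor;
begin
    -- p_value1/p_value2 are ordinary PL/SQL variables here, so the same
    -- shared cursor is reused for every pair of inputs
    open l_rc for
        select t.*, p_value1 value1, p_value2 value2,
               case when col2 in (p_value1, p_value2)
                    then rank() over (order by
                             case when col2 in (p_value1, p_value2)
                                  then 0 end, id)
                    else 0
               end rnk
          from t
         order by id;
    return l_rc;
end;
/
```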

A reader, October 19, 2005 - 11:24 am UTC

Hi Tom,

Your advice to solve this problem, would be greatly appreciated.

DROP TABLE T
/
create table T
(id number
,qty number
,trn_date date
)
/
insert into t values (1,1,'01-JAN-2005')
/
insert into t values (1,3,'03-JAN-2005')
/
insert into t values (1,7,'05-JAN-2005')
/
select * from t;

ID QTY TRN_DATE
-------------------- -------------------- ---------
1 1 01-JAN-05
1 3 03-JAN-05
1 7 05-JAN-05

For any given day, calculate the qty based on date:
The output required as:

ID QTY TRN_DATE
1 1 01-JAN-05
1 1 02-JAN-05
1 3 03-JAN-05
1 3 04-JAN-05
1 7 05-JAN-05

This is in Data Warehousing environment in Oracle 9.2.0.5.
Table has around 5 million rows with 1.5 million unique ids.
Any pointers to achieve this would be greatly appreciated.
I wrote a function to calculate the quantity for any given day. For 1.5M ids, it takes around 25 min to run.
Is there anyway, we can achieve this in single sql.

Thanks in advance.

Tom Kyte
October 19, 2005 - 12:37 pm UTC

need more info.

assuming in real life - more than 1 id right and this should be "by id"

assuming in real life you will either

a) expect to find the min/max date by id for trn_date from the table and use that range for each ID

or...

b) expect to find the min/max date OVER ALL records for trn_date and use that range for each ID

or...

c) you input the date range of interest and we only use that range.


(this is a simple carry down, we can do this with analytics)


A reader, October 19, 2005 - 8:35 pm UTC

Hi Tom,

You are correct.

For the front-end tool to run faster, they want me to create a table with trn_quantity for every day, for all ids. There is no date range. Can you please illustrate how to achieve this with analytics? Thanks for your advice.

Tom Kyte
October 20, 2005 - 7:51 am UTC

I asked a three part OR question.

I cannot be correct ;) I don't know which of the three is right.


what is "everyday"

A reader, October 20, 2005 - 9:55 am UTC

Tom,

The table has to be populated with qty and trn_date on a daily basis, even though there was no transaction for that date.
The quantity has to be derived from the most recent quantity for that id. Hope I am clear.

Tom Kyte
October 20, 2005 - 4:28 pm UTC

Ok, I put this in october to make it have less rows to demo with:

ops$tkyte@ORA10GR1> select * from t;

        ID        QTY TRN_DATE
---------- ---------- ---------
         1          1 01-OCT-05
         1          3 03-OCT-05
         1          7 05-OCT-05

ops$tkyte@ORA10GR1>

Your initial "populate" will insert these values, set start_date to your lowest start date you want:


ops$tkyte@ORA10GR1> variable start_date varchar2(20)
ops$tkyte@ORA10GR1> exec :start_date := '01-oct-2005';

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> with dates
  2  as
  3  (select to_date(:start_date,'dd-mon-yyyy')+level-1 dt
  4     from dual
  5  connect by level <= sysdate-to_date(:start_date,'dd-mon-yyyy')+1 ),
  6  ids
  7  as
  8  (select distinct id from t ),
  9  dates_ids
 10  as
 11  (select * from dates, ids )
 12  select dates_ids.id, dates_ids.dt, max(t.qty) over (partition by dates_ids.id order by dates_ids.dt) qty
 13    from dates_ids left outer join t on (dates_ids.id = t.id and dates_ids.dt = t.trn_date);

        ID DT               QTY
---------- --------- ----------
         1 01-OCT-05          1
         1 02-OCT-05          1
         1 03-OCT-05          3
         1 04-OCT-05          3
         1 05-OCT-05          7
         1 06-OCT-05          7
         1 07-OCT-05          7
         1 08-OCT-05          7
         1 09-OCT-05          7
         1 10-OCT-05          7
         1 11-OCT-05          7
         1 12-OCT-05          7
         1 13-OCT-05          7
         1 14-OCT-05          7
         1 15-OCT-05          7
         1 16-OCT-05          7
         1 17-OCT-05          7
         1 18-OCT-05          7
         1 19-OCT-05          7
         1 20-OCT-05          7

20 rows selected.


<b>and then every day, you run this</b>

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select id, sysdate, qty
  2    from (select id, trn_date, max(trn_date) over (partition by id) max_dt, qty from t)
  3   where trn_date = max_dt;

        ID SYSDATE          QTY
---------- --------- ----------
         1 20-OCT-05          7

<b>and if you "miss a day" for whatever reason, just use the first query again to fill in the gaps</b>
 

A reader, October 20, 2005 - 6:30 pm UTC

Brilliant, as usual!!!

Thanks a lot.
I need to understand the concept behind your first query.
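The building block in that first query is the "connect by level" row generator against DUAL. In isolation (a sketch):

```sql
-- one row per day from the start date through today:
-- connect by level <= N returns rows with level = 1 .. N
select to_date('01-oct-2005','dd-mon-yyyy') + level - 1 dt
  from dual
connect by level <= sysdate - to_date('01-oct-2005','dd-mon-yyyy') + 1;
```

The query then cartesian-joins that date list to the distinct ids and outer-joins the result to T, so every id gets a row for every day and the analytic function can carry the last quantity forward.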



sql query..

ramis, October 23, 2005 - 11:57 am UTC

Hi Tom,

i have a db table having thousands of records

two of the columns of that table are



MATCH_ID TEAM_ID
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 3
1 3
1 3
1 3
1 3
1 3
1 3
1 3
1 3
1 3
1 3



each MATCH_ID has 22 occurrences of two different TEAM_IDs (11 each);
this pattern is followed throughout the table,
i.e. in the above data MATCH_ID = 1 has 11 occurrences of TEAM_ID = 2
and 11 occurrences of TEAM_ID = 3.

I want the shortest possible query that shows the opposite TEAM_ID
against the other TEAM_ID in the same MATCH_ID.

for example (desired output)



MATCH_ID TEAM_ID OPP_TEAM_ID
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2
1 3 2



as you can see, there are two different TEAM_IDs
in the data for MATCH_ID = 1,
so the third column shows the opposite TEAM_ID for the other
TEAM_ID in the same match.

I hope this clears up my requirement.
I want the shortest possible query to achieve this.




create table t
(match_id number(2),
team_id number(2))
/


insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,2);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);
insert into T values (1,3);



Tom Kyte
October 23, 2005 - 1:45 pm UTC

ops$tkyte@ORA10GR1> select match_id, team_id,
  2         decode( team_id,
  3                     max(team_id) over (partition by match_id) ,
  4                 min(team_id) over (partition by match_id) ,
  5                     max(team_id) over (partition by match_id) ) other_tid
  6    from t
  7  /

  MATCH_ID    TEAM_ID  OTHER_TID
---------- ---------- ----------
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          2          3
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2
         1          3          2

22 rows selected.
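Since each match has exactly two distinct team_ids (the invariant stated above), a shorter equivalent is possible - a sketch relying on that invariant: the opponent is the sum of the match's two team ids minus the current row's team id.

```sql
-- min+max over the match gives the two team ids added together;
-- subtracting the current row's team_id leaves the opponent
select match_id, team_id,
       min(team_id) over (partition by match_id)
     + max(team_id) over (partition by match_id)
     - team_id other_tid
  from t;
```

This breaks down if a match ever has only one team_id (the "opponent" would be the team itself), so the decode version above is the safer general form.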
 

sql query for this??

ramis, November 01, 2005 - 8:37 am UTC

Hi

I have a table with data in the following format


Col1 COL2
A 2
B
C
D
E
F 45
G
H
I 33


I want a query that would fill the gaps with the last non-null value in col2

desired output

Col1 COL2 output_col
A 2 2
B 2
C 2
D 2
E 2
F 45 45
G 45
H 45
I 33 33


as you can see, the output column shows the last non-null value of col2 for each row. If for any row it finds a new col2 value, it shows that value for that row and for the following rows, until it finds another new value, and so on.

How do I achieve that with a SQL query?

regards,

create table t
(col1 varchar2(10),
col2 number(5))

INSERT INTO T VALUES ('A',2);
INSERT INTO T VALUES ('B',NULL);
INSERT INTO T VALUES ('C',NULL);
INSERT INTO T VALUES ('D',NULL);
INSERT INTO T VALUES ('E',NULL);
INSERT INTO T VALUES ('F',45);
INSERT INTO T VALUES ('G',NULL);
INSERT INTO T VALUES ('H',NULL);
INSERT INTO T VALUES ('I',33);



Tom Kyte
November 01, 2005 - 10:58 am UTC

10g and up and then 8i-9i solutions:

ops$tkyte@ORA10GR2> select col1, col2, last_value(col2 ignore nulls ) over (order by col1) oc
  2    from t;

COL1             COL2         OC
---------- ---------- ----------
A                   2          2
B                              2
C                              2
D                              2
E                              2
F                  45         45
G                             45
H                             45
I                  33         33

9 rows selected.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select col1, col2,
  2         to_number( substr( max(oc1) over (order by col1), 7 ) ) oc
  3    from (
  4  select col1, col2,
  5         case when col2 is not null
  6                  then to_char(row_number() over (order by col1),'fm000000')||
  7                               col2
  8                  end oc1
  9    from t );

COL1             COL2         OC
---------- ---------- ----------
A                   2          2
B                              2
C                              2
D                              2
E                              2
F                  45         45
G                             45
H                             45
I                  33         33

9 rows selected.

 

Creating table on another db

Kumar, November 02, 2005 - 12:01 am UTC

Hi Tom,
Earlier one person had asked about creating a table on different db and you had replied

create table abc as select * from abc@y;

But, when I tried this, I am getting
ORA-02019 connection description for remote database not found.

Regards,
kumar



Tom Kyte
November 02, 2005 - 5:03 am UTC

you need to create a database link named "Y" first - see the create database link command in the sql reference.
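A minimal sketch (hypothetical credentials and TNS alias - substitute your own; the alias 'Y' must resolve in the local server's tnsnames.ora):

```sql
create database link y
   connect to scott identified by tiger
   using 'Y';

create table abc as select * from abc@y;
```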

Cursor question

jp, November 04, 2005 - 6:19 am UTC

Tom,

I am trying to query information from one table and insert it into a new table, but one of the columns is an expression and I always get an error.

SQL> create table tab1
  2  (jobs varchar2(20),
  3  tot_salary number);

Table created.

SQL> create or replace procedure ins_dat is
  2
  3  CURSOR c1_cur is
  4  select job "j", sum(sal) "s" from emp
  5  group by job;
  6  begin
  7  for c1_rec in c1_cur
  8  Loop
  9  insert into tab1 values (c1_rec.j,c1_rec.s);
 10
 11  end loop;
 12  commit;
 13  end;
 14  /

Warning: Procedure created with compilation errors.

SQL> show err
Errors for PROCEDURE INS_DAT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
9/1      PL/SQL: SQL Statement ignored
9/42     PLS-00302: component 'S' must be declared
9/42     PL/SQL: ORA-00984: column not allowed here

Then I tried declaring both variables,  but still getting errors..

SQL> create or replace procedure ins_dat is
  2  j varchar2(20);
  3  s number;
  4  CURSOR c1_cur is
  5  select job "j", sum(sal) "s" from emp
  6  group by job;
  7  begin
  8  for c1_rec in c1_cur
  9  Loop
 10  insert into tab1 values (c1_rec.j,c1_rec.s);
 11
 12  end loop;
 13  commit;
 14  end;
 15  /

Warning: Procedure created with compilation errors.

SQL> show err
Errors for PROCEDURE INS_DAT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
10/1     PL/SQL: SQL Statement ignored
10/42    PLS-00302: component 'S' must be declared
10/42    PL/SQL: ORA-00984: column not allowed here

Can you help me to sort it out this?

thanks  

Tom Kyte
November 04, 2005 - 8:49 am UTC

"j" and "s" are lower case

c1_rec."j"
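Put differently, either keep the quoted, case-sensitive names everywhere, or drop the quotes so the aliases fold to upper case; a sketch of the unquoted version:

```sql
create or replace procedure ins_dat is
  cursor c1_cur is
    select job j, sum(sal) s    -- unquoted aliases fold to J and S
      from emp
     group by job;
begin
  for c1_rec in c1_cur
  loop
    insert into tab1 values ( c1_rec.j, c1_rec.s );
  end loop;
  commit;
end;
/
```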

sql query help?

saad, November 07, 2005 - 8:52 am UTC

HI Tom

i have a db table having thousands of records

three of the columns of that table are


NO.  M_ID  T_ID
  1     1     2
  2     1     2
  3     1     3
  4     1     3
  5     2     6
  6     2     9
  7     3    10
  8     3     2

The No. column shows the chronological order of records in the table (this is the primary key column).

Each M_ID has two different T_IDs, and this pattern is followed throughout the table, i.e. in the above data M_ID = 1 has two distinct T_IDs: 2 and 3.

I want the shortest possible query that shows the ascending number of distinct T_IDs within each M_ID, ordered by the No. column.

for example (desired output)

NO.  M_ID  T_ID  desired_column
  1     1     2               1
  2     1     2               1
  3     1     3               2
  4     1     3               2
  5     2     6               1
  6     2     9               2
  7     3    10               1
  8     3     2               2


As you can see, there are two different T_IDs in the data for M_ID = 1, and T_ID = 2 comes before T_ID = 3 within M_ID = 1. So the fourth column shows 1 for T_ID = 2 and 2 for T_ID = 3.

Hope this clears up my requirement. I want the shortest possible query to achieve this.
regards,



create table t
(Num_c number(4),
M_ID number(2),
T_ID number(2))
/


insert into T values (1,1,2) ;
insert into T values (2,1,2) ;
insert into T values (3,1,3) ;
insert into T values (4,1,3) ;
insert into T values (5,2,6) ;
insert into T values (6,2,9);
insert into T values (7,3,10) ;
insert into T values (8,3,2) ;


Tom Kyte
November 07, 2005 - 11:29 am UTC

well, count(distinct t_id) over (partition by m_id order by num_c) won't do it - you cannot order by a distinct in this case....


would counting state changes be sufficient with your data? that is, if the data is:


1 1 2 (1)
2 1 3 (2)
3 1 3 (2)
4 1 2 (3) <<<=== does this happen, 2 to 3 to 2? and if so, is double
counting like this permitted?

just a little problem remains...??

saad, November 07, 2005 - 10:39 am UTC

Tom, in my question above "SQL query help?"

I have managed to solve my problem a bit, but it is still showing the wrong result for some rows.

this was the data


     NUM_C       M_ID       T_ID
---------- ---------- ----------
         1          1          2
         2          1          2
         3          1          3
         4          1          3
         5          2          6
         6          2          9
         7          3         10
         8          3          2

and this was my desired output

NUM_C  M_ID  T_ID  RESULT_COLUMN
    1     1     2              1
    2     1     2              1
    3     1     3              2
    4     1     3              2
    5     2     6              1
    6     2     9              2
    7     3    10              1
    8     3     2              2

i used this query

select num_c
, m_id
, t_id
, dense_rank() over (partition by m_id order by t_id) result_column
from t

and it gave me this

NUM_C M_ID T_ID RESULT_COLUMN
--------- ---------- ---------- -------------
1 1 2 1
2 1 2 1
3 1 3 2
4 1 3 2
5 2 6 1
6 2 9 2
8 3 2 1
7 3 10 2



Please check the last two rows: the query puts num_c = 8 before num_c = 7 and hence gives the wrong ranking.
I want to rank the distinct T_IDs within each M_ID in the order of the num_c column.

please help me to solve this one!
regards,

Tom Kyte
November 07, 2005 - 8:17 pm UTC

you ordered by t_id, not by num_c - therefore you have no reason to say "it gives wrong answer", it sorted by a completely different field ?!?!?

query problem??

saad, November 07, 2005 - 8:31 pm UTC

Tom,
in my question above, by "wrong result" I meant not according to my requirement, i.e. not matching my desired output.
Would you please fix that one? The DDL and data are given with my original question (two posts above).

regards,

to: Saad.

Marcio Portes, November 08, 2005 - 12:28 am UTC

Saad, as far as I understand you've got your desired result except for the order, so just order it.

select *
from (
select num_c
, m_id
, t_id
, dense_rank() over (partition by m_id order by t_id) result_column
from t
)
order by num_c



problem still not solved??

saad, November 08, 2005 - 2:56 pm UTC

Marcio Portes and Tom,
no my problem is not solved as you have suggested above in your query

select *
from (
select num_c
, m_id
, t_id
, dense_rank() over (partition by m_id order by t_id) result_column
from t
)
order by num_c


it gives this

     NUM_C       M_ID       T_ID RESULT_COLUMN
---------- ---------- ---------- -------------
         1          1          2             1
         2          1          2             1
         3          1          3             2
         4          1          3             2
         5          2          6             1
         6          2          9             2
         7          3         10             2
         8          3          2             1

see the last two rows, for num_c = 7 it puts RESULT_COLUMN = 2 and for num_c = 8 it puts RESULT_COLUMN = 1

it should have been

NUM_C  M_ID  T_ID  RESULT_COLUMN
    7     3    10              1
    8     3     2              2

The ranking should be done ordered by the num_c column, as I said in my original question above.

Tom, looking for your help??



Tom Kyte
November 08, 2005 - 10:33 pm UTC

not sure you can do this, but if you give me a create table and insert into's, i'll give it a try (I think i have an idea...)

select distinct

A reader, November 08, 2005 - 4:08 pm UTC

I have a table datflow which holds 5-minute data, with columns:
METERID
RECORDERTYPE
TESTDATETIME
AVGOFFLOWVALUE
DATE_
For each meterid and recordertype I need to select the count of distinct days in a month (based on the date/time stamp), something like:

select meterid, recordertype,
count(distinct number of days in this month trunc(date_,'mm')) datflow_5m
group by meterid, recordertype, trunc(date_,'mm')
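No test case was given, but assuming DATE_ is a regular Oracle DATE column, a sketch of one way to write that (table and column names taken from the post above):

```sql
select meterid,
       recordertype,
       trunc(date_, 'mm')           month,
       count(distinct trunc(date_)) distinct_days
  from datflow
 group by meterid, recordertype, trunc(date_, 'mm');
```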


To Saad:

Ravi, November 09, 2005 - 3:45 am UTC

Saad,
Below is one way to get the result. It's without using analytics.
select num_c,
m_id,
t_id,
(select count(distinct(t_id)) from t where m_id=t1.m_id and num_c<=t1.num_c) rank1
from t t1


Tom Kyte
November 09, 2005 - 9:48 am UTC

yes, very good - thanks!

missed the obvious, won't be really "performant" but for small sets, that'll work.
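If, as Tom asked earlier, a t_id never recurs within an m_id once it has changed (no 2 to 3 to 2 pattern), an analytic alternative is to count state changes with lag(); this is a sketch against the posted table t:

```sql
select num_c, m_id, t_id,
       sum(chg) over (partition by m_id order by num_c) result_column
  from (select num_c, m_id, t_id,
               case when t_id = lag(t_id)
                             over (partition by m_id order by num_c)
                    then 0
                    else 1
               end chg
          from t)
 order by num_c;
```

Unlike the scalar subquery, this reads the table only once.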

To Ravi from India

saad, November 09, 2005 - 9:29 am UTC

Ravi, thank you so much for your query its done!

select num_c,
m_id,
t_id,
(select count(distinct(t_id)) from t where m_id=t1.m_id and
num_c<=t1.num_c) rank1
from t t1

/

NUM_C       M_ID       T_ID      RANK1
------ ---------- ---------- ----------
     1          1          2          1
     2          1          2          1
     3          1          3          2
     4          1          3          2
     5          2          6          1
     6          2          9          2
     7          3         10          1
     8          3          2          2

regards,

null for duplicate names??sql help

ali, November 09, 2005 - 12:00 pm UTC

Hi,

I am facing a problem; hope you can help.

I am running a query that shows me name of employees from a table with their
employee_ids

select emp_id, emp_name from t
/

EMP_ID  EMP_NAME
  1820  Waugh
  1920  King
  1936  Ric
  1940  Taylor
  2036  Smith
  2123  Paul
  2157  ALi
  2181  Smith
  2189  Joe
  2332  Bichel
  2333  Usman

I want that if more than one row with the same EMP_NAME is returned by the query, then the query should not show the emp_name but should instead show 'DUPLICATE NAME' for the matching names.

So in my data above the name Smith appears twice, and my desired output is:

EMP_ID  EMP_NAME
  1820  Waugh
  1920  King
  1936  Ric
  1940  Taylor
  2036  DUPLICATE NAME
  2123  Paul
  2157  ALi
  2181  DUPLICATE NAME
  2189  Joe
  2332  Bichel
  2333  Usman


Hope this clears up my requirement. I need the fastest and shortest possible query to achieve this.

CREATE TABLE T
(EMP_ID NUMBER(4),
EMP_NAME VARCHAR2(25))
/

INSERT INTO T VALUES (1820,'Waugh');
INSERT INTO T VALUES (1920,'King');
INSERT INTO T VALUES (1936,'Ric');
INSERT INTO T VALUES (1940,'Taylor');
INSERT INTO T VALUES (2036,'Smith');
INSERT INTO T VALUES (2123,'Paul');
INSERT INTO T VALUES (2157,'ALi');
INSERT INTO T VALUES (2181,'Smith');
INSERT INTO T VALUES (2189,'Joe');
INSERT INTO T VALUES (2332,'Bichel');
INSERT INTO T VALUES (2333,'Usman');


regards,




Tom Kyte
November 11, 2005 - 10:21 am UTC

ops$tkyte@ORA10GR2> select emp_id,
  2         case when count(emp_name) over (partition by emp_name) > 1
  3              then 'duplicate name'
  4              else emp_name
  5          end emp_name
  6    from t;

    EMP_ID EMP_NAME
---------- --------------------
      2157 ALi
      2332 Bichel
      2189 Joe
      1920 King
      2123 Paul
      1936 Ric
      2181 duplicate name
      2036 duplicate name
      1940 Taylor
      2333 Usman
      1820 Waugh

11 rows selected.

 

re: null for duplicate names??sql help

Jonathan Taylor, November 09, 2005 - 12:14 pm UTC

SQL> select emp_id
  2  , decode (count(*) over (partition by emp_name)
  3           ,1,emp_name
  4           ,'DUPLICATE NAME'
  5           ) emp_name
  6  from t
  7  ;

EMP_ID EMP_NAME
------ -------------------------
  2157 ALi
  2332 Bichel
  2189 Joe
  1920 King
  2123 Paul
  1936 Ric
  2036 DUPLICATE NAME
  2181 DUPLICATE NAME
  1940 Taylor
  2333 Usman
  1820 Waugh

11 rows selected
 

fz, November 09, 2005 - 2:22 pm UTC

From:

A          B
---------- -
         1 x
         1 y
         2 y
         3 x

is there a simple SQL to get the following?

A          B
---------- -
         1 x
         2 y
         3 x

(whenever 'A' is same, just take value 'x').


Tom Kyte
November 11, 2005 - 10:26 am UTC

select a, max(b) from t group by a;

..to fz

Alex N, November 10, 2005 - 7:58 am UTC

select a
, b
from (select a
, b
, Row_Number() over (partition by a
order by b
) rn
from t
)
where rn = 1


understanding not in operator

Vaibhav, November 10, 2005 - 8:47 am UTC

SQL> select * from temp1;

CASE_ID    BRIDGE_ID
---------- --------------------
A          id1
B          id2
C          id3

SQL> select * from temp2;

USER_CD    BRIDGE_ID
---------- ---------------
U1         id1
U2
U3         id2

SQL> select * from temp1 where bridge_id not in (select bridge_id from temp2);

no rows selected

SQL> select BRIDGE_ID from temp1
  2  minus
  3  select bridge_id from temp2;

BRIDGE_ID
--------------------
id3


If there's a null value, why doesnt the "not in" query work ?  

Tom Kyte
November 11, 2005 - 11:50 am UTC

not in query did work, it did exactly what not in is defined and supposed to do.


when you have:

where x not in ( select y from t );

and some Y is NULL - it is "not known" if X is "not in" that set - nulls change everything


not exists and not in are "not equivalent"
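One common workaround, assuming id3 is the row you want back, is to exclude the NULLs in the subquery explicitly, or to use NOT EXISTS; a sketch against the temp tables above:

```sql
-- filter the NULLs out of the NOT IN list
select *
  from temp1
 where bridge_id not in (select bridge_id
                           from temp2
                          where bridge_id is not null);

-- or the NOT EXISTS formulation
select *
  from temp1 t1
 where not exists (select null
                     from temp2 t2
                    where t2.bridge_id = t1.bridge_id);
```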

To Ferdinand Maer -: Ename with space

Jagjeet Singh, November 10, 2005 - 9:23 am UTC


SQL> r
  1  select ename, substr(e,instr(e,':')+1) ename2 from
  2*  ( select ename, replace( dump(ename,17),',',' ') e from emp )

ENAME        ENAME2
------------ ---------------------------------------------
SMITH         S M I T H
ALLEN         A L L E N
WARD          W A R D
JONES         J O N E S
MARTIN        M A R T I N
BLAKE         B L A K E
CLARK         C L A R K
SCOTT         S C O T T
KING          K I N G
TURNER        T U R N E R
ADAMS         A D A M S
JAMES         J A M E S
FORD          F O R D
MILLER        M I L L E R

14 rows selected.
 

To Jagjeet Singh

Matthias Rogel, November 11, 2005 - 11:34 am UTC

nice idea, but not 100%-correct

insert into emp(ename) values ('A,B')


how to generate seasons??

saadia, November 11, 2005 - 1:56 pm UTC

Tom,

I want to generate all the seasons from 1950 up to the 2005-06 season with a query, in this format:

SEASONS
1950
1950-51
1951
1951-52
1952
1952-53
1953
1953-54
....
....
....
....
2005
2005-06

What would be the query to get this output?
regards,


Tom Kyte
November 12, 2005 - 8:52 am UTC

ops$tkyte@ORA10GR2> with nums
  2  as
  3  (select ceil(level/2)-1 l, rownum r
  4     from dual
  5   connect by level <= (to_number(to_char(sysdate,'yyyy'))-1950+1)*2 )
  6  select to_char( 1950 + l ) || case when mod(r,2) = 0 then '-' || substr(to_char( 1950+l+1, 'fm9999') ,3) end dt
  7    from nums
  8  /

DT
--------------------------------------------
1950
1950-51
1951
1951-52
...
2004
2004-05
2005
2005-06

112 rows selected.

 

To -: Matthias Rogel

Jagjeet Singh, November 12, 2005 - 10:52 am UTC

I knew this. One needs to first substitute the comma with a single character like '~' or '*', or with any string that will not appear in your data, and put it back afterwards.

I am using '@!$#%^':

SQL> r
  1  Select ename, replace( substr(e,instr(e,':')+1),'@ ! $ # % ^',',') e from
  2* (select ename,replace(dump(replace(ename,',','@!$#%^'),17),',',' ') e from emp)

ENAME                          E
------------------------------ ------------------------------------------------------------
SMITH                           S M I T H
ALLEN                           A L L E N
WARD                            W A R D
JONES                           J O N E S
MARTIN                          M A R T I N
BLAKE                           B L A K E
CLARK                           C L A R K
SCOTT                           S C O T T
KING                            K I N G
TURNER                          T U R N E R
ADAMS                           A D A M S
JAMES                           J A M E S
FORD                            F O R D
MILLER                          M I L L E R
A,B                             A , B

 

saadia, November 13, 2005 - 9:13 am UTC

Tom, the query given in your reply to my question (two posts above) is just great, thank you so much! The great thing about this query is that it generates the next season as SYSDATE changes.

Tom, currently the output starts from the 1950 season; suppose I want to start it from the 1949-50 season, what would the query be for that?

desired output

DT
--------------------------------------------
1949-50
1950
1950-51
1951
1951-52
...
2004
2004-05
2005
2005-06


your query is this..

with nums
as
(select ceil(level/2)-1 l, rownum r
from dual
connect by level <= (to_number(to_char(sysdate,'yyyy'))-1950+1)*2 )
select to_char( 1950 + l ) || case when mod(r,2) = 0 then '-' ||
substr(to_char( 1950+l+1, 'fm9999') ,3) end dt
from nums


DT
--------------------------------------------
1950
1950-51
1951
1951-52
...
2004
2004-05
2005
2005-06

112 rows selected.


Tom Kyte
November 13, 2005 - 10:35 am UTC

just change my 1950 to 1949? in all cases...



problem not solved

saadia, November 13, 2005 - 3:28 pm UTC

Tom,
thanks for your reply, but changing 1950 to 1949 does not solve my problem because it starts the seasons from 1949, whereas I want to start from 1949-50

here i ran the query as you said

with nums
as
(select ceil(level/2)-1 l, rownum r
from dual
connect by level <= (to_number(to_char(sysdate,'yyyy'))-1949+1)*2 )
select to_char( 1949 + l ) || case when mod(r,2) = 0 then '-' ||
substr(to_char( 1949+l+1, 'fm9999') ,3) end dt
from nums
/


DT
-------------------------------------------
1949 <==== it starts season from 1949
1949-50 <==== but I want to start from here i.e. 1949-50
1950
1950-51
1951
1951-52
1952
1952-53
1953
1953-54
1954
...
2005-06

114 rows selected.

would you please help me out..
regards.



Tom Kyte
November 13, 2005 - 5:10 pm UTC

where dt <> '1949'

(the quotes matter: dt is a string, and comparing it to the number 1949 would force a to_number of values like '1949-50' and raise ORA-01722)


just add a predicate to get rid of the starting row if that is what you want.
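Putting it together, a sketch of the 1949-50 variant with that predicate:

```sql
with nums as
 (select ceil(level/2)-1 l, rownum r
    from dual
  connect by level <= (to_number(to_char(sysdate,'yyyy'))-1949+1)*2 )
select dt
  from (select to_char( 1949 + l ) ||
               case when mod(r,2) = 0
                    then '-' || substr(to_char( 1949+l+1, 'fm9999'), 3)
               end dt
          from nums)
 where dt <> '1949';
```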

Insert or Update statement in CASE statement

Tata Jagadeesh, November 14, 2005 - 3:37 am UTC

Hi Tom,

1. Is it possible to use insert or update statement in CASE statement. If so, please provide an example.

2. Are there any chances NVL function fail and lead to exception?

Thanks in advance.

Tom Kyte
November 14, 2005 - 9:09 am UTC

1) you have that "backwards", you can use case in an update and insert in SQL.

2) sure, nvl(x,1/0) for example
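To illustrate point 1, a sketch of CASE used inside an update against the scott emp table (the job names and multipliers are made up for the example):

```sql
-- raise clerks by 10%, analysts by 5%, leave everyone else alone
update emp
   set sal = case when job = 'CLERK'   then sal * 1.10
                  when job = 'ANALYST' then sal * 1.05
                  else sal
             end;
```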

A reader, November 14, 2005 - 6:38 am UTC

Jagdeesh,

Why don't you try it yourself ?

Let's all collectively not waste such a precious gift to oracle community by asking such trivial questions. A gentle plea to all

most before first occurrence

Ramis, December 15, 2005 - 4:50 pm UTC

Hello,

I am facing a problem hope anyone would help..
I have a table T with 15 columns and data stored in the following format

match_id  player_id  score  place  minutes
       1         11     19  A           12
       2         12     34  B           24
       3         11    101  C          112
       4         11    201  D          121
       5         12    211  E          222
       6         13    181  F          322
       7         11     34  A           45
       8         12     12  G           10
       9         13     45  C           65
      10         11         H
      10         12         I
      10         13         D
      10         14         A
      11         11     51  H           76
      11         12     65  I           98
      11         13     76  D           76
      11         14     56  A           45

and so on, up to 100,000 records.
Null entries show the player didn't get the chance to score.

Here match_id and player_id are jointly the primary key (i.e. their combination does not repeat).

I want to extract some information for each player before he made his first score between 50 and 99, ordered by match_id.

so my desired output for above data is

player_id  =>100  sum(score)  sum(minutes)  no_of_mats  place_of_1st_50-99_score  match_id
       11      2         355           290           5  B                               11
       12      1         322           256           4  I                               11
       13      1         226           387           3  D                               11

Player_id = 14 does not appear because he made a score of 56 (i.e. between 50 and 99) in his very first scoring effort.

Here,
=>100 is the number of times the player scored 100 or more before his first 50-99 score, ordered by match_id;
sum(score) is the sum of all his scores before his first 50-99 score;
sum(minutes) is the sum of all his minutes before his first 50-99 score;
no_of_mats is the number of matches in which he took part before his first 50-99 score;
place_of_1st_50-99_score is the 'place' where he scored his first 50-99 score for the first time;
match_id is the match_id in which he scored his first 50-99 score for the first time.

In my desired output you can see that player_id = 11, for example, made two scores of 100 or more before his first 50-99 score; similarly, the sum of his scores was 355, the sum of his minutes was 290, and his total matches were 5 before his first score between 50 and 99.

The same is the case with the other player_ids, except 14, who made his 50-99 score in his first scoring effort and hence does not appear.

Hope this clears up my problem.
I would like the shortest and most efficient query possible for this.

regards

CREATE TABLE T
(match_id NUMBER(4),
player_ID NUMBER(4),
Score NUMBER(4),
Place VARCHAR2(10),
Minutes NUMBER(4))

INSERT INTO T VALUES (1,11,19,A,12)
INSERT INTO T VALUES (2,12,34,B,24)
INSERT INTO T VALUES (3,11,101,C,112)
INSERT INTO T VALUES (4,11,201,D,121)
INSERT INTO T VALUES (5,12,211,E,222)
INSERT INTO T VALUES (6,13,181,F,322
INSERT INTO T VALUES (7,11,34,A,45)
INSERT INTO T VALUES (8,12,12,G,10)
INSERT INTO T VALUES (9,13,45,C,65)
INSERT INTO T VALUES (10,11,NULL,H,NULL)
INSERT INTO T VALUES (10,12,NULL,I,NULL)
INSERT INTO T VALUES (10,13,NULL,D,NULL)
INSERT INTO T VALUES (10,14,NULL,A,NULL)
INSERT INTO T VALUES (11,11,51,H,76)
INSERT INTO T VALUES (11,12,65,I,98)
INSERT INTO T VALUES (11,13,76,D,76)
INSERT INTO T VALUES (11,14,56,A,45)




Tom Kyte
December 15, 2005 - 5:16 pm UTC

(test inserts to discover they don't work).



A solution

Michel Cadot, December 16, 2005 - 1:22 am UTC

Hi,

I answer less questions than Tom, so i have more time to correct the test case. ;)

SQL> select player_id, match_id, score, place, minutes
  2  from t
  3  order by match_id, player_id
  4  /
 PLAYER_ID   MATCH_ID      SCORE PLACE         MINUTES
---------- ---------- ---------- ---------- ----------
        11          1         19 A                  12
        12          2         34 B                  24
        11          3        101 C                 112
        11          4        201 D                 121
        12          5        211 E                 222
        13          6        181 F                 322
        11          7         34 A                  45
        12          8         12 G                  10
        13          9         45 C                  65
        11         10            H
        12         10            I
        13         10            D
        14         10            A
        11         11         51 H                  76
        12         11         65 I                  98
        13         11         76 D                  76
        14         11         56 A                  45

17 rows selected.

SQL> select player_id, ge100, sum_score, sum_min, nb_match-1 nb_match,
  2         place, match_id
  3  from ( select player_id, match_id, score, place, 
  4                count(case when score>=100 then 1 end) 
  5                  over (partition by player_id order by match_id) ge100,
  6                sum(score)
  7                  over (partition by player_id order by match_id
  8                        rows between unbounded preceding and 1 preceding)
  9                  sum_score,
 10                sum(minutes)
 11                  over (partition by player_id order by match_id
 12                        rows between unbounded preceding and 1 preceding)
 13                  sum_min,
 14                row_number () 
 15                  over (partition by player_id order by match_id) nb_match,
 16                count(case when score between 50 and 99 then 1 end) 
 17                 over (partition by player_id order by match_id) ok
 18         from t )
 19  where score between 50 and 99
 20    and ok = 1
 21  --  and ge100 > 0 -- to exclude those who never score >= 100
 22  order by player_id
 23  /
 PLAYER_ID      GE100  SUM_SCORE    SUM_MIN   NB_MATCH PLACE        MATCH_ID
---------- ---------- ---------- ---------- ---------- ---------- ----------
        11          2        355        290          5 H                  11
        12          1        257        256          4 I                  11
        13          1        226        387          3 D                  11
        14          0                                1 A                  11

4 rows selected.

Regards
Michel 

Tom Kyte
December 16, 2005 - 8:25 am UTC

just a pet peeve of mine. If I get about 1,300 - 1,500 of these every 4 weeks and half of them include a question and I take 1 minute to generate a test case.... it adds up.

all I ask is that the "askee" take the minute to actually run the test case and have it set up. I don't mind removing sqlplus prompts and such (that I can do in milliseconds by now :) so a cut and paste from sqlplus is fine - but scripts that could never have run... or don't create the data the example shows....

another problem

Ramis, December 16, 2005 - 3:56 am UTC

thanks Michel for your code above

I have encountered another problem: if a player has never made a score between 50 and 99, then he is not shown in the output of your query. I have also tried it with one of my other queries, but the problem remains.

Here are the changed data points. Notice player_id = 12: he never had a score between 50 and 99 but had two scores of over 100, yet he does not show up with either my code or yours.

INSERT INTO T VALUES (1,11,19,12);
INSERT INTO T VALUES (2,12,34,24);
INSERT INTO T VALUES (3,11,101,112);
INSERT INTO T VALUES (4,11,201,121);
INSERT INTO T VALUES (5,12,211,222);
INSERT INTO T VALUES (6,13,181,322);
INSERT INTO T VALUES (7,11,34,45);
INSERT INTO T VALUES (8,12,12,10);
INSERT INTO T VALUES (9,13,45,65);
INSERT INTO T VALUES (10,11,NULL,NULL);
INSERT INTO T VALUES (10,12,NULL,NULL);
INSERT INTO T VALUES (10,13,NULL,NULL);
INSERT INTO T VALUES (10,14,NULL,NULL);
INSERT INTO T VALUES (11,11,51,76);
INSERT INTO T VALUES (11,12,165,98);
INSERT INTO T VALUES (11,13,76,76);
INSERT INTO T VALUES (11,14,56,45);



 MATCH_ID  PLAYER_ID      SCORE    MINUTES
--------- ---------- ---------- ----------
        2         12         34         24
        3         11        101        112
        4         11        201        121
        5         12        211        222
        6         13        181        322
        7         11         34         45
        8         12         12         10
        9         13         45         65
       10         11
       10         12
       10         13
       10         14
       11         11         51         76
       11         12        165         98
       11         13         76         76
       11         14         56         45
        1         11         19         12



SELECT
T.PLAYER_ID PLAYER_ID,
COUNT(T.PLAYER_ID) MT,
SUM(T.SCORE) SCORE,
SUM(CASE WHEN SCORE >= 100 THEN 1 ELSE 0 END) HUNDREDS
FROM
T,
(SELECT MIN(MATCH_ID) MATCH_ID,
PLAYER_ID
FROM T
WHERE SCORE BETWEEN 50 AND 99
GROUP BY PLAYER_ID) T2
WHERE
T.PLAYER_ID = T2.PLAYER_ID
AND
T.MATCH_ID < T2.MATCH_ID
GROUP BY T.PLAYER_ID
ORDER BY 4 DESC NULLS LAST
/

 PLAYER_ID         MT      SCORE   HUNDREDS
---------- ---------- ---------- ----------
        11          5        355          2
        13          3        226          1
        14          1                     0


As you can see, player_id = 12 is not showing up because he has never made any score between 50 and 99.

So my desired output is:


 PLAYER_ID         MT      SCORE   HUNDREDS
---------- ---------- ---------- ----------
        11          5        355          2
        12          5        422          2
        13          3        226          1
        14          1                     0


Please make the necessary changes in my code and yours.
regards,



Another solution

Michel Cadot, December 16, 2005 - 11:44 am UTC

Tom,

I agree and understand. I also get irritated when there is no test case, and even more when there is an incorrect one.
But... I can't resist a SQL challenge, and I had nothing to do while waiting for my bus this morning (it was 6:30 AM in France). So...

Ramis,

It is actually not the same question (and not the same table; where is the DDL?).
Here's a query; you just have to check whether you are at the last row of the player without having encountered a 50-99 score:

SQL> select player_id, nb_match-ok nb_match,
  2         sum_score+decode(ok,0,score,0) sum_score,
  3         ge100
  4  from ( select player_id, match_id, score, 
  5                count(case when score>=100 then 1 end) 
  6                  over (partition by player_id order by match_id) ge100,
  7                sum(score)
  8                  over (partition by player_id order by match_id
  9                        rows between unbounded preceding and 1 preceding)
 10                  sum_score,
 11                row_number () 
 12                  over (partition by player_id order by match_id) nb_match,
 13                count(case when score between 50 and 99 then 1 end) 
 14                 over (partition by player_id order by match_id) ok,
 15                lead (player_id) 
 16                 over (partition by player_id order by match_id) next_player
 17         from t )
 18  where ( score between 50 and 99 and ok = 1 )
 19     or ( ok = 0 and next_player is null )
 20  order by player_id
 21  /
 PLAYER_ID   NB_MATCH  SUM_SCORE      GE100
---------- ---------- ---------- ----------
        11          5        355          2
        12          5        422          2
        13          3        226          1
        14          1                     0

4 rows selected.

Regards
Michel
 

Question about scalar query in ORACLE 10G

Ana, December 17, 2005 - 3:26 pm UTC

Hello, Tom,

Instead of the two scalar subqueries in

Select x1, x2, x3, (select y from t1 where z=x1),
       (select y1 from t1 where z=x1)
from t

can I use a single select, i.e. select y, y1 from t1 where z=x1, but without having to define object types?
Please show how.
Thank you,
Ana

Tom Kyte
December 17, 2005 - 4:32 pm UTC

select x1, x2, x3,
f'( substr( data, 1, n ) ) y,
f''(substr( data, n+1 ) ) y1
from (
select x1, x2, x3,
(select f(y) || y1 from t1 where z = x1) data
from t
)

where f(y) is something to make Y "fixed length" (eg: say Y was a DATE NOT NULL, then f(y) could be to_char(y,'yyyymmddhh24miss')

f'(x) converts the string back into what it was (eg: perhaps to_date(..,'yyyymmddhh24miss' ))

f''(x) converts the other part of the string back into whatever y1 was.
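A concrete sketch of that packing trick, assuming y is a DATE NOT NULL and y1 is a VARCHAR2 (t, t1, and the columns are as in the question):

```sql
select x1, x2, x3,
       to_date( substr( data, 1, 14 ), 'yyyymmddhh24miss' ) y,
       substr( data, 15 ) y1
  from (
select x1, x2, x3,
       (select to_char(y,'yyyymmddhh24miss') || y1
          from t1
         where z = x1) data
  from t
       );
```

Here f(y) is to_char with a fixed-width mask, so the first 14 characters can always be peeled back off into the date.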

EXTRACT STRING

A reader, December 20, 2005 - 11:46 am UTC

Hi Tom

I want to extract a string (a number) from a table column; the value starts with abcd: and is followed by the number. I want to insert it into another table.
How can I extract it?



Tom Kyte
December 20, 2005 - 12:59 pm UTC

substr() ?

not substr

A reader, December 20, 2005 - 1:01 pm UTC

I had tried substr(), but got a solution with instr().

Thanks
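For the record, a sketch of the usual substr/instr combination, assuming the column (here called col, in a table t, both hypothetical names) holds text like 'xx abcd:1234' and the number runs to the end of the string:

```sql
select substr( col, instr(col, 'abcd:') + 5 ) extracted
  from t
 where instr(col, 'abcd:') > 0;
```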


reader

A reader, December 29, 2005 - 3:17 pm UTC

HI Tom

I could not understand the answers to these queries. Is there a better way to solve these, or any other easy way?


1)
List the name of cities whose population decreased by more than
10% from the year 1975 to the year 2000.

select city
from CITY
where p2000 < 0.9 * p1975;


2)

List the id of all countries in which the median age of women is
at least 25% below the median age of men.

select p_id
from PEOPLE
where med_age_f <= 0.75 * med_age_m;


Tom Kyte
December 29, 2005 - 4:26 pm UTC

I really hope this is not coming from a "relational database class", because having a table with columns named "p2000" and "p1975" - meaning "population in 2000" - would be "bad".

where p2000 < 0.9 * p1975

assuming p2000 is the population in 2000, this just sort of says "give me the rows where the 2000 population is less than 90% of the 1975 population" - pretty straightforward....


same with the second one.

Default value insertion into a column from other Columns - How to achive at DDL level?

VIKAS SANGAR, December 30, 2005 - 5:43 am UTC

Dear Mr. Kyte,

Can you pls tell If / How this can be achieved in Sql at table creation level?

Suppose, I create a table as...

create table Names
(Fname varchar2(10)not null,
Mname varchar2(10),
Lname varchar2(10)not null,
Fullname varchar2(35));

Now, If I insert the values in the table Names

insert into Names (Fname, Mname, Lname)
values ('Leo', 'De', 'Nero');

1 row inserted

select * from Names;

FNAME MNAME LNAME FULLNAME
---------- ---------- ---------- -------------
Leo De Nero

Here, I want a default value to be automatically inserted into the "Fullname" column: the concatenation of the values from the other three columns "Fname", "Mname", "Lname", i.e. the default value of "Fullname" should be 'LeoDeNero'.

All I want to know is whether this can be achieved at table creation level by providing the default value of Fullname as above. If yes, how?

I tried giving the default value for Fullname as
Fullname varchar2(35) default (Fname||Mname||Lname)
at table creation time, but it failed with the error
ORA-00984: column not allowed here

I even tried using the "Concat" function, but my effort was in vain. Please help.

Take care, regards.
Vikas.






Tom Kyte
December 30, 2005 - 9:35 am UTC

No you don't, you want a view!!! Because you don't want to have to maintain that column during an update - update set mname = 'Del'; - now you have to reconstruct the full name.

You want:

create view my_view
as
select fname, mname, lname, fname || ' ' || mname || ' ' || lname fullname
from t;


And if searching on fullname is the goal - you can easily use Oracle Text to index fname, mname, lname, giving you case-insensitive, fuzzy, stem, wildcard, etc. searches, OR create a function-based index on fname || ' ' || mname || ' ' || lname.
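For instance, a sketch of the function-based index approach against the Names table above:

```sql
create index names_fullname_idx
    on names ( fname || ' ' || mname || ' ' || lname );

-- a query using the identical expression can then use the index
select *
  from names
 where fname || ' ' || mname || ' ' || lname = 'Leo De Nero';
```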



To the reader above

A reader, December 30, 2005 - 7:29 am UTC

Not sure it can be done at the table level.
Though you can try a before-insert trigger (an after-insert trigger cannot modify the row being inserted).
Let's see what Tom has to say


New Year Greetings...

Vikas, December 30, 2005 - 9:04 am UTC

Dear Mr. Kyte,

I know this is not the right Platform for posting this, but I am ready to take up your scoldings for this... :-)

May I take this Opportunity to...

Wish You, Your Team and Entire Oracle Fraternity

A
VERY HAPPY, PROSPEROUS, PEACEFUL
&
SUCCESS-FILLED NEW YEAR - 2006.

On My behalf and entire Oracle Community.

Take Care, Regards.

A reader, January 05, 2006 - 8:39 am UTC

Hi tom
I have data like

WILLIAM H FLANAGAN Jr
LINDA R SCOTT ESQ
WILLIAM BELL MARQUIS II

Now I want to extract it into different columns of another table:

if Jr, II or ESQ, then it goes into suffix; WILLIAM goes into first name; H and R go into middle name; and the rest goes into last name.

How will I do that?

Thanks in advance


Tom Kyte
January 05, 2006 - 10:55 am UTC

you'll parse it, process it, insert it.




tried it

A reader, January 05, 2006 - 11:04 am UTC

I have tried it based on the "space", but the problem is that if the middle name is more than one letter long, I want it in the last name, not the middle name.

Thanks

Tom Kyte
January 05, 2006 - 11:48 am UTC

sounds like an "if statement" would be needed then.

(this is an algorithm, you need to sit down and specify it out...)

I did something like this for city/state/zip parsing

Mark, January 10, 2006 - 3:29 pm UTC

You can start with this and modify it:


CREATE OR REPLACE FUNCTION split(
    in_expression IN VARCHAR2,
    in_delimiter  IN VARCHAR2
)
    RETURN typ_array
IS
    -- TYPE typ_array IS TABLE OF VARCHAR2 (4000);

    v_array          typ_array := typ_array(NULL);
    i_delim_pos      PLS_INTEGER;
    i_next_delim_pos PLS_INTEGER;
    i_start_pos      PLS_INTEGER;
    i_length         PLS_INTEGER;
    i_occurence      PLS_INTEGER;
    bln_done         BOOLEAN := FALSE;
    s_extract        VARCHAR2(4000);
    s_expression     VARCHAR2(4000) := in_delimiter || in_expression;
BEGIN
    i_length := v_array.COUNT;

    -- check for nulls...
    IF LENGTH(s_expression) IS NULL
       OR LENGTH(in_delimiter) IS NULL
    THEN
        RETURN v_array;
    END IF;

    IF INSTR(in_expression, in_delimiter) = 0
    THEN
        -- just return the expression
        v_array(1) := in_expression;
        RETURN v_array;
    END IF;

    -- get length of expression to split
    i_length := LENGTH(s_expression);
    -- initialize
    i_occurence := 1;
    i_start_pos := 1;

    WHILE bln_done = FALSE
    LOOP
        -- get position of occurence of delimiter
        i_delim_pos := INSTR(s_expression, in_delimiter, 1, i_occurence);
        -- get position of next occurence of delimiter
        i_next_delim_pos :=
            INSTR(s_expression, in_delimiter, 1, i_occurence + 1);

        -- if next delimiter position > 0 then extract substring between delimiters.
        -- if next delimiter position = 0, then extract until end of expression.
        IF i_next_delim_pos > 0
        THEN
            -- string to extract is from i_start_pos to i_delim_pos
            s_extract :=
                SUBSTR(s_expression,
                       i_delim_pos + 1,
                       i_next_delim_pos - i_delim_pos - 1);
            i_length := v_array.COUNT;
            v_array(i_occurence) := s_extract;
            v_array.EXTEND;
            i_length := v_array.COUNT;
            -- read it back into a var for debug
            s_extract := v_array(i_occurence);
            -- increment occurence value
            i_occurence := i_occurence + 1;
        ELSE
            IF i_delim_pos > 0
            THEN
                -- you have not found the second delimiter. Grab the remaining string
                s_extract := SUBSTR(s_expression, i_delim_pos + 1);
                i_length := v_array.COUNT;
                v_array(i_occurence) := s_extract;
                v_array.EXTEND;
                i_length := v_array.COUNT;
                -- set exit flag
            END IF;

            -- you're done either way
            bln_done := TRUE;
        END IF;
    END LOOP;

    -- chop off last added NULL array element.
    v_array.TRIM;
    -- debug. iterate through collection and check values
    /* i_length := v_array.COUNT;

    FOR i_count IN 1 .. v_array.COUNT
    LOOP
        s_extract := v_array (i_count);
    END LOOP;
    */
    RETURN v_array;
EXCEPTION
    WHEN OTHERS
    THEN
        raise_application_error(-20001, SQLERRM);
END;
/
CREATE OR REPLACE PROCEDURE parse_name(
in_name IN VARCHAR2
)
IS
v_loc_csz typ_array;
v_city VARCHAR2(100);
v_st VARCHAR2(6);
v_zip VARCHAR2(20);
v_count PLS_INTEGER;
v_temp VARCHAR2(100);
v_end PLS_INTEGER;
v_space VARCHAR2(1) := '';
BEGIN
-- parse in_name into components
v_temp := TRIM(in_name);
v_temp := REPLACE(v_temp, '  ', ' ');  -- collapse double spaces
SELECT split(v_temp, ' ')
INTO v_loc_csz
FROM dual;
v_count := v_loc_csz.COUNT;

IF v_count >= 4
THEN
/* sigh - happens when the city is TWO part name like 'CHESTNUT HILL'*/
v_zip := SUBSTR(TRIM(v_loc_csz(v_count)), 1, 5);
-- trim it, first 5
v_st := SUBSTR(TRIM(v_loc_csz(v_count - 1)), 1, 2);
v_end := v_count - 2;

FOR v_loop IN 1 .. v_end
LOOP
v_city := v_space || v_city || TRIM(v_loc_csz(v_loop));
v_space := ' ';
END LOOP;

v_city := TRIM(SUBSTR(v_city, 1, 30));
ELSIF v_count = 3
THEN
v_zip := SUBSTR(TRIM(v_loc_csz(v_count)), 1, 5);
-- trim it, first 5
v_st := SUBSTR(TRIM(v_loc_csz(v_count - 1)), 1, 2);
v_city := SUBSTR(TRIM(v_loc_csz(v_count - 2)), 1, 30);
ELSIF v_count = 2
THEN
v_st := SUBSTR(TRIM(v_loc_csz(v_count)), 1, 2);
v_city := SUBSTR(TRIM(v_loc_csz(v_count - 1)), 1, 30);
ELSIF v_count = 1
THEN
v_city := SUBSTR(TRIM(v_loc_csz(v_count)), 1, 30);
END IF;
/* And do Update/insert here
UPDATE ...
INSERT ...
*/



EXCEPTION
WHEN OTHERS
THEN
raise_application_error(-20001, SQLERRM);
END;


The Split() function acts like a simplified Split() VB function.


With a little work, this should do it.
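For reference, the behavior the function aims for (split on a delimiter; if the delimiter is absent, return the whole expression as the single element) can be sketched in Python. This mirrors the structure of the PL/SQL only, not Oracle collection semantics:

```python
def split(expression, delimiter):
    # delimiter absent: return the expression as a one-element list,
    # like the INSTR(...) = 0 branch in the PL/SQL version
    if delimiter not in expression:
        return [expression]
    return expression.split(delimiter)
```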


oops...forgot something

Mark, January 11, 2006 - 9:08 am UTC

typ_array def:

CREATE OR REPLACE TYPE typ_array AS TABLE OF VARCHAR2(4000);
/


Question about trim

A reader, February 17, 2006 - 9:17 am UTC

Hi Tom,

For a column in my table, the simple query
"select * from users where name='testuser'" is not working, but "select * from users where trim(name)='testuser'" is working.
But the "testuser" value did not contain any spaces when I inserted it. Can you let me know why this is happening?

Thanks for the same.


Tom Kyte
February 17, 2006 - 2:51 pm UTC

yes it did.

do this

select '"' || name || '"', dump(name) from users where trim(name) = 'testuser';


you'll see it then.
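What dump() typically reveals in this situation is a trailing blank (a stray space at insert time, or CHAR padding). A Python sketch of the same comparison failure, just to illustrate the effect:

```python
stored = 'testuser '  # what actually went into the column: note the trailing space

# equality fails, like  where name = 'testuser'
print(stored == 'testuser')

# stripping the blanks matches, like  where trim(name) = 'testuser'
print(stored.strip() == 'testuser')

# a poor man's dump(): character codes expose the trailing 32 (space)
print([ord(c) for c in stored])
```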


Pat, February 18, 2006 - 12:51 pm UTC

Excellent.

I have data which is to be exploded daily. The problem I am facing is when there is a tier row, like in table q1: for 2/18/2006 to 2/20/2006, vol from the q2 table should break as shown in the output.

Table q1
DSTART DEND VFROM VTO AMT
2/16/2006 2/17/2006 0 0 9
2/18/2006 2/20/2006 0 5000 5
2/18/2006 2/20/2006 5001 10000 2

Table q2
X1 X2 DSTART DEND VOL
k1 111 2/16/2006 2/20/2006 3000
k1 222 2/16/2006 2/20/2006 4000
w1 333 2/16/2006 2/20/2006 5000

I am wondering whether it is possible to write this in a single SQL statement. I coded it in PL/SQL, but that is not very efficient.

i want the output like.

X1 X2 DSTART DEND VOL AMT
K1 111 2/16/2006 2/16/2006 3000 9
K1 222 2/16/2006 2/16/2006 4000 9
W1 333 2/16/2006 2/16/2006 5000 9
K1 111 2/17/2006 2/17/2006 3000 9
K1 222 2/17/2006 2/17/2006 4000 9
W1 333 2/17/2006 2/17/2006 5000 9

K1 111 2/18/2006 2/18/2006 3000 5
K1 222 2/18/2006 2/18/2006 2000 5
k1 222 2/18/2006 2/18/2006 2000 2
w1 333 2/18/2006 2/18/2006 5000 2
K1 111 2/19/2006 2/19/2006 3000 5
K1 222 2/19/2006 2/19/2006 2000 5
k1 222 2/19/2006 2/19/2006 2000 2
w1 333 2/19/2006 2/19/2006 5000 2
K1 111 2/20/2006 2/20/2006 3000 5
K1 222 2/20/2006 2/20/2006 2000 5
k1 222 2/20/2006 2/20/2006 2000 2
w1 333 2/20/2006 2/20/2006 5000 2

Tom Kyte
February 18, 2006 - 4:48 pm UTC

cool?

terminology is not one I understand (a tier row?)


I mean, you have some interesting looking data there, at a glance - don't see any way to "put it together" and get your output (procedurally or otherwise - the LOGIC appears to be *missing*)


and you missed the bit that says, "if you want anyone to even try to give you a query that might require a bit of testing, you best supply table creates and inserts"

sorry - but missing the algorithm, the logic, the create, the inserts....

(and that is not a promise that should they be supplied I'll have an answer..)

pat, February 18, 2006 - 12:58 pm UTC

sorry i forgot to type the sql,

create table q1
(dstart date,dend date,vfrom number,vto number,amt number);

create table q2
(x1 char(5),x2 number, dstart date,dend date,vol number);

insert into q1 ( dstart, dend, vfrom, vto, amt ) values (
to_date( '02/16/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am'), to_date( '02/17/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am')
, 0, 0, 9);
insert into q1 ( dstart, dend, vfrom, vto, amt ) values (
to_date( '02/18/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am'), to_date( '02/20/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am')
, 0, 5000, 5);
insert into q1 ( dstart, dend, vfrom, vto, amt ) values (
to_date( '02/18/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am'), to_date( '02/20/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am')
, 5001, 10000, 2);
commit;

insert into q2 ( x1, x2, dstart, dend, vol ) values (
'k1 ', 111, to_date( '02/16/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am'), to_date( '02/20/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am')
, 3000);
insert into q2 ( x1, x2, dstart, dend, vol ) values (
'k1 ', 222, to_date( '02/16/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am'), to_date( '02/20/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am')
, 4000);
insert into q2 ( x1, x2, dstart, dend, vol ) values (
'w1 ', 333, to_date( '02/16/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am'), to_date( '02/20/2006 12:00:00 am', 'mm/dd/yyyy hh:mi:ss am')
, 5000);
commit;

Tom Kyte
February 18, 2006 - 4:48 pm UTC

and still don't know what it all "means"

pat, February 19, 2006 - 12:23 am UTC

oops sorry and thanks for quick response.

data is to be exploded daily between min(q1.dstart) and max(q1.dend)

when q1.vfrom = q1.vto, the output gets q2.vol.
For 2/16 the data looks like:
X1 X2 DSTART DEND VOL AMT
K1 111 2/16/2006 2/16/2006 3000 9
K1 222 2/16/2006 2/16/2006 4000 9
W1 333 2/16/2006 2/16/2006 5000 9

there is no problem getting the above output; the problem I am facing is
when q1.dstart = q1.dstart(+1) and q1.dend = q1.dend(+1): then
q2.vol should split according to q1.vfrom and q1.vto.

for 2/18 - total q1.vfrom/vto has 10000 with different amt.
and total q2.vol has 12000.

first q2.vol 3000 falls between 0 to 5000 with amt 5.
now q1.vfrom/vto 0 - 5000 left with 2000.

second q2.vol 4000 should split from the remaining q1.vfrom/vto 0 to 5000, which is 2000 (as 3000 is used by the first row's q2.vol 3000) with amt 5,
and then q2.vol remaining 2000 out of 4000 falls into q1.vfrom/vto 5001 to 10000 with amt 2.

third q2.vol 5000 falls and exceeds q1.vfrom/vto 5001 to 10000 with amt 2.
like this.
X1 X2 DSTART DEND VOL AMT
K1 111 2/18/2006 2/18/2006 3000 5
K1 222 2/18/2006 2/18/2006 2000 5
k1 222 2/18/2006 2/18/2006 2000 2
w1 333 2/18/2006 2/18/2006 5000 2



Response for the space in a column in the Table

A reader, February 19, 2006 - 10:34 pm UTC

Hi Tom,

I did find the spaces in the data for the "name" column. Thanks a lot for the response.


Pati, February 20, 2006 - 3:22 pm UTC

Hello Tom,
Can you please help me with this query?

I have a contract with client that
first 0 - 5000 qty sell at $8
and next 5001 - 10000 qty sell at $7

I got orders for 3000, 4000, and 6000

as per the contract
first order for 3000 is priced at $8
second order 4000 should split into two
2000 is priced at $8
and 2000 is priced $7
and last order for 6000 is priced $7

create table x1
(qfrom number,qto number,amt number);

create table x2
(vol number);

insert into x1 ( qfrom, qto, amt ) values (
0, 5000, 8);
insert into x1 ( qfrom, qto, amt ) values (
5001, 10000, 7);
insert into x2 ( vol ) values ( 3000);
insert into x2 ( vol ) values ( 4000);
insert into x2 ( vol ) values (5000);

select * from x1;
qfrom qto amt
0 5000 8
5001 10000 7

select * from x2;
3000
4000
6000

output i want looks like
vol amt
3000 8
2000 8
2000 7
6000 7

Is it possible to write this in SQL, or do I have to code it in PL/SQL?

Thanks

Tom Kyte
February 21, 2006 - 7:22 am UTC

couple of problems - don't know how you got a vol of 6000 on the last line.  You switched from 5000 being inserted to 6000 being selected.  I chose 5000 as the right answer.

second, rows have no "order" in tables - so, to say you have orders 3000, 4000, 5000 - without anything to "order them by" makes this ambiguous. Hopefully you have a column to really order by (I used rowid - meaning different people could get different answers from my query as the data is sorted randomly)

ops$tkyte@ORA9IR2> with data1
  2  as
  3  (
  4  select qfrom, case when next_qto is null then 99999 else qto end qto, amt
  5    from (
  6  select qfrom, qto, lead(qto) over (order by qfrom) next_qto, amt
  7    from x1
  8         )
  9  ),
 10  data2
 11  as
 12  (
 13  select vol,
 14             nvl(lag(sum_vol) over (order by rowid)+1,1) lo_vol,
 15         sum_vol hi_vol
 16    from (
 17  select vol, sum(vol) over (order by rowid) sum_vol
 18    from x2
 19         )
 20  )
 21  select qfrom, qto, amt,
 22         least( hi_vol, qto )-greatest( lo_vol, qfrom ) +1 vol
 23    from data1, data2
 24   where lo_vol between qfrom and qto
 25      or hi_vol between qfrom and qto
 26   order by lo_vol
 27  /

     QFROM        QTO        AMT        VOL
---------- ---------- ---------- ----------
         0       5000          8       3000
         0       5000          8       2000
      5001      99999          7       2000
      5001      99999          7       5000

 

Execution Order of Where Clause

Nikhilesh, February 21, 2006 - 12:44 am UTC

Dear Tom,
Could you please have a look..........
*********************************************************
SQL*Plus: Release 9.2.0.1.0 - Production on Tue Feb 21 10:48:07 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

10:48:15 vod@VODP> select * from test;

NAME                           JOIN_DETAIL
------------------------------ -----------------------------
Ravindra                       TEMP12-JAN-2005
Rajendra                       A16-FEB-2005
VILAS                          A6-FEB-2004
RAMESH                         COST6-JUN-2004

Elapsed: 00:00:00.03
10:48:26 vod@VODP> select name,substr(join_detail,2)
from test
WHERE
substr(join_detail,1,1)='A'
AND to_date(substr(join_detail,2),'DD-MON-RRRR')=to_date('06-FEB-2004','DD-MON-RRRR');

AND to_date(substr(join_detail,2),'DD-MON-RRRR')=to_date('06-FEB-2004','DD-MON-RRRR')
*
ERROR at line 5:
ORA-01858: a non-numeric character was found where a numeric was expected


Elapsed: 00:00:00.03
10:48:45 vod@VODP> select name,substr(join_detail,2)
from (select * from test where substr(join_detail,1,1)='A')
WHERE
to_date(substr(join_detail,2),'DD-MON-RRRR')=to_date('06-FEB-2004','DD-MON-RRRR');


NAME                           SUBSTR(JOIN_DETAIL,2)
------------------------------ -----------------------------
VILAS                          6-FEB-2004

**********************************************************

Why doesn't the optimizer evaluate the where clause in an order such that the query won't fail?

Tom Kyte
February 21, 2006 - 7:45 am UTC

you are assuming some order of predicate evaluation.

But there isn't one. SQL is non-procedural. It is free to evaluate that predicate in any order it chooses.


This stuff happens each and every time someone uses a string to store a date or a number. Ugh. You do it to yourself.

In this case, this is "easy",


where decode( substr( join_detail,1,1),
              'A', to_date(substr(join_detail,2),'DD-MON-RRRR'),
              NULL ) = to_date( ..... );


use DECODE or CASE when you do these conversions.


Actually, STOP PUTTING DATES and NUMBERS IN STRINGS.
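The same guarded-conversion idea applies in any language: when a column mixes types, convert only behind a test that proves the conversion is safe. A hypothetical Python analogue of the DECODE/CASE guard (the leading-'A' convention and the date format are taken from the example above):

```python
from datetime import datetime

def guarded_date(value):
    # convert only values whose first character marks them as dates;
    # everything else never reaches the conversion at all
    if value.startswith('A'):
        return datetime.strptime(value[1:], '%d-%b-%Y')
    return None

print(guarded_date('A6-FEB-2004'))       # converts safely
print(guarded_date('TEMP12-JAN-2005'))   # skipped, no conversion error
```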


Can this query be refined

Anne, February 22, 2006 - 10:00 am UTC

Hi Tom,

I would appreciate your advise on whether my query can be further refined/optimized.

create table run_history
(
run_id number(15) not null,
run_date date not null,
end_control_id number(15) not null,
start_control_id number (15)
)

--Master tables for controls
create table control_history
(
control_id number(15) not null,
create_date date not null,
comments varchar2(50)
)

I have a proc that runs for a set of control_ids and, when done, inserts the last processed control_id into the run_history table. When it runs next, it takes the max(end_control_id) from the run_history table and starts processing from the "next" control_id in the control_history table.
Objective :
Query to select the max(end_control_id) from run_history table and the "next"
control_id from the control_history table as one record.

select *
from
(
select
prev_control.*
, ch.*
from control_history ch
, ( select
*
from (
select rh.*
from run_history rh
order by run_id desc
)
where rownum = 1
) prev_control
where ch.control_id > prev_control.end_control_id
order by ch.control_id asc
)
where rownum = 1
;

Appreciate your help. Thanks!



Tom Kyte
February 22, 2006 - 10:29 am UTC

why not


select *
from (select *
from ch
where control_id > (select max(end_control_id) from run_history)
order by control_id )
where rownum = 1;


the max subquery should do a min/max scan on an index on end_control_id in run_history.
then the inline view should index range scan an index on ch.control_id
and rownum will return just the first record.

A reader, February 22, 2006 - 12:54 pm UTC


Can this query be refined

Anne, February 23, 2006 - 3:55 pm UTC

Tom, thanks for your follow up.

select *
from (select *
from ch
where control_id > (select max(end_control_id) from run_history)
order by control_id )
where rownum = 1;

Your query is great; however, it only returns the record for the "next" control_id in control_history. Maybe I did not explain it well: I need to select the latest control_id from run_history along with the "next" control_id record, basically in the same row. I hope this helps. Appreciate your input.
Thanks!

Tom Kyte
February 23, 2006 - 7:52 pm UTC

that's what happens when someone doesn't indent like you do and you have no example (table creates, inserts to play with) ;)

(I must say - nothing personal - but your indenting and line breaks are "unique")




since you need them both (from both tables), then your join looks pretty good.

get first row from rh. use that to find first row in other table.

Might use first rows 1 optimization but other than that - it does what it says.


My indentation would look like this:

select *
  from ( select prev_control.*, ch.*
           from control_history ch,
                ( select *
                    from ( select rh.*
                             from run_history rh
                            order by run_id desc
                         )
                   where rownum = 1
                ) prev_control
          where ch.control_id > prev_control.end_control_id
          order by ch.control_id asc
       )
 where rownum = 1
;



You'll want to comment that query nicely :)

Can this query be refined

Anne, February 24, 2006 - 11:49 am UTC

Hi Tom, thanks for your feeback. I know, don't know how I got into this "unique" indentation! I certainly like yours too and maybe I should change mine. Appreciate your comments. Thanks!

Tom Kyte
February 24, 2006 - 11:57 am UTC

this is all opinion remember - pretty printers can always fix this stuff for anyone that needs to see it "their way"...


I've seen the leading comma approach lots - I understand why (you can comment out a column easily that way) but I've never liked it myself.

for some reason, this chunk:

select
prev_control.*
, ch.*
from control_history ch
, ( select
*
from (
select rh.*
from run_history rh
order by run_id desc
)
where rownum = 1
) prev_control


just doesn't read right to me - there is something about it that I just cannot "see it" for some reason.

I literally had to reformat that specific bit to understand it. Strange.

Query format

Michel Cadot, February 24, 2006 - 4:48 pm UTC

Not so strange, I often do the same when I receive a query to optimize. I have to reformat it to understand what it does and how it does it.

Regards
Michel


OK

A reader, March 03, 2006 - 2:54 am UTC

Hi Tom,
I would like to get the row having the minimum value of deptno after grouping.

I don't want to use a *WHERE CLAUSE OR HAVING CLAUSE*.
But this is not working...

Any other idea??


SQL> select min(deptno),sum(sal)
  2  from emp
  3  group by deptno
  4  /

MIN(DEPTNO)   SUM(SAL)
----------- ----------
         10      30000
         20      37000
         30      38000

3 rows selected. 

Tom Kyte
March 03, 2006 - 8:15 am UTC

Well, we could do it, but it would be rather silly.

scott@ORA10GR2> select max(decode(deptno,min_deptno,deptno)) deptno,
  2         max(decode(deptno,min_deptno,sal)) sal
  3    from (
  4  select deptno, sum(sal) sal ,
  5         min(deptno) over () min_deptno
  6    from emp
  7   group by deptno
  8         )
  9  /

    DEPTNO        SAL
---------- ----------
        10       8750


please don't do that however, use the tools properly.

SQL query using analytics (How to get rid of this problem..)

Atavur, March 03, 2006 - 11:41 am UTC

Tom, 

I believe you will help me out in getting this solved.

I have a serious problem; I hope you can understand it.

We have three different tables

Table #1. QI (Quote Items)
Table #2. QS (Quote Schedules)
Table #3. PRS (PR Schedule)

I am currently working on 10g database and here i am giving the description..

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - 64bit Production With the Partitioning, OLAP and Data Mining options

SQL> desc QI
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 QI_SEQ                                    NOT NULL NUMBER(10)
 ITEM_NBR                                  NOT NULL VARCHAR2(4)
 QUOTE_NBR                                 NOT NULL VARCHAR2(10)
 DATE_EFFECTIVE                                     DATE
 VENDOR_LT                                          NUMBER(5)
 DATE_EXP                                           DATE
 AWARD_FLAG                                NOT NULL VARCHAR2(1)
 VERBAL_FLAG                               NOT NULL VARCHAR2(1)
 PART                                               VARCHAR2(32)
 ACCT                                               VARCHAR2(16)
 WO_NBR                                             VARCHAR2(10)
 DESCRIPTION                               NOT NULL VARCHAR2(50)
 PART_REV                                           VARCHAR2(2)
 MFR_PART                                           VARCHAR2(32)
 UM                                                 VARCHAR2(2)
 PURCH_UM                                           VARCHAR2(2)
 UM_CONV_TYPE                                       VARCHAR2(1)
 UM_PURCH_CONV                                      NUMBER(7,2)
 COMMOD_CODE                                        VARCHAR2(4)
 INSP_FLAG                                 NOT NULL VARCHAR2(1)
 INSP_CODE                                          VARCHAR2(1)
 CERT_FLAG                                 NOT NULL VARCHAR2(1)
 PROJECT                                            VARCHAR2(20)
 CONTRACT                                           VARCHAR2(26)
 PRIORITY_RATING                                    VARCHAR2(10)
 WO_OPN                                             VARCHAR2(4)
 SHIP_VIA                                           VARCHAR2(4)
 FOB                                                VARCHAR2(16)
 FREIGHT_CODE                                       VARCHAR2(1)
 PRI_SEQ                                            NUMBER(10)
 REQUESTOR                                          VARCHAR2(30)
 TOOL_CODE                                          VARCHAR2(1)
 SRC_INSP_CODE                                      VARCHAR2(1)
 MATL_CODE                                          VARCHAR2(1)
 COMMENTS                                           VARCHAR2(2000)
 CREATED_BY                                NOT NULL VARCHAR2(30)
 DATE_CREATED                              NOT NULL DATE
 MODIFIED_BY                                        VARCHAR2(30)
 DATE_MODIFIED                                      DATE
 VAT_CODE                                           NUMBER(2)
 USER_01                                            VARCHAR2(50)
 USER_02                                            VARCHAR2(50)
 USER_03                                            VARCHAR2(50)
 USER_04                                            VARCHAR2(50)
 USER_05                                            VARCHAR2(50)
 USER_06                                            VARCHAR2(50)
 USER_07                                            VARCHAR2(50)
 USER_08                                            VARCHAR2(50)
 USER_09                                            VARCHAR2(50)
 USER_10                                            VARCHAR2(50)
 USER_11                                            VARCHAR2(50)
 USER_12                                            VARCHAR2(50)
 USER_13                                            VARCHAR2(50)
 USER_14                                            VARCHAR2(50)
 USER_15                                            VARCHAR2(50)

SQL> desc QS
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 QS_SEQ                                    NOT NULL NUMBER(10)
 QI_SEQ                                    NOT NULL NUMBER(10)
 DATE_DELV                                 NOT NULL DATE
 DATE_PROMISED                                      DATE
 QTY_REQD                                  NOT NULL NUMBER(12,3)
 QTY_PROMISED                                       NUMBER(12,3)
 CREATED_BY                                NOT NULL VARCHAR2(30)
 DATE_CREATED                              NOT NULL DATE
 MODIFIED_BY                                        VARCHAR2(30)
 DATE_MODIFIED                                      DATE
 QUOTE_NBR                                 NOT NULL VARCHAR2(10)
 QUOTE_ITEM                                NOT NULL VARCHAR2(4)
 USER_01                                            VARCHAR2(50)
 USER_02                                            VARCHAR2(50)
 USER_03                                            VARCHAR2(50)
 USER_04                                            VARCHAR2(50)
 USER_05                                            VARCHAR2(50)
 USER_06                                            VARCHAR2(50)
 USER_07                                            VARCHAR2(50)
 USER_08                                            VARCHAR2(50)
 USER_09                                            VARCHAR2(50)
 USER_10                                            VARCHAR2(50)
 USER_11                                            VARCHAR2(50)
 USER_12                                            VARCHAR2(50)
 USER_13                                            VARCHAR2(50)
 USER_14                                            VARCHAR2(50)
 USER_15                                            VARCHAR2(50)

SQL> desc PRS
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PRS_SEQ                                   NOT NULL NUMBER(10)
 PRI_SEQ                                   NOT NULL NUMBER(10)
 REQ_NBR                                            VARCHAR2(10)
 REQ_ITEM                                           VARCHAR2(4)
 DATE_REQD                                 NOT NULL DATE
 QTY                                       NOT NULL NUMBER(12,3)
 DATE_PLACE                                NOT NULL DATE
 HAS_PO                                             VARCHAR2(1)
 PO_NBR                                             VARCHAR2(10)
 PO_ITEM                                            VARCHAR2(4)
 COMMENTS                                           VARCHAR2(2000)
 CREATED_BY                                NOT NULL VARCHAR2(30)
 DATE_CREATED                              NOT NULL DATE
 MODIFIED_BY                                        VARCHAR2(30)
 DATE_MODIFIED                                      DATE
 USER_01                                            VARCHAR2(50)
 USER_02                                            VARCHAR2(50)
 USER_03                                            VARCHAR2(50)
 USER_04                                            VARCHAR2(50)
 USER_05                                            VARCHAR2(50)
 USER_06                                            VARCHAR2(50)
 USER_07                                            VARCHAR2(50)
 USER_08                                            VARCHAR2(50)
 USER_09                                            VARCHAR2(50)
 USER_10                                            VARCHAR2(50)
 USER_11                                            VARCHAR2(50)
 USER_12                                            VARCHAR2(50)
 USER_13                                            VARCHAR2(50)
 USER_14                                            VARCHAR2(50)
 USER_15                                            VARCHAR2(50)
 DATE_PLACE_ACT                                     DATE


SQL> SELECT qi_seq,item_nbr,quote_nbr,pri_seq FROM qi WHERE quote_nbr = '1108' AND item_nbr = '0001';

    QI_SEQ ITEM QUOTE_NBR     PRI_SEQ
---------- ---- ---------- ----------
      1203 0001 1108             1039

SQL> SELECT qs_seq,qi_seq,quote_nbr,quote_item,date_delv,date_promised FROM qs WHERE quote_nbr = '1108' AND quote_item = '0001';

    QS_SEQ     QI_SEQ QUOTE_NBR  QUOT DATE_DELV DATE_PROM
---------- ---------- ---------- ---- --------- ---------
      1732       1203 1108       0001 08-MAR-06 08-MAR-06
      1734       1203 1108       0001 18-APR-06 18-APR-06
      1733       1203 1108       0001 18-MAY-06 25-MAY-06
      1735       1203 1108       0001 16-JUN-06 20-JUN-06

SQL> SELECT prs_seq,pri_seq,req_nbr,qty FROM prs WHERE pri_seq = 1039;  -- QI.pri_seq from the query above

   PRS_SEQ    PRI_SEQ REQ_NBR           QTY
---------- ---------- ---------- ----------
      1312       1039 995                50
      1313       1039 995                40
      1314       1039 995                60
      1315       1039 995                15

SQL> SELECT ps.prs_seq,qi.pri_seq,ps.req_nbr,
               qs.qs_seq,qs.qi_seq ,qs.date_delv,qs.date_promised,
            qs.qty_reqd,qs.qty_promised,qs.quote_nbr,qs.quote_item
       FROM QS qs, quote_items qi, pr_schedules ps
      WHERE qs.quote_nbr  = '1108'  -- Same quote number used above.
        AND qs.quote_item = '0001'  -- Same quote item used above.
        AND qs.quote_item=qi.item_nbr
        AND qs.quote_nbr=qi.quote_nbr
        AND qi.pri_seq=ps.pri_seq(+);


I am getting a peculiar result where it multiplies the actual rows (four) by themselves, giving 16 rows. If there are 3 rows, the result has 9 rows.

Result of the above query (Just taken few columns from the above query mentioned due to format issue)
----------------------------------------------------------------------------------------------------
PRS_SEQ  PRI_SEQ  REQ_NBR  QS_SEQ  QI_SEQ  QUOTE_NBR  QUOTE_ITEM
1312     1039     995      1732    1203    1108       0001
1312     1039     995      1734    1203    1108       0001
1312     1039     995      1733    1203    1108       0001
1312     1039     995      1735    1203    1108       0001

1313     1039     995      1732    1203    1108       0001
1313     1039     995      1734    1203    1108       0001
1313     1039     995      1733    1203    1108       0001
1313     1039     995      1735    1203    1108       0001

1314     1039     995      1732    1203    1108       0001
1314     1039     995      1734    1203    1108       0001
1314     1039     995      1733    1203    1108       0001
1314     1039     995      1735    1203    1108       0001

1315     1039     995      1732    1203    1108       0001
1315     1039     995      1734    1203    1108       0001
1315     1039     995      1733    1203    1108       0001
1315     1039     995      1735    1203    1108       0001

How to get the result which should give like the below ones?
------------------------------------------------------------
PRS_SEQ  PRI_SEQ  REQ_NBR  QS_SEQ  QI_SEQ  QUOTE_NBR  QUOTE_ITEM
1312     1039     995      1732    1203    1108       0001
1313     1039     995      1734    1203    1108       0001
1314     1039     995      1733    1203    1108       0001
1315     1039     995      1735    1203    1108       0001


Tom, Please help me out ...

 

Tom Kyte
March 03, 2006 - 2:02 pm UTC

no. I won't even look at stuff with descs and selects from a table. It sort of says that on the page you used to put all of this stuff that I won't look at here.

I'm sure any example you will create will have far far fewer columns, just the ones you need.

and will include an actual text description of the problem you are trying to solve - not just show queries that do not actually retrieve what you want. That is, "the question", not the failed answer.

A reader, March 04, 2006 - 2:20 am UTC

Tom,


As per my result above, the query I am looking for should filter only those rows which have a matching quote number and quote item in the QI and QS tables:

QI.QUOTE_NBR = QS.QUOTE_NBR AND
QI.ITEM_NBR = QS.QUOTE_ITEM

and also get the corresponding PRS_seq from PRS table if PRS table has matching rows( qi.pri_seq = prs.pri_seq(+)).

I tried the query below, but it gives a wrong result when QI has a single row while QS and PRS both have multiple rows.

SELECT qs.*,prs.pri_seq,prs.req_nbr
FROM qs, qi, prs
WHERE qs.quote_nbr = qi.quote_nbr
and qs.quote_item = qi.item_nbr
and qi.pri_seq = prs.pri_seq(+);



Hope this gives a clearer picture.

Tom Kyte
March 04, 2006 - 7:11 am UTC

I do not believe you actually read my response above.

ok

Raj, March 04, 2006 - 6:32 am UTC

Hi Tom,

How do I TRANSPOSE the rows?

SQL> select deptno,max(sal)
  2  from emp
  3  group by deptno
  4  /

    DEPTNO   MAX(SAL)
---------- ----------
        10       5000
        20       3000
        30       2850

3 rows selected. 

Tom Kyte
March 04, 2006 - 7:12 am UTC

search this site for PIVOT

many examples.
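For instance, a decode-style pivot of the result above might look like this (a sketch; the DEPTNO values 10, 20 and 30 are hard-coded, which a static pivot requires):

```sql
-- Turn the three DEPTNO rows into one row of columns.
-- The DEPTNO values (10, 20, 30) must be known in advance.
select max(decode(deptno, 10, max_sal)) dept_10,
       max(decode(deptno, 20, max_sal)) dept_20,
       max(decode(deptno, 30, max_sal)) dept_30
  from (select deptno, max(sal) max_sal
          from emp
         group by deptno);
```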

ok

A reader, March 05, 2006 - 10:06 am UTC

Hello,
Any other way to put this query??

This query gets the last row of the table EMP..


SQL> with a as (select rownum rn,e.* from emp e),
  2       b as (select max(rn) as rn from a)
  3  select * from a,b where a.rn = b.rn
  4  /

        RN      EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- ----------
        14       7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

1 row selected. 

Tom Kyte
March 05, 2006 - 1:50 pm UTC

there is no such thing as the last row of emp, you are getting a random row from emp.

period.

there is no first row, middle row, last row.

that gets a random row.


define to us what a "last row" is exactly.

OK

A reader, March 06, 2006 - 8:09 am UTC

Hi Tom,
Thanks for your reply.
Just assume the EMP table is static (it never undergoes any inserts or deletes).

"Last row" in the sense of the row that was inserted last.

I want to pick that.

Do you have any timestamp-related information for
any row inserts??
If so, we can use max(timestamp).

Please do reply.
Bye




Tom Kyte
March 08, 2006 - 3:52 pm UTC

last row inserted is not the last row.

There is no way to get the last inserted row.

read "the last row" in this really old article of mine:

</code> https://www.oracle.com/technetwork/issue-archive/2011/11-nov/o61asktom-512015.html <code>

multiple inserts

Reader, March 06, 2006 - 11:30 am UTC

Hello Tom,

I have a table holiday
desc
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STD_DATE                                  NOT NULL VARCHAR2(11)
 HOLIDAY_NAME                              NOT NULL VARCHAR2(64)

I need to update this table only once a year with a list of holidays (about 15). I also have to do these inserts in some other database.

Would you please provide a hint on how to write these multiple insert statements inside a procedure? Thanks.

insert into holiday values('holiday_name_1','01-jan-2005');
insert into holiday values('holiday_name_2','02-feb-2006');
insert into holiday values('holiday_name_3','05-mar-2006');
insert into holiday values('holiday_name_4','05-apr-2006');
insert into holiday values('holiday_name_5','06-may-2006');

commit;



Tom Kyte
March 08, 2006 - 4:06 pm UTC

begin
... code you have
end;

??

but you have a big big big big problem - you seem to be using a string for a date? that is about the wrongest thing you can do ever. don't do that.
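A sketch of the corrected approach, reusing the holiday example from above but with a proper DATE column and explicit TO_DATE conversions:

```sql
-- STD_DATE as a real DATE, not a VARCHAR2.
create table holiday
( std_date      date          not null,
  holiday_name  varchar2(64)  not null
);

begin
    insert into holiday values ( to_date('01-jan-2005','dd-mon-yyyy'), 'holiday_name_1' );
    insert into holiday values ( to_date('02-feb-2006','dd-mon-yyyy'), 'holiday_name_2' );
    -- ... remaining holidays ...
    commit;
end;
/
```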

Multiples Insert

Alf, March 07, 2006 - 10:22 am UTC

Hi Tom,

Probably you'd have answered this somewhere on this site; would you please point out the right keyword to search for on wrapping multiple inserts as in the above case? Thanks.



query result format

Lily, March 07, 2006 - 3:50 pm UTC

Tom, 
please help with changing query result format in oracle 9.2.0.4 as described below:


SQL> create table tt (txt varchar2(10));

Table created.

SQL> insert into tt values ('AA');

1 row created.

SQL> insert into tt values ('BB');

1 row created.

SQL> insert into tt values ('CC');

1 row created.

SQL> commit;

Commit complete.

SQL> desc tt
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TXT                                                VARCHAR2(10)

SQL> select * from tt;

TXT
----------
AA
BB
CC

Now we want to get the 3 returned rows as one line (no space between rows), like this:

AABBCC

Is there single SQL statement can handle it or Oracle has some build in function we may utilize?

Thanks.

 

Tom Kyte
March 09, 2006 - 11:17 am UTC

search this site for pivot

from china!!!

gaozhiwen, March 08, 2006 - 12:34 am UTC

There are many, many methods; below is one of them!

select max(txt1) || max(txt2) || max(txt3)
  from (select decode(rn, 1, txt, '') txt1,
               decode(rn, 2, txt, '') txt2,
               decode(rn, 3, txt, '') txt3
          from (select row_number() over(order by txt) rn, txt from tt))

Great answer.

A reader, March 08, 2006 - 11:27 am UTC

Many thanks to gaozhiwen and Tom.

Lily

Multiple Inserts

Alf, March 09, 2006 - 11:24 am UTC

Thanks Tom,

In the above example the tables are already created. However, what would be the best method to correct this problem?
I was thinking of adding the to_date( '02-16-2006', 'mm-dd-yyyy') in the inserts, but then the problem would still remain in the base table, wouldn't it?

begin
 insert statements
 INSERT INTO hhc_custom.hhc_holiday_cfg VALUES(to_date('02-JAN-2006','dd-mon-yyyy'),'NEW YEAR''S DAY');
 ....
commit;
end;

SQL> desc hhc_custom.hhc_holiday_cfg
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STD_DATE                                  NOT NULL VARCHAR2(11)
 HOLIDAY_NAME                              NOT NULL VARCHAR2(64)
 

Tom Kyte
March 09, 2006 - 3:17 pm UTC

I don't know what the problem you are trying to solve or hitting is exactly?

of course the table must exist before the inserts are submitted.

Multiple Insert

Alf, March 10, 2006 - 11:17 am UTC

Tom,

In your followup to the initial "how to wrap multiple inserts" question, you mentioned there is something wrong with using a string for dates.


insert into holiday values('01-jan-2005','holiday_name_1');
...

holiday
=======
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STD_DATE                                  NOT NULL VARCHAR2(11)
 HOLIDAY_NAME                              NOT NULL VARCHAR2(64)

your initial followup:
Followup:

begin
... code you have
end;

??

but you have a big big big big problem - you seem to be using a string for a
date? that is about the wrongest thing you can do ever. don't do that.

Kindly advise how to correct this problem, thanks.

Tom Kyte
March 10, 2006 - 8:22 pm UTC

explicitly convert said string into a date so that nothing can ever go wrong with it.


to_date( '01-jan-2005', 'dd-mon-yyyy' )

that is a date

'01-jan-2005' is a string just like 'aa-bbb-cccc' is

You want to control the format mask when you are supplying a string like that.

Multiple Inserts

Alf, March 13, 2006 - 11:32 am UTC

Thanks Tom, for your great help!



Rish Gupta, March 15, 2006 - 12:04 pm UTC

Tom,
 Here is a table that I have 
create table rate_test(
ndc number,
eff_date date,
end_date date,
rebate number,
icn varchar2(5))

insert into rate_test
values(1, '01-DEC-05', '31-DEC-05', 5, '101');
insert into rate_test
values(1, '01-NOV-05', '30-NOV-05', 0, '102');
insert into rate_test
values(1, '01-OCT-05', '31-OCT-05', 5, '103');
insert into rate_test
values(2, '01-OCT-05', '31-OCT-05', 5, '104');
insert into rate_test
values(3, '01-DEC-05', '31-DEC-05', 0, '106');
insert into rate_test
values(2, '01-DEC-05', '31-DEC-05', 0, '105');
insert into rate_test
values(3, '01-NOV-05', '30-NOV-05', 5, '107');
insert into rate_test
values(4, '01-OCT-05', '31-OCT-05', 5, '108');
insert into rate_test
values(5, '01-DEC-05', '31-DEC-05', 0, '109');

SQL> select * from rate_test
  2  /

       NDC EFF_DATE  END_DATE      REBATE ICN
---------- --------- --------- ---------- -----
         1 01-DEC-05 31-DEC-05          5 101
         1 01-NOV-05 30-NOV-05          0 102
         1 01-OCT-05 31-OCT-05          5 103
         2 01-OCT-05 31-OCT-05          5 104
         3 01-DEC-05 31-DEC-05          0 106
         2 01-DEC-05 31-DEC-05          0 105
         3 01-NOV-05 30-NOV-05          5 107
         4 01-OCT-05 31-OCT-05          5 108
         5 01-DEC-05 31-DEC-05          0 109

What I want to do is to select the rows with eff_date = 01-DEC-2005 AND END_DATE = 31-DEC-2005. Simple so far. However, if the rebate is 0, then it should look at the previous month to determine if that rebate is 0. If it is not zero, that row should be retrieved; else it should look back another month. In cases where the rate is 0 for the preceding months, or the ndc has no record for the previous months of November or October, then it should return the current month's data, i.e. for the month of December.
In short, I need to create a query that returns data from the latest non-zero rebate for each ndc, only if that ndc is valid for the month 01-dec-2005 and 31-dec-2005. So my query should return

1 01-DEC-05 31-DEC-05          5 101
2 01-OCT-05 31-OCT-05          5 104
3 01-NOV-05 30-NOV-05          5 107
5 01-DEC-05 31-DEC-05          0 109 --since this ndc has no record for the months of november and oct

Also, NDC of 4 should not be retrieved because its from a past month of October.

I have this query 
SQL> SELECT x.ndc, NVL((case when (x.rebate = 0 or x.rebate is null) then
  2                (case when (y.rebate = 0 OR Y.REBATE IS NULL)  
  3          then z.eff_date
  4        else y.eff_date end)
  5       else x.eff_date
  6     end), '01-DEC-2005') as eff_date,
  7     NVL((case when (x.rebate = 0 or x.rebate is null)then
  8                (case when (y.rebate = 0 OR Y.REBATE IS NULL)
  9          then z.end_date
 10        else y.end_date end)
 11       else x.end_date 
 12     end), '31-DEC-2005') as end_date,
 13     NVL((case when (x.rebate = 0 or x.rebate is null) then
 14                (case when (y.rebate = 0 OR Y.REBATE IS NULL)
 15          then z.rebate
 16        else y.rebate end)
 17       else x.rebate 
 18     end), 0) as rebate
 19  from
 20   (SELECT NDC,EFF_DATE, END_DATE, rebate
 21   FROM rate_test
 22   where eff_date <= '01-DEC-05' AND END_DATE >= '31-DEC-05')x ,
 23   (SELECT NDC, EFF_DATE, END_DATE, rebate
 24   FROM rate_test
 25   where eff_date <= '01-NOV-05' AND END_DATE >= '30-NOV-05')y,
 26   (SELECT NDC, EFF_DATE, END_DATE, rebate
 27   FROM rate_test
 28   where eff_date <= '01-OCT-05' AND END_DATE >= '31-OCT-05')z
 29  where x.ndc = y.ndc(+) and
 30  X.NDC = Z.NDC(+) ;

       NDC EFF_DATE  END_DATE      REBATE
---------- --------- --------- ----------
         1 01-DEC-05 31-DEC-05          5
         3 01-NOV-05 30-NOV-05          5
         5 01-DEC-05 31-DEC-05          0
         2 01-OCT-05 31-OCT-05          5
There are 2 parts to my question :
1. I was wondering if there was a better way of writing a query to achieve the results I want. It seems too data specific. 
2. This query works because I know that I will not be looking back more than 2 months. What if I wanted to retrieve the latest non zero rebate row, even if it dates back 5-6 months ago? Is it possible to accomplish using SQL? 
Thanks in advance 

Tom Kyte
March 15, 2006 - 5:36 pm UTC

ops$tkyte@ORA10GR2> select ndc,
  2         case when rebate = 0 and last_rebate is not null
  3                  then last_eff_date
  4                          else eff_date
  5                  end eff_date,
  6         case when rebate = 0 and last_rebate is not null
  7                  then last_end_date
  8                          else end_date
  9                  end end_date,
 10         case when rebate = 0 and last_rebate is not null
 11                  then last_rebate
 12                          else rebate
 13                  end rebate,
 14         case when rebate = 0 and last_rebate is not null
 15                  then last_icn
 16                          else icn
 17                  end icn
 18    from (
 19  select rate_test.*,
 20                  lag(eff_date) over (partition by ndc order by eff_date) last_eff_date,
 21                  lag(end_date) over (partition by ndc order by eff_date) last_end_date,
 22                  lag(rebate)   over (partition by ndc order by eff_date) last_rebate,
 23                  lag(icn)      over (partition by ndc order by eff_date) last_icn
 24    from rate_test
 25   where (eff_date = to_date( '01-dec-2005', 'dd-mon-yyyy' ) and end_date = to_date( '31-dec-2005', 'dd-mon-yyyy' ))
 26      or rebate > 0
 27         )
 28   where (eff_date = to_date( '01-dec-2005', 'dd-mon-yyyy' ) and end_date = to_date( '31-dec-2005', 'dd-mon-yyyy' ))
 29  /

       NDC EFF_DATE  END_DATE      REBATE ICN
---------- --------- --------- ---------- -----
         1 01-DEC-05 31-DEC-05          5 101
         2 01-OCT-05 31-OCT-05          5 104
         3 01-NOV-05 30-NOV-05          5 107
         5 01-DEC-05 31-DEC-05          0 109
 

Rish G, March 16, 2006 - 11:45 am UTC

For lack of a good vocabulary, that was awesome!
Thanks a terabyte!

SQL

Michael, March 20, 2006 - 1:46 pm UTC

Tom,

Can you please guide me in creating an aggregated table.

I have one table which contains 60 months of data; each month has 135 million rows.

I have a report which processes the last 13 months of data.

Every month I have to refresh the report. Currently I am refreshing all 13 months of data, whereas I want to refresh only the current month's data; the remaining months stay as they are.

Can you please help me with how we can do this?

Thanks


Tom Kyte
March 22, 2006 - 1:23 pm UTC

sounds like a materialized view? are you aware of them and how they work?

SQL

michael, March 23, 2006 - 3:04 am UTC

Tom,

I know something about materialized views.

But I am not fully sure how they can be helpful in this case.

Can you please explain.

Thanks

Tom Kyte
March 23, 2006 - 10:45 am UTC

materialized views do this - they aggregate data, they pre-join data, they summarize data, they work with incremental updates (changes only).

You'll want to read:
</code> http://docs.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#g1028195 <code>
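As a sketch only (all names here are hypothetical, not from the question), a fast-refreshable aggregate materialized view looks like this; after each monthly load, only the changes are applied:

```sql
-- A materialized view log on the detail table is required for FAST refresh.
create materialized view log on sales
  with rowid (month_key, amount) including new values;

-- Aggregate MV; COUNT(*) and COUNT(amount) are required alongside SUM(amount)
-- for the view to be fast refreshable.
create materialized view sales_monthly_mv
  refresh fast on demand
  as
  select month_key, sum(amount) total_amount,
         count(amount) cnt_amount, count(*) cnt
    from sales
   group by month_key;

-- After loading the current month:
-- exec dbms_mview.refresh( 'SALES_MONTHLY_MV', 'F' )
```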

Update

Tony, March 23, 2006 - 11:45 am UTC

Tom,

What will be the impact on a large table if the rows are regularly updated?

For example:

I have a table which contains more than 1,500 million rows, and due to some changes in history data I am updating one column's value. This is the fourth time I am doing this activity for the same column. It is taking more than 10 hours to complete.
Is there any impact on table data fetch?

Thanks

Tom Kyte
March 23, 2006 - 1:36 pm UTC

"depends" - as always.

if the data is being read as it is being modified - then the consistent read mechanism will have to rollback the updates for the reads, making the read take a little longer as it accesses the blocks.

Selective Group By

TS, April 11, 2006 - 10:41 am UTC

Hi Tom,

I want to do a selective group by on a bunch of columns
in a table and at the same time display all the columns
eventually. Here's what I want:-

select col_a,col_b,sum(col_c),col_d,col_e
from table_a
group by col_a,col_b

Any help on this is greatly appreciated.



Tom Kyte
April 11, 2006 - 4:13 pm UTC

"selective group by"??

I don't know what you mean.


What should the sum(col_c) be in the above - got an example?

re: Selective Group By

Oraboy, April 11, 2006 - 5:22 pm UTC

I think the previous poster (TS) is looking to have a summary value along with the scalar values in the SQL
(select c1,c2,c3,sum(c2) from table)

I think analytics would be the way to go..

sQL>select * from test;

        C1         C2         C3         C4
---------- ---------- ---------- ----------
         1         11        101        499
         2         12        102        498
         3         13        103        497
         4         14        104        496
         5         15        105        495
         6         16        106        494
         7         17        107        493
         8         18        108        492
         9         19        109        491

9 rows selected.

sQL>select sum(c3) from test;

   SUM(C3)
----------
       945

sQL>select c1,c2,sum(c3) over () sum_of_c3,c4 from test;

        C1         C2  SUM_OF_C3         C4
---------- ---------- ---------- ----------
         1         11        945        499
         2         12        945        498
         3         13        945        497
         4         14        945        496
         5         15        945        495
         6         16        945        494
         7         17        945        493
         8         18        945        492
         9         19        945        491

9 rows selected.

--
option 2: use With clause to get all summaries

sQL>l
1 With sum_values as (select sum(c3) sum_of_c3, sum(c2) sum_of_c2 from test)
2 select c1,c2,sum_of_c2,c3,sum_of_c3,c4
3* from test,sum_values
sQL>/

   C1   C2  SUM_OF_C2         C3  SUM_OF_C3         C4
----- ---- ---------- ---------- ---------- ----------
    1   11        135        101        945        499
    2   12        135        102        945        498
    3   13        135        103        945        497
    4   14        135        104        945        496
    5   15        135        105        945        495
    6   16        135        106        945        494
    7   17        135        107        945        493
    8   18        135        108        945        492
    9   19        135        109        945        491

9 rows selected.

Tom Kyte
April 11, 2006 - 7:26 pm UTC

I agree, that is what I thought.

But - I would really like to hear it from them. Maybe I'm getting picky - but the ability to state specific requirements - to tell someone "this is what I want......" and have it be meaningful seems to be a lost art :(

INSTR documentation ambiguity?

Duke Ganote, April 13, 2006 - 3:20 pm UTC

The documentation for INSTR is pretty short
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/functions068.htm#sthref1467
but is it ambiguous?  Suppose the substring has a length greater than one.  I expected the search for, say, the 2nd occurrence to begin immediately after the <b>end</b> of the first occurrence.  However, it seems to begin after the <b>first character</b> of the first occurrence.

Here's an example.  I'm using CSV, and always enclosing the values in double-quotes.  But suppose the values are themselves commas?  Here I pass 3 values, each of which is a comma:

  1  with input_txt as (
  2     select '",",",",","' AS STRING
  3          , '","'         AS SUBSTRING
  4       from dual )
  5  select INSTR(string,substring,2,1) start_of_1st_occurs
  6       , INSTR(string,substring,2,2) start_of_2nd_occurs
  7       , INSTR(string,substring,2,3) start_of_3nd_occurs
  8* from input_txt
SQL> /

START_OF_1ST_OCCURS START_OF_2ND_OCCURS START_OF_3ND_OCCURS
------------------- ------------------- -------------------
                  3                   5                   7

However, I get a different answer if the payload values are, say, X

  1  with input_txt as (
  2     select '"x","x","x"' AS STRING
  3          , '","'         AS SUBSTRING
  4       from dual )
  5  select INSTR(string,substring,2,1) start_of_1st_occurs
  6       , INSTR(string,substring,2,2) start_of_2nd_occurs
  7       , INSTR(string,substring,2,3) start_of_3nd_occurs
  8* from input_txt
SQL> /

START_OF_1ST_OCCURS START_OF_2ND_OCCURS START_OF_3ND_OCCURS
------------------- ------------------- -------------------
                  3                   7                   0
I'd've expected the same results regardless of the values (well, unless we were passing "," as a value!).   

Tom Kyte
April 14, 2006 - 12:07 pm UTC

I must be missing something here - looks right to me?

you have entirely different strings to search, with different numbers of ","'s in them even!

INSTR - looks fine to me

Oraboy, April 13, 2006 - 10:52 pm UTC

Lets see carefully..
<quote>
1 with input_txt as (
2 select '",",",",","' AS STRING
</quote>

",",",","," <-- string data enclosed within single quote
12345678901 <-- character positions for better readability

Now Lets manually search your substring (which is ",")

instr(string,'","',2,1) :
from the 2nd position, looking for the first occurrence of ","
you find character positions 3-4-5 matching..

so instr(string,'","',2,1) = 3

instr(string,'","',2,2):
from the 2nd character position, look for the 2nd occurrence of ","
we know the 1st occurrence starts at the 3rd char.. looking for the 2nd,
you find the characters in positions 5-6-7 matching..

so instr(string,'","',2,2) = 5

which is exactly what you got.

Apply the same technique to your other string "x","x","x" and you will see that instr is working properly.

Perhaps your eyes are splitting the strings in your original data and reading them as 123,567,901...

what you missed is the fact that there are more matches within the data string, spanning positions 345 and 789

hope that helps

Another SQL Query...

A reader, April 24, 2006 - 3:58 am UTC

Hi Tom,
I have a query like this:

I want to select data from a table where the item exists for all the given months. As the IN operator is like writing an OR clause, it is useless in this scenario.

For e.g. using the Scott schema's emp table:

I wish to select all the employees of a particular dept. who have been hired in the month of say, Jan, Dec and March.

If the hiredates do not fall in all of these months they should not be displayed, i.e. if the hiredates are only in Jan and Dec then they should not be displayed.

I want to use a single select statement to achieve this.
I tried using analytics and connect by clause w/o any success.

Kindly help
Thanks as always



Tom Kyte
April 24, 2006 - 4:30 am UTC

select empno
from emp
where hiredate in ( to_date( :x ), to_date(:y), to_date(:z) )
group by empno
having count(distinct to_char(hiredate,'mm')) = 3;

find the records (where in)
aggregate the records
count to see that the distinct count of whatever is what you need.

You could use analytics as well if you didn't want to aggregate.

Word search

tony, April 26, 2006 - 8:03 am UTC

Tom,

Can you please guide me on how to search for an exact word in a string? I am using Oracle version 9.1.

If i have two strings
1) The oracle is great
2) The oracledatabase is great

I want to search for the rows where the word "oracle" exists on its own, so that I fetch only the 1st row.

Many thanks in advance.





Tom Kyte
April 26, 2006 - 8:32 am UTC

typically one would use a TEXT index for that.
</code> http://asktom.oracle.com/pls/ask/search?p_string=indextype+ctxsys.context <code>
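A minimal sketch of that approach (table and data taken from the question; the index syntax assumes Oracle Text is installed):

```sql
create table docs ( id number, txt varchar2(100) );
insert into docs values ( 1, 'The oracle is great' );
insert into docs values ( 2, 'The oracledatabase is great' );

-- A context index tokenizes the text into words, so CONTAINS matches
-- whole words rather than substrings.
create index docs_txt_idx on docs(txt) indextype is ctxsys.context;

select id, txt
  from docs
 where contains( txt, 'oracle' ) > 0;
```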

Approximately 461 records found.

A reader, April 26, 2006 - 9:46 am UTC

I assume
</code> http://asktom.oracle.com/pls/ask/search?p_string=indextype+ctxsys.context <code>
uses a TEXT index, correct ?

However, it also shows
"Approximately 461 records found. "

what is the meaning of this ?
(I can click "Next" 5-times and last hit is
63 Context Index Score(7) 25 Oct 2000 7pm 25 Oct 2000 7pm 5.5 years old
)

however, 63 != 461 not even approximately

Tom Kyte
April 27, 2006 - 4:07 am UTC

It is a guess.

Goto google, search for Oracle. Report back

a) how many pages it guessed
b) how many you could actually get to

A reader, April 26, 2006 - 9:52 am UTC

already found the answer on
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3489618933902#5173364031614 <code>

"...
intermedia uses its index to get a guesstimate
..."

so why does intermedia guess so badly ?

Tom Kyte
April 27, 2006 - 4:09 am UTC

because I modify documents constantly and the maintenance of the index is "deferred" - so if I update a document 100 times, it might overcount that document by that much.

but - goto google, search for oracle...

google

A reader, April 27, 2006 - 4:32 am UTC

ok, I went to google and searched for oracle.

to make things perform better, I searched
only for pages written in these language(s): Catalan

google reported 47,600 hits, and I managed to get to 661.

so, what did we learn from that?

a. google estimates even worse than Oracle (at least sometimes, at least once)

b. Oracle doesn't seem to be very ambitious in improving (it seems you are content when you are better than Google)

c. because of that, I'd probably better sell my Oracle stocks and buy ... ?


Tom Kyte
April 27, 2006 - 3:00 pm UTC

google is the gold standard for searching.

that is all I have to say. I'm not worried about it *at all*.

I'm not saying "better than google", I'm saying "we should aspire to be as fast as and as good as google when searching on the web"

and counting hits precisely is just "stupid", my humble opinion. To get it right you know what you have to do?

think about it *(you have to do the search, well, guess what - that would not be "fast")*

SQL join

John, April 27, 2006 - 8:25 am UTC

Tom,

I have tables A, B, C and D.
A contains idx_no as (PK)
and B, C and D have idx_no as an FK.
They contain:
A :
Idx_no Fld_nam
-------------
1 A
2 B
3 C
4 F
5 G
6 H
7 I
8 J
9 K
10 L

B :
Idx_no Fld_nam
-------------
1 F
2 G
3 H
C :
Idx_no Fld_nam
-------------
4 I
5 J

D :
Idx_no Fld_nam
-------------
6 K
7 L

Now the thing is, the relation is like: A-B, A-C, A-D
i.e.


can you please help me to write the query for this?

Tom Kyte
April 27, 2006 - 3:15 pm UTC

laughing out loud...

"query for this"

with absolutely zero description of what "this" is? neat.


but also remember-

no creates, no insert into's, no look

with no details - really - no look.

INSTR ambiguity revisited

Duke Ganote, April 30, 2006 - 9:40 pm UTC

I've looked at your comment
</code> http://asktom.oracle.com/pls/ask/f?p=4950:61:4666823155820860062::::P61_ID:3083285970877#61672341613426 <code>
and Oraboy's comment following yours.

Please let me try again. "," is my delimiter. And here's the test query:
with input_txt as (
select '",",","' AS STRING
-- 1234567 character positions in STRING
-- , , payload is two strings, each one is just a comma
, '","' AS SUBSTRING -- delimiter
from dual )
select INSTR(string,substring,2,1) start_of_1st_occurs
, INSTR(string,substring,2,2) start_of_2nd_occurs
from input_txt

The first occurrence is in positions 3-5, we all agree. I'd expect the search for the 2nd occurrence to start at position 6 -- after the end of the 1st occurrence. (That's what we need when parsing the CSV.) There really is no 2nd occurrence after the first occurrence when parsing the CSV.

Instead INSTR's search begins at position 4, so it finds a "second occurrence" starting in position 5.

I just think the INSTR documentation is unclear, because it searches for 2nd and subsequent occurrences starting immediately after the position of the prior occurrence -- not after the end of the SUBSTRING (delimiter). And that's an important distinction when parsing the CSV.

What I'd expected was something that behaved more like this formulation:

with input_txt as (
select '",",","' AS STRING
-- 1234567
-- x,xxx,x
, '","' AS SUBSTRING
from dual )
select INSTR(string,substring,2,1) start_of_1st_occurs
, INSTR(string,substring,
INSTR(string,substring,2,1)+length(substring),2)
start_of_2nd_occurs
from input_txt
/
START_OF_1ST_OCCURS START_OF_2ND_OCCURS
------------------- -------------------
3 0

Which is exactly the result if the payload is anything other than commas...

with input_txt as (
select '"#","#"' AS STRING
-- 1234567
, '","' AS SUBSTRING
from dual )
select INSTR(string,substring,2,1) start_of_1st_occurs
, INSTR(string,substring,2,2) start_of_2nd_occurs
from input_txt
/
START_OF_1ST_OCCURS START_OF_2ND_OCCURS
------------------- -------------------
3 0

Tom Kyte
May 01, 2006 - 2:05 am UTC

what else can we say other than "sorry"?

I do not think it is fuzzy or anything. You are thinking instr works like "strtok" in C and parses strings.

It did not, it does not. It started looking at the character position after the place where the first string began, if you would like to have it work differently, you would have to specify the character to start from.


Your last example is just misleading - you did not change all of the commas to #'s, just the ones that suited you - instr is NOT a "parser" ala strtok of C, it is what it is. Nothing more.

biggest tablespace size

Sam, May 01, 2006 - 6:44 am UTC

Dear Tom,
how can I find out the biggest tablespace size in an Oracle database?

Thanks with Regards,
sam

Tom Kyte
May 02, 2006 - 2:40 am UTC

via a query :)

my question to you - what if TWO of the tablespaces are "the biggest" or thirty of them?

Here is one way, it'll return ALL of the largest tablespaces
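One way to write such a query (a sketch assuming DBA_DATA_FILES is accessible; every tablespace tied for largest comes back):

```sql
select tablespace_name, total_bytes
  from ( select tablespace_name, sum(bytes) total_bytes
           from dba_data_files
          group by tablespace_name )
 where total_bytes = ( select max(sum(bytes))
                         from dba_data_files
                        group by tablespace_name );
```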

join 2 tables

A reader, May 01, 2006 - 12:30 pm UTC

tom:

I have 2 tables
t1:
a 11
b 11
c 11
t2:
a 22
b 22
d 22
e 22

I want a query result like
a 11 22
b 11 22
c 11
d 22
e 22

All I can think of is to union two outer join query results, but obviously that is not the best solution; please advise.

Tom Kyte
May 02, 2006 - 2:58 am UTC

I would have shown you the syntax for a "full outer join" - but you didn't give any create tables or inserts - so, I won't.

You can read about it on some of these pages:
</code> http://asktom.oracle.com/pls/ask/search?p_string=%22full+outer+join%22 <code>
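For reference, the ANSI syntax looks like this (a sketch only; column names t1(id, val1) and t2(id, val2) are assumed, since no create tables were given):

```sql
-- NVL picks whichever side of the full outer join supplied the key.
select nvl(t1.id, t2.id) id, t1.val1, t2.val2
  from t1 full outer join t2
    on ( t1.id = t2.id )
 order by 1;
```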



INSTR vs C's strtok (string tokenizer)

Duke Ganote, May 01, 2006 - 12:43 pm UTC

I see now that the redoubtable Alberto Dell'Era asked for a native SQL string tokenizer akin to C's strtok
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:23060676216774#50707673544404 <code>
As he said "parsing a comma or tab-separated string is something that happens quite frequently"

Regarding my last example, I did not "change ...just ones [commas] that suited [me]". I changed the payload from 2 strings of one comma each to 2 strings of a pound-sign each. Each example STRING has tokens using double-quotes and comma according to the CSV spec.

Tom Kyte
May 02, 2006 - 3:03 am UTC

you are asking for a string tokenizer.

That is not what INSTR does (sorry, just stating a "fact", I cannot change the "fact").

You did just change the comma's that suited you to make the example "work" in your case. You are looking for a STRING TOKENIZER, you are not looking for what INSTR does - for what INSTR is documented to do.

string tokenizer

Duke Ganote, May 03, 2006 - 10:19 am UTC

INSTR does what INSTR does; no problem. Just noting that the documentation's statements:
* "position [indicates] the character of string where Oracle Database begins the search"
* "occurrence [indicates] which occurrence of string Oracle should search for"
Don't warn the reader (or -at least- didn't this reader) that when SUBSTRING has a length greater than 1 and OCCURRENCE is greater than 1, there's a difference in functionality between INSTR and a string tokenizer.

Tom Kyte
May 03, 2006 - 1:06 pm UTC

"position is an nonzero integer indicating the character of string where Oracle Database begins the search. If position is negative, then Oracle counts backward from the end of string and then searches backward from the resulting position."

You start at position N in the string.
You look for the N'th occurrence of a string from THAT POSITION.

Instr has no clue about what happened BEFORE that position; it isn't remembering your previous function calls - not like the C function strtok for example.

I'm not sure why such a warning would be necessary since instr makes no claims to be a tokenizer (tokenizers are usually very proud of being such - and would make the claim)

Warning: Instr is not a way to add two numbers
Warning: Instr would not be a good choice to concatenate two strings

?

token comment on INSTR

Duke Ganote, May 03, 2006 - 3:12 pm UTC

I wholly agree that "INSTR has no clue about what happened BEFORE that [starting] position".   To me (at least from the documented description and its examples), it's unclear that INSTR finds "overlapping" occurrences of a SUBSTRING after the starting POSITION, when you ask for more than one occurrence:

select INSTR(',,,,,,,',',,',1,3)
  from dual
SQL> /

INSTR(',,,,,,,',',,',1,3)
-------------------------
                        3 

query to compare data

thirumaran, May 08, 2006 - 10:32 am UTC

Hi Tom,

Table_name = EMPLOYEE
Column_name = SAL
I need to retrieve data only comparing the SAL column .
(ie)
SAL
===
100
110
200
200
I am trying to write a query to check the next value below/above and return the record if it matches.

100 <> 110 (records do not match)
110 <> 200 (records do not match)
200 = 200 (matches and this should be listed).

how do I achieve this?

thanks in adv
thirumaran



Tom Kyte
May 08, 2006 - 10:53 am UTC

Need more than "that".

perhaps what you really mean is something like:

select sal, count(*)
from emp
group by sal
having count(*) > 1
order by sal;


that'll print out all repeated sal's and how many times they repeat.
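If the intent is really to compare each row to its neighbor rather than to find duplicates, an analytic sketch (assuming the EMPLOYEE table from the question) would be:

```sql
-- LAG looks at the previous SAL in sorted order; keep the rows
-- where the current and previous values match.
select sal
  from ( select sal,
                lag(sal) over (order by sal) prev_sal
           from employee )
 where sal = prev_sal;
```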

Making copy of Table with Nested Table column.

Sujit Mondal, May 09, 2006 - 6:26 am UTC

Hi Tom,
We are maintaining a database developed by another team.
I have a table in which there is a column of type NESTED TABLE. I want to create a copy of this table in some other database schema; when trying "create table as select *" we get an error because of the nested table column. Can you please let me know how I can do that?

Tom Kyte
May 09, 2006 - 8:27 am UTC

ops$tkyte@ORA10GR2> drop user a cascade;
 
User dropped.
 
ops$tkyte@ORA10GR2> drop user b cascade;
 
User dropped.
 
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> grant create session, create type, create table to a identified by a;
 
Grant succeeded.
 
ops$tkyte@ORA10GR2> alter user a default tablespace users quota unlimited on users;
 
User altered.
 
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> grant create session, create table to b identified by b;
 
Grant succeeded.
 
ops$tkyte@ORA10GR2> alter user b default tablespace users quota unlimited on users;
 
User altered.
 
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> connect a/a
Connected.
a@ORA10GR2> create type myTableType as table of number
  2  /
 
Type created.
 
a@ORA10GR2> create table t ( x int, y myTableType ) nested table y store as y_tab;
 
Table created.
 
a@ORA10GR2>
a@ORA10GR2> grant execute on myTableType to b;
 
Grant succeeded.
 
a@ORA10GR2> grant select on t to b;
 
Grant succeeded.
 
a@ORA10GR2>
a@ORA10GR2> connect b/b
Connected.
b@ORA10GR2> create table t nested table y store as y_tab as select * from a.t;
 
Table created.
 

Sql Question

Yoav, May 12, 2006 - 2:45 pm UTC

Hi Tom,
The following WHERE clause:
where decode(some_column_name, 0, :b1, some_column_name) = :b1
can also be written like this:
where (case when some_column_name = 0 then :b1
            else some_column_name
       end) = :b1
Can you suggest another option?
Thanks

Tom Kyte
May 12, 2006 - 9:34 pm UTC

sure, there are thousands of convoluted ways to write that.

but the two you have are more than sufficient. stick with them.
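For anyone puzzling over what this predicate selects: 0 acts as a "match anything" marker, so rows where the column is 0 or equals the bind come back. A sketch of the CASE form against SQLite with tiny sample data (illustration only; the column name is from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (some_column_name integer)")
conn.executemany("insert into t values (?)", [(0,), (5,), (7,)])

rows = conn.execute("""
    select some_column_name
      from t
     where (case when some_column_name = 0 then :b1
                 else some_column_name
            end) = :b1
     order by some_column_name
""", {"b1": 5}).fetchall()
print(rows)  # [(0,), (5,)] -- 0 matches any bind, 5 matches this bind
```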

Update statement - need to know old value

Anne, May 17, 2006 - 1:19 pm UTC

Hi Tom,

I need to write an update statement, but I also need to know the old value. Would you please advise which method is better performance-wise:
1. Method 1: Two update statements. The first one is just to get the original value:
update transactions_detail
set line_dist_bal_amt = line_dist_bal_amt
where trx_id = cmadj_rec.trx.trx_id
and org_id = cmadj_rec.trx.org_id
and trx_type <> g_cmadj
returning line_dist_bal_amt into l_orig_cmadj_dist_bal_amt;

update transactions_detail
set line_dist_bal_amt = line_dist_bal_amt - l_line_dist_applied_amt
where trx_id = cmadj_rec.trx.trx_id
and org_id = cmadj_rec.trx.org_id
and trx_type <> g_cmadj
returning line_dist_bal_amt into l_new_cmadj_dist_bal_amt ;

2. Method 2 : Using select then update by rowid
select line_dist_bal_amt , rowid
into l_orig_cmadj_dist_bal_amt, l_rowid
from transactions_detail
where trx_id = cmadj_rec.trx.trx_id
and org_id = cmadj_rec.trx.org_id
and trx_type <> g_cmadj ;

update transactions_detail
set line_dist_bal_amt = line_dist_bal_amt - l_line_dist_applied_amt
where rowid = l_rowid
returning line_dist_bal_amt into l_new_cmadj_dist_bal_amt;

Appreciate your comments on this.

Thanks,
Anne

Tom Kyte
May 18, 2006 - 10:18 am UTC

it really goes back to your transaction logic here. You are updating a "row blindly" in #1 above.

how can you update this row without having seen this row in the first place? And if you have seen this row (eg: you selected it out on some other page and displayed it to the user), how do you prevent LOST UPDATES....


but if you are doing a blind update, it would be option #2 with a select FOR UPDATE (to lock and protect the row) followed by the update.

Actually, if you just did the blind update, get the new value back - you would know the old value simply by adding l_line_dist_applied_amt to it!
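Tom's last point in plain terms: after the blind update you already know both values, because old = new + applied amount. A minimal SQLite sketch of that arithmetic (table and column names taken from the question, the rest trimmed for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table transactions_detail (trx_id integer, line_dist_bal_amt real)")
conn.execute("insert into transactions_detail values (1, 100.0)")

l_line_dist_applied_amt = 30.0
conn.execute(
    "update transactions_detail"
    "   set line_dist_bal_amt = line_dist_bal_amt - ?"
    " where trx_id = 1",
    (l_line_dist_applied_amt,),
)
new_bal = conn.execute(
    "select line_dist_bal_amt from transactions_detail where trx_id = 1"
).fetchone()[0]
old_bal = new_bal + l_line_dist_applied_amt  # no second statement needed
```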

A strange case of case

Rish G., May 21, 2006 - 4:05 pm UTC

Hi Tom,
  I'm trying to display years as a range. For example, from the emp table, for any employee that has worked more than 24 years, I want to display the number of years worked as 25+; otherwise just display the number of years.
I first tried using a CASE expression and could not find an explanation for the following.

SQL*Plus: Release 9.2.0.1.0 - Production on Sun May 21 14:47:54 2006

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.1.0 - Productio
NLSRTL Version 10.2.0.1.0 - Production

Case 1 : works fine without the else statement.

SQL> select empno, hiredate, case when trunc(((sysdate-hiredate)/365.26), 0) > 24 then '25+' end as num_years
  2  from emp;

     EMPNO HIREDATE  NUM
---------- --------- ---
      7369 17-DEC-80 25+
      7499 20-FEB-81 25+
      7521 22-FEB-81 25+
      7566 02-APR-81 25+
      7654 28-SEP-81
      7698 01-MAY-81 25+
      7782 09-JUN-81
      7788 09-DEC-82
      7839 17-NOV-81
      7844 08-SEP-81
      7876 12-JAN-83
      7900 03-DEC-81
      7902 03-DEC-81
      7934 23-JAN-82

14 rows selected.

Case 2: When I add the ELSE branch to display the number of years worked if less than or equal to 24, it errors:
SQL> select empno, hiredate, case when trunc((sysdate-hiredate)/365.26) > 24 then '25+'
  2               else  trunc((sysdate-hiredate)/365.26) end as num_years
  3  from emp;
             else  trunc((sysdate-hiredate)/365.26) end as num_years
                   *
ERROR at line 2:
ORA-00932: inconsistent datatypes: expected CHAR got NUMBER

Case 3: When I convert to character datatypes it works partially, but when it encounters a large difference such as 101 years, it shows the value 101 instead of 25+, because the comparison is now a character comparison ('101' sorts below '24' as a string).

insert into emp(empno, hiredate, deptno)
values(8888, to_date('01/01/1905', 'mm/dd/yyyy'), 20)    

SQL> select empno, hiredate, case when to_char(trunc((sysdate-hiredate)/365.26)) > to_char(24) then '25+'
  2               else  to_char(trunc((sysdate-hiredate)/365.26)) end as num_years
  3  from emp;

     EMPNO HIREDATE  NUM_YEARS
---------- --------- ----------------------------------------
      7369 17-DEC-80 25+
      7499 20-FEB-81 25+
      7521 22-FEB-81 25+
      7566 02-APR-81 25+
      7654 28-SEP-81 24
      7698 01-MAY-81 25+
      7782 09-JUN-81 24
      7788 09-DEC-82 23
      7839 17-NOV-81 24
      7844 08-SEP-81 24
      7876 12-JAN-83 23
      7900 03-DEC-81 24
      7902 03-DEC-81 24
      7934 23-JAN-82 24
      8888 01-JAN-05 101

15 rows selected.

Can you explain this behavior? Why does it error when the ELSE branch is added, yet work after an explicit character conversion?

FYI, this works using the decode function without an explicit conversion to character

SQL> select empno, decode(greatest(trunc((sysdate-hiredate)/365.25), 25), trunc((sysdate-hiredate)/365.25), '25+',
  2                trunc((sysdate-hiredate)/365.25)) years
  3  from emp;

     EMPNO YEARS
---------- ----------------------------------------
      7369 25+
      7499 25+
      7521 25+
      7566 25+
      7654 24
      7698 25+
      7782 24
      7788 23
      7839 24
      7844 24
      7876 23
      7900 24
      7902 24
      7934 24
      8888 25+

15 rows selected.

 

Tom Kyte
May 21, 2006 - 8:18 pm UTC

with the case, you are one time returning a string (25+) and one time a number and case says "oh no, you don't get anything implicit from me, TELL me what to do - am I to return a string or a number"

you could use to_char on your number to tell it.


decode on the other hand says "i will peek at the first return value and implicitly convert every other return value to this type". Which is a really bad thing (the side effects I've seen over the years, OUCH)



SQL Query

A reader, May 23, 2006 - 9:19 am UTC

Hi tom, 
I have data like this:

SQL> select net_name, component from test_table;

NET_NAME                       COMPONENT
------------------------------ ----------
E_S_DIAFI                      CON_PIN
E_S_DIAFI                      Cemi
E_S_DIAFI                      R3
DIAFI_P5_GPIO08                R3
DIAFI_P5_GPIO08                R4

now this table does not have ID/PARENTID columns for me to use a CONNECT BY clause, but my requirement is something similar.

Based on the start value 'CON_PIN' of a component, I want to start the tree.

so based on the CON_PIN value, I will get the net_name; for that net_name I will again get the components; now for those components I will again get the net_names, and so on.

in the process, I want to build the tree based on the retrieved order, something like this.

output 
--------
CON_PIN
       E_S_DIAFI
                Cemi
                R3
                  DIAFI_P5_GPIO08
                                 R4

Is this possible with a query?
Here is the test case.

create table test_table(net_name varchar2(30), component varchar2(30))
/
insert into test_table values('E_S_DIAFI','CON_PIN')
/
insert into test_table values('E_S_DIAFI','Cemi')
/
insert into test_table values('E_S_DIAFI','R3')
/
insert into test_table values('DIAFI_P5_GPIO08','R3')
/
insert into test_table values('DIAFI_P5_GPIO08','R4')
/
 

Tom Kyte
May 23, 2006 - 10:07 am UTC

i don't understand the logic. sometimes you connect by one thing and again by something else.

A reader, May 23, 2006 - 10:13 am UTC

Thanks. It's like: start with a component and get the net names; for each net name, get the components; for each component, get the net names again.

in the meantime, if the net name is GND, stop there; if the net name has already been followed, stop there.

Tom Kyte
May 23, 2006 - 10:19 am UTC

unless you are on 10g, the "if the net name has already been followed" bit isn't going to happen (10g added CONNECT BY NOCYCLE and the CONNECT_BY_ISCYCLE pseudocolumn)

A reader, May 23, 2006 - 10:28 am UTC

Yes, we will be migrating to 10g. Can you please show how the solution might look?

Tom Kyte
May 23, 2006 - 3:25 pm UTC

here is an idea of what you could get for example:

ops$tkyte@ORA10GR2> select rpad('*',2*level,'*') || net_name || ',' || component txt
  2    from test_table
  3   start with component = 'CON_PIN'
  4  connect by NOCYCLE
  5             case when mod(level,2) = 0 then prior net_name else prior component end
  6             =
  7             case when mod(level,2) = 0 then net_name else component end
  8  /

TXT
----------------------------------------
**E_S_DIAFI,CON_PIN
****E_S_DIAFI,Cemi
****E_S_DIAFI,R3
******DIAFI_P5_GPIO08,R3
********DIAFI_P5_GPIO08,R4
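The alternating CONNECT BY above can also be mimicked procedurally. This Python sketch (an illustration of the traversal, not a translation of Oracle's algorithm) expands component -> net_name -> components, skips any (net_name, component) row already visited on the current path, and reproduces the tree shown:

```python
rows = [
    ("E_S_DIAFI", "CON_PIN"),
    ("E_S_DIAFI", "Cemi"),
    ("E_S_DIAFI", "R3"),
    ("DIAFI_P5_GPIO08", "R3"),
    ("DIAFI_P5_GPIO08", "R4"),
]

def walk(start_component):
    out = []

    def nets_for(comp, seen, depth):
        # a component connects to every net_name it appears under
        for net, c in rows:
            if c == comp and (net, c) not in seen:
                out.append("  " * depth + f"{net},{c}")
                comps_for(net, seen | {(net, c)}, depth + 1)

    def comps_for(net, seen, depth):
        # a net_name connects to every component on it
        for n, comp in rows:
            if n == net and (n, comp) not in seen:
                out.append("  " * depth + f"{n},{comp}")
                nets_for(comp, seen | {(n, comp)}, depth + 1)

    nets_for(start_component, set(), 1)
    return out

print("\n".join(walk("CON_PIN")))
```

Passing `seen` down each branch gives the same path-based cycle protection NOCYCLE provides.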

 

sql query

nn, May 23, 2006 - 3:39 pm UTC

NET_NAME COMPONENT
------------------------------ ----------
E_S_DIAFI CON_PIN
E_S_DIAFI Cemi
E_S_DIAFI R3
DIAFI_P5_GPIO08 R3
DIAFI_P5_GPIO08 R4

I need your help to format output like

NET_NAME COMPONENT
------------------------------ ----------
E_S_DIAFI CON_PIN,Cemi,R3


Tom Kyte
May 24, 2006 - 6:54 am UTC

search site for stragg
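stragg is a user-defined aggregate published on this site; most engines now ship an equivalent built-in (LISTAGG in later Oracle releases, group_concat in SQLite/MySQL). A quick SQLite sketch of the requested output, for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table test_table (net_name text, component text)")
conn.executemany("insert into test_table values (?, ?)", [
    ("E_S_DIAFI", "CON_PIN"),
    ("E_S_DIAFI", "Cemi"),
    ("E_S_DIAFI", "R3"),
])

rows = conn.execute("""
    select net_name, group_concat(component, ',')
      from test_table
     group by net_name
""").fetchall()
# one row per net_name, components collapsed into a comma-separated string
print(rows)
```

Note that, like stragg, group_concat makes no ordering promise about the elements inside the string.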

stragg

nn, May 24, 2006 - 5:25 pm UTC

select distinct vehicle_id, pl.name
from vehicles_d2d_rules vd,
sales_rules_access_groups srg,
access_groups_prv_label_infos agpl,
private_label_infos pl
where vd.D2D_RULE_ID = srg.SALES_RULE_ID
and srg.ACCESS_GROUP_ID = agpl.ACCESS_GROUP_ID
and agpl.PRIVATE_LABEL_INFO_ID = pl.PRIVATE_LABEL_INFO_ID
and vd.vehicle_id in ( 5102847,4300949)

How can I add stragg here?


Tom Kyte
May 25, 2006 - 1:26 pm UTC

select vehicle_id, stragg(pl.name)
from ....
....
group by vehicle_id

error on strag

nn, May 26, 2006 - 1:37 pm UTC

SQL> select vehicle_id, stragg(pl.name) 
  2  from vehicles_d2d_rules vd,
  3  sales_rules_access_groups srg,
  4  access_groups_prv_label_infos agpl,
  5  private_label_infos pl
  6  where vd.D2D_RULE_ID = srg.SALES_RULE_ID
  7  and srg.ACCESS_GROUP_ID = agpl.ACCESS_GROUP_ID
  8  and agpl.PRIVATE_LABEL_INFO_ID = pl.PRIVATE_LABEL_INFO_ID
  9  and vd.vehicle_id in ( 5102847,4300949) group by vehicle_id ;
select vehicle_id, stragg(pl.name)
                   *
ERROR at line 1:
ORA-00904: "STRAGG": invalid identifier
 

Tom Kyte
May 27, 2006 - 9:30 pm UTC

so, did you search on stragg on this site (and hence discover it is a package that I wrote and you can use?)

Interesting that this page alone has "search this site for stragg" more than once, more than twice, Heck there is even a comment "STRAGG as defined here: "

Pauline, May 30, 2006 - 5:32 pm UTC

Tom,
We have one query, taken from a package, that needs to be tuned with your help.

1. This query is always slow on its first run but gets faster with repeated execution because
the data is cached in memory. For example, in our development database it returns results in 3 seconds on the first run, 1.5 seconds on the second, 0.9 seconds on the third, and 0.5 seconds on the fourth.

We really need it to be as fast as the 4th run. How can we make it fast the first time the application
executes this package?

2. The speed is related to how much data is in the result set. For example, in the development database we query campaign_id = 18, which returns 45 rows; in the staging database we query campaign_id = 1432 (there is no campaign_id 18 in staging), which returns more than 180 rows, and the query is much slower there (triple the development time).
If we query campaign_id = 18 in staging, since there is no data in the result set, it returns right away.
How can I tune the query based on this behaviour?

The query is as following:

SELECT
cp.campaign_pty_id,
cp.campaign_id,
cp.note_txt,
cp.create_dt,
cp.create_user_id,
cp.last_update_dt,
cp.last_update_user_id,
pas.pty_id ,
pas.party_nm,
cp.number_attending ,
cp.campaign_id,
cp.guest_list_txt,
cp.campaign_id,
cp.campty_stat_id,
cps.campty_status_nm ,
cp.invited_by_txt,
pas.address_id,
pas.address_line_txt,
pas.city_nm,
pas.greg_st_prov_id ,
pas.state_county_nm,
pas.postal_code,
pas.greg_country_id,
pas.country_nm,
i.dietary_list_txt ,
pas.clientsegment_id,
MAIL_TO_PTY_ID,
P.PARTY_NM AS MAIL_TO_PTY_NM
FROM CAMPAIGN_PARTIES cp
, CAMPAIGNS c
, CAMPAIGN_PARTY_STATUSES cps
,( SELECT PTY_ID, ADDRESS_ID, party_nm, address_line_txt, city_nm, greg_st_prov_id , state_county_nm
, postal_code , greg_country_id, country_nm, clientsegment_id, org_or_last_nm_sort, first_nm
FROM PARTY_ADDRESS_SEARCH pas
where SubStr(upper(org_or_last_nm_sort),1,1) = upper('a')) pas
, INDIVIDUALS i
, PARTIES p
where cp.campaign_id = 18
AND cp.campty_stat_id = cps.campty_stat_id
AND cps.campty_stat_id <> 6
AND cp.address_id = pas.address_id
AND cp.pty_id = pas.pty_id
AND cp.campaign_id = c.campaign_id
AND cp.pty_id = i.pty_id(+)
AND cp.MAIL_TO_PTY_ID= P.PTY_ID
ORDER BY pas.address_line_txt
/


The execution plan and cost for the query is:

1.92 SELECT STATEMENT Hint=CHOOSE 92
2.1 SORT ORDER BY 92
3.1 NESTED LOOPS 90
4.1 NESTED LOOPS OUTER 88
5.1 NESTED LOOPS 86
6.1 NESTED LOOPS 85
7.1 NESTED LOOPS 7
8.1 INDEX UNIQUE SCAN XPKCAMPAIGNS UNIQUE
8.2 TABLE ACCESS BY INDEX ROWID CAMPAIGN_PARTIES 6
9.1 INDEX RANGE SCAN XIF3CAMPAIGN_PARTIES NON-UNIQUE 1
7.2 TABLE ACCESS BY INDEX ROWID PARTY_ADDRESS_SEARCH 3
8.1 INDEX RANGE SCAN PA_SEARCH_ADDRESS_ID_IDX NON-UNIQUE 2
6.2 TABLE ACCESS BY INDEX ROWID CAMPAIGN_PARTY_STATUSES 1

7.1 INDEX UNIQUE SCAN XPKCAMPAIGN_PARTY_STATUSES UNIQUE
5.2 TABLE ACCESS BY INDEX ROWID INDIVIDUALS 2
6.1 INDEX UNIQUE SCAN XPKINDIVIDUALS UNIQUE 1
4.2 TABLE ACCESS BY INDEX ROWID PARTIES 2
5.1 INDEX UNIQUE SCAN XPKPARTIES UNIQUE 1

18 rows selected.

Please give the hand or idea of how to speed up the query.

Thanks in advance.

Tom Kyte
May 30, 2006 - 7:11 pm UTC

1) buy infinitely fast disk? Think about it - if we have to do physical IO (unavoidable in the REAL WORLD - your development instance is "like a dust free lab", you'll likely never really see "4" in real life as you'll likely have other stuff going on)



2) suggest before you do anything, you dig into "how to use tkprof" to analyze what your queries are

a) doing
b) how they are doing it
c) what they are waiting on

an understanding of that will go a long long way towards understanding what you can and cannot do

(hint, one thing I would be looking for would be index scans that have LOTS of rows returned but the resulting table access by index rowid has FEW rows - by adding a column or two to an index, you might well be able to massively cut down the work performed. For example..

I force the use of an index on just owner, and then on owner,object_type - showing that by simply adding object_type to the index we can massively reduce the work:

select /*+ index( big_table bt_owner_idx ) */ * from big_table
where owner = 'SYS' and object_type = 'SCHEDULE'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch       15     12.92      35.70     131052     131188          0        202
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       17     12.93      35.70     131052     131188          0        202

Rows     Row Source Operation
-------  ---------------------------------------------------
    202  TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=131188 pr=131052 pw=0 time=38076066 us)
4623132  INDEX RANGE SCAN BT_OWNER_IDX (cr=9689 pr=9674 pw=0 time=4692939 us)(object id 66535)

here we see 4,623,132 rows out of the index, resulting in a mere 202 rows from the table - by simply adding another column to the index:

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       15        0.00          0.00
  db file sequential read                     131052        0.02         25.95
  SQL*Net message from client                     15        0.01          0.03
********************************************************************************
select /*+ index( big_table bt_owner_object_type_idx ) */ * from big_table
where owner = 'SYS' and object_type = 'SCHEDULE'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch       15      0.03       0.57        165        252          0        202
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       17      0.04       0.58        165        252          0        202

Rows     Row Source Operation
-------  ---------------------------------------------------
    202  TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=252 pr=165 pw=0 time=41046 us)
    202  INDEX RANGE SCAN BT_OWNER_OBJECT_TYPE_IDX (cr=18 pr=3 pw=0 time=12930 us)(object id 66536)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       15        0.00          0.00
  db file sequential read                        165        0.01          0.55
  SQL*Net message from client                     15        0.01          0.05

we can avoid going to and from the index/table/index/table ..... and avoid the work involved
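The same effect can be seen in miniature with SQLite's EXPLAIN QUERY PLAN (a sketch for illustration; the table and index names mirror the example above): when both equality predicates are covered by the two-column index, the lookup is resolved inside the index before any table rows are visited.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table big_table (owner text, object_type text, payload text)")
conn.execute("create index bt_owner_idx on big_table(owner)")
conn.execute("create index bt_owner_object_type_idx on big_table(owner, object_type)")

plan = conn.execute("""
    explain query plan
    select * from big_table
     where owner = 'SYS' and object_type = 'SCHEDULE'
""").fetchall()
for row in plan:
    print(row[-1])  # the planner picks the two-column index for both predicates
```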



bits ..

Gabe, May 30, 2006 - 11:45 pm UTC

Regarding the query above … is there any merit to these observations:

1. They don’t select anything from “c” (“campaigns”) … assuming, as the naming seems to suggest, that “campaign_parties” is a child of “campaigns” strictly enforced with a FK constraint then “campaigns” can be taken out of the query all together.

2. Re-write the predicate “cps.campty_stat_id <> 6” against “cp” and investigate adding “cp.campty_stat_id” to the index on “campaign_id” (XIF3CAMPAIGN_PARTIES I assume) … might be there already.


Tom Kyte
May 31, 2006 - 8:48 am UTC

1) indeed, in fact:

FROM CAMPAIGN_PARTIES cp
, CAMPAIGNS c
, CAMPAIGN_PARTY_STATUSES cps
,( SELECT PTY_ID, ADDRESS_ID, party_nm, address_line_txt, city_nm,
greg_st_prov_id , state_county_nm
, postal_code , greg_country_id, country_nm, clientsegment_id,
org_or_last_nm_sort, first_nm
FROM PARTY_ADDRESS_SEARCH pas
where SubStr(upper(org_or_last_nm_sort),1,1) = upper('a')) pas
, INDIVIDUALS i
, PARTIES p
where cp.campaign_id = 18
AND cp.campty_stat_id = cps.campty_stat_id
AND cps.campty_stat_id <> 6
AND cp.address_id = pas.address_id
AND cp.pty_id = pas.pty_id
AND cp.campaign_id = c.campaign_id
AND cp.pty_id = i.pty_id(+)
AND cp.MAIL_TO_PTY_ID= P.PTY_ID
ORDER BY pas.address_line_txt
/

could at least be (assuming campaign_id is primary key of campaigns - as it seems it "would be")

FROM CAMPAIGN_PARTIES cp
, CAMPAIGN_PARTY_STATUSES cps
,( SELECT PTY_ID, ADDRESS_ID, party_nm, address_line_txt, city_nm,
greg_st_prov_id , state_county_nm
, postal_code , greg_country_id, country_nm, clientsegment_id,
org_or_last_nm_sort, first_nm
FROM PARTY_ADDRESS_SEARCH pas
where SubStr(upper(org_or_last_nm_sort),1,1) = upper('a')) pas
, INDIVIDUALS i
, PARTIES p
where cp.campaign_id = 18
AND cp.campty_stat_id = cps.campty_stat_id
AND cps.campty_stat_id <> 6
AND cp.address_id = pas.address_id
AND cp.pty_id = pas.pty_id
AND exists ( select null from campaigns where campaign_id=18)

AND cp.pty_id = i.pty_id(+)
AND cp.MAIL_TO_PTY_ID= P.PTY_ID
ORDER BY pas.address_line_txt
/

also,

where SubStr(upper(org_or_last_nm_sort),1,1) = upper('a')) pas


could be

where org_or_last_nm_sort like 'A%' or org_or_last_nm_sort like 'a%'

leading *possibly* to using an index on org_or_last_nm_sort if applicable
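The rewrite is safe because the two predicates select exactly the same strings; only the LIKE form leaves the column bare so an index range scan becomes possible. A throwaway Python check of the equivalence claim (illustrative only):

```python
def original(s):
    # SubStr(upper(org_or_last_nm_sort), 1, 1) = upper('a')
    return s[:1].upper() == "A"

def rewrite(s):
    # org_or_last_nm_sort like 'A%' or org_or_last_nm_sort like 'a%'
    return s.startswith("A") or s.startswith("a")

samples = ["Adams", "adams", "Baker", "", "aB", "A"]
print(all(original(s) == rewrite(s) for s in samples))
```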

excellent help

Pauline, June 01, 2006 - 1:43 pm UTC

Tom,
Thanks so much for your great help with analyzing the query and suggestion. I have used
tkprof and see the change from

where SubStr(upper(org_or_last_nm_sort),1,1) = upper('a')

to

where org_or_last_nm_sort like 'A%' or org_or_last_nm_sort like 'a%'

making big difference.


The former is showing in the report as


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.08       1.34          1          6          0          0
Execute      1      0.00       0.02          0          0          0          0
Fetch        4      0.41       6.42        607       6306          0         45
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        6      0.49       7.79        608       6312          0         45


The latter is showing in the report as

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.10       0.10          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        4      0.24       0.44          1       6306          0         45
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        6      0.34       0.55          1       6306          0         45




Tom Kyte
June 01, 2006 - 2:36 pm UTC

not really - you just hit the cache the second time, you missed it the first.

The time was spent doing physical IO for the first query. Nothing has changed.

SQL Query

Sankar Kumar, June 10, 2006 - 12:48 pm UTC

Hi Tom,

Similar to the first question of this page, I have a table like this:

Personid Change_Sequence Class Cls_EffDate Location Loc_EffDate
----------------------------------------------------------------------------
1000 1 FullTime Hawaii
1000 2 FullTime California 1/1/2005
1000 3 PartTime 1/1/2006 California 1/1/2005
1000 4 PartTime 1/1/2006 Texas 10/1/2005
1000 5 FullTime 1/1/2007 Boston 1/1/2007

1. The primary key is (Personid, change_sequence)

2. The effective dates of the first row for each person are null (i.e. for change_sequence = 1)

3. For each row only one column value will be affected (i.e. either Class or Location can change) and the remaining data will be copied from the previous row

4. Both Class and Location can change in the same row only if the effective date is the same. [See where change_sequence = 5]

I am using the following queries for getting Class and Location of the person as per given Effective Date.

Using Correlated SubQuery.

Select Cls.personid, Cls.Class, Loc.Location
from employee Cls, employee Loc
where Cls.personid = Loc.personid
and Cls.change_sequence =
(select max(change_sequence)
from employee
where personid = Cls.Personid
and nvl(Cls_EffDate, to_date('&givenDate','MM/DD/YYYY')) <= to_date('&givenDate','MM/DD/YYYY'))
and Loc.change_sequence =
(select max(change_sequence)
from employee
where personid = Loc.Personid
and nvl(Loc_Effdate, to_date('&givenDate','MM/DD/YYYY')) <= to_date('&givenDate','MM/DD/YYYY'))


Using Analytical functions

Select ECls.personid, ECls.class, ELoc.Location
from (select *
from (select personid,
class,
row_number() over(partition by personid order by change_sequence desc) r
from employee
where nvl(Cls_EffDate, to_date('&givenDate','MM/DD/YYYY')) <= to_date('&givenDate','MM/DD/YYYY')) Cls
where Cls.r = 1) ECls,
(Select *
from (select personid,
Location,
row_number() over(partition by personid order by change_sequence desc) r
from employee
where nvl(Loc_EffDate, to_date('&givenDate','MM/DD/YYYY')) <= to_date('&givenDate','MM/DD/YYYY')) Loc
where Loc.r = 1) ELoc
where ECls.personid = ELoc.personid;



The table has more than 100,000 (1 lakh) rows and more than 12 effective-dated columns. (For example Dept, Dept_EffDate, Status, Status_EffDate, .....)

I am creating a separate instance of the table in the query to get each column's data based on the given effective date.

My question is: is there any way to get all the columns using only a single instance of (EMPLOYEE)?
If not, is there any other good design to handle this type of data?

create table employee
(personid number(10),
change_sequence number(3),
class varchar2(50),
cls_effdate date,
location varchar2(50),
loc_effdate date,
primary key (personid,change_sequence));

insert into employee values(1000,1,'FullTime',null,'Hawaii',null);
insert into employee values(1000,2,'FullTime',null,'California',to_date('1/1/2005','MM/DD/YYYY'));
insert into employee values(1000,3,'PartTime',to_date('1/1/2006','MM/DD/YYYY'),'California',to_date('1/1/2005','MM/DD/YYYY'));
insert into employee values(1000,4,'PartTime',to_date('1/1/2006','MM/DD/YYYY'),'Texas',to_date('10/1/2005','MM/DD/YYYY'));
insert into employee values(1000,5,'FullTime',to_date('1/1/2007','MM/DD/YYYY'),'Boston',to_date('1/1/2007','MM/DD/YYYY'));


Thanks in Advance
Sankar
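A procedural restatement of what both queries compute may help frame the question: for each effective-dated column, take the value from the row with the highest change_sequence whose effective date (treating NULL as "in effect from the start", per the NVL) is on or before the given date. A Python sketch against the sample rows from the question (illustration only):

```python
from datetime import date

# (personid, change_seq, class, cls_effdate, location, loc_effdate)
rows = [
    (1000, 1, "FullTime", None, "Hawaii", None),
    (1000, 2, "FullTime", None, "California", date(2005, 1, 1)),
    (1000, 3, "PartTime", date(2006, 1, 1), "California", date(2005, 1, 1)),
    (1000, 4, "PartTime", date(2006, 1, 1), "Texas", date(2005, 10, 1)),
    (1000, 5, "FullTime", date(2007, 1, 1), "Boston", date(2007, 1, 1)),
]

def as_of(person, given):
    def pick(val_idx, eff_idx):
        # NVL(effdate, given) <= given : NULL means "effective from the start"
        eligible = [r for r in rows
                    if r[0] == person and (r[eff_idx] or given) <= given]
        return max(eligible, key=lambda r: r[1])[val_idx]
    return pick(2, 3), pick(4, 5)

print(as_of(1000, date(2005, 6, 1)))  # ('FullTime', 'California')
```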




Tom Kyte
June 11, 2006 - 11:50 am UTC

wouldn't you just need the record where the sequence is the maximum for that person???

suppose record 5 did not exist, wouldn't you just want record 4 - even though both values were not modified.

create table

Tony, June 12, 2006 - 7:52 am UTC

Tom,

I want to create one table from another table with partition and indexes.

like if I have a table emp which is a partitioned table having 5 indexes.

I want to create a table emp_temp which will be a mirror image of the emp table.

create table emp_temp as select * from emp;

this script will not generate the partitions and indexes.

Tom Kyte
June 12, 2006 - 10:04 am UTC

what script, we are not even talking about creating tables here.

read about dbms_metadata, it is something you'll likely be able to use as you develop your script to do this.

rowid into column

A reader, June 12, 2006 - 12:03 pm UTC

Hi tom

How can we capture a rowid into a column of another table for debugging purposes?

Tom Kyte
June 13, 2006 - 10:35 am UTC

anyway you want to???

sorry, but this is too vague to answer. What rowid do you want to capture, under what circumstances and how does this help "debugging"

SQL Query

A reader, June 12, 2006 - 12:58 pm UTC

Hi,

Sorry Tom, I would have added these two rows too.

insert into employee values(1000,6,'COBRA-Class',to_date('1/1/2005','MM/DD/YYYY'),'Boston',to_date('1/1/2007','MM/DD/YYYY'));

insert into employee values(1000,7,'COBRA-Terminated',to_date('5/1/2006','MM/DD/YYYY'),'Boston',to_date('1/1/2007','MM/DD/YYYY'));

Now lets say givenDate = 1/1/2005

Then the output would be

PERSONID CLASS LOCATION CLS_CHANGE_SEQ LOC_CHANGE_SEQ
------------------------------------------------------------------------------
1000 COBRA-Class California 6 3


when givenDate = 1/1/2006 the output would be

PERSONID CLASS LOCATION CLS_CHANGE_SEQ LOC_CHANGE_SEQ
------------------------------------------------------------------------------
1000 COBRA-Class Texas 6 4


when givenDate = 1/1/2007 the output would be

PERSONID CLASS LOCATION CLS_CHANGE_SEQ LOC_CHANGE_SEQ
------------------------------------------------------------------------------
1000 COBRA-Terminated Boston 7 7

Thanks,
Sankar

Tom Kyte
June 13, 2006 - 10:41 am UTC

explain what I'm looking at here - basically "start over" (I cannot patch together everything spread all over the place) and explain using TEXT what your goal is.

create table

Tony, June 13, 2006 - 4:47 am UTC

Hi Tom,

Thanks a lot for info.

But what I am looking for is creating a table from another table, with its partitions and indexes.

I created the DB link and now I am creating the same tables with their contents.

Like, in the SCOTT schema I have the product table with 1 index and partitioned on mis_date.

Now I am creating the table product_back in the testing schema,
but product_back should contain all the data, with the same partitions and indexes as the product table in the scott schema.

Like this I have 1000+ tables. So I want to write a SQL statement that will generate each table with all of its partitions and indexes.

Like
create table testing.product_back as select * from scott.product;

Can you please help in writing this.

Many thanks in advance.

Tony



Tom Kyte
June 13, 2006 - 12:20 pm UTC

use dbms_metadata


that is all I can say and all I'll keep saying over and over. create table as select does not do that.

There are tools in graphical interfaces like Enterprise manager that can do a "copy table and make it look like that table"

But - read about dbms_metadata.

SQL Queries - Different Syntax, Same Output, Plan Variations.

Vikas Sangar, June 22, 2006 - 2:51 am UTC

Dear Mr. Kyte

Please refer to the Queries and their respective Execution plans given below.

A) SELECT * FROM ACCOUNTS
SELECT STATEMENT, GOAL = CHOOSE Cost=3 Cardinality=458 Bytes=67784
TABLE ACCESS FULL Object name=ACCOUNTS Cost=3 Cardinality=458 Bytes=67784

B) SELECT * FROM ACCOUNTS ORDER BY ACCOUNTID
SELECT STATEMENT, GOAL = CHOOSE Cost=14 Cardinality=458 Bytes=67784
SORT ORDER BY Cost=14 Cardinality=458 Bytes=67784
TABLE ACCESS FULL Object name=ACCOUNTS Cost=3 Cardinality=458 Bytes=67784

c) SELECT * FROM ACCOUNTS ORDER BY ACCOUNTID ASC
SELECT STATEMENT, GOAL = CHOOSE Cost=14 Cardinality=458 Bytes=67784
SORT ORDER BY Cost=14 Cardinality=458 Bytes=67784
TABLE ACCESS FULL Object name=ACCOUNTS Cost=3 Cardinality=458 Bytes=67784

D) SELECT * FROM ACCOUNTS WHERE 1 = 1 ORDER BY 1
SELECT STATEMENT, GOAL = CHOOSE Cost=14 Cardinality=458 Bytes=67784
SORT ORDER BY Cost=14 Cardinality=458 Bytes=67784
TABLE ACCESS FULL Object name=ACCOUNTS Cost=3 Cardinality=458 Bytes=67784

All these queries return the same number (count) of records in a similar format. What I want to know here is:


1. Why are the above queries returning the same records but with different plan output?

2. I have learnt that by default Oracle displays its output sorted in ascending order; then why are queries B & C different from A in their execution scheme? Is Oracle not internally applying ASC to my query A? If yes, why does it object to my adding explicit "ORDER BY" and "ASC" clauses?

3. Why do queries B, C, & D have the same execution plan?

4. Is query D syntactically right? Why does query D execute and display output with "ORDER BY 1" and not "ORDER BY 2"?

Take care, regards.
Vikas.



Tom Kyte
June 22, 2006 - 12:19 pm UTC

1) because they do entirely DIFFERENT things????

is that not "obvious"?

query 1 - select * from t;
query 2 - select * from t order by something;


entirely different queries - totally different - radically different, not even close to being the same. Night and Day different.


2) you have learnt "wrong". By default, results from a query are returned in any order we like. If you use order by column, the default sort order for column is in fact ASCENDING.

Unless and until you add "order by" to a query, you cannot say ANYTHING about the order of the rows returned. Well, short of "you cannot rely on the order of the rows being returned"

Until you have order by - NO ASSUMPTION CAN BE MADE ABOUT THE ORDER OF ROWS.


3) order by 1 - by definition - orders by the first selected column in the SELECT list.

Using Cursor

Ramu, June 29, 2006 - 5:26 pm UTC

Hi,

my table has above 15 million records and I want to delete the duplicate records from it. Can anyone suggest whether writing PL/SQL (a cursor) will be faster, or simply a DELETE statement using an analytic PARTITION BY clause? Please suggest a feasible solution.

Regards
Ramu

Tom Kyte
June 29, 2006 - 6:51 pm UTC

delete from t
 where rowid in ( select rid
                    from (select rowid rid,
                                 row_number() over (partition by KEY_COLUMNS
                                                    order by rowid) rn
                            from t )
                   where rn <> 1 );

single sql statement - you have to find the duplicates (the subquery) and then delete them - that does it.
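The same keep-one-row-per-key pattern runs against SQLite, which also has an implicit rowid (window functions need SQLite 3.25+; key_col below stands in for your real key columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (key_col text)")
conn.executemany("insert into t values (?)",
                 [("a",), ("a",), ("b",), ("b",), ("b",), ("c",)])

conn.execute("""
    delete from t
     where rowid in ( select rid
                        from (select rowid rid,
                                     row_number() over (partition by key_col
                                                        order by rowid) rn
                                from t)
                       where rn <> 1 )
""")
rows = conn.execute("select key_col from t order by key_col").fetchall()
print(rows)  # one survivor per key
```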

SQL Queries - Different Syntax, Same Output, Plan Variations.

Vikas, June 30, 2006 - 2:26 am UTC

Dear Mr. Kyte

Thanks a lot. It really helped me refine my concepts (especially the sort-related stuff).

Take care.
Vikas.

SQL-query

gunasekaran Radhika, June 30, 2006 - 3:13 am UTC

Tom please answer me,
Film {title(pk),director,language,genre,releaseyear,price}
video{videono(pk)title(fk), location, maxdaysloan, overdue}
customer{name(pk),address,maxnovideosforloan}
loan{name(fk),videono(fk),dateout,returndate}
Query: list all customers (name, address) who have videos overdue, together with the titles of the videos. A video is considered overdue if it was not returned and it has been on loan for more than maxdaysloan.



Tom Kyte
June 30, 2006 - 7:22 am UTC

do I get your grade too?

Split the column and create multiple rows

ST, June 30, 2006 - 3:45 pm UTC

How do we split the string in column C to produce multiple records? The first characters ('ABC') before the ';' should be concatenated with the characters ('A200') after the space, up to the next ';'. Similarly, the second set of characters, 'ABCD', which comes after the first ';', should be concatenated with the string 'A1001' that comes after the first ';' following the space. Here is the sample data and the desired output. Note that line 2 has four different values, with a space before 'A201'.

create table test1 (
a number,
b varchar2(10),
c varchar2(20));

create table test2 (
a number,
b varchar2(10),
c varchar2(20));

insert into test1 values(1,1001,'ABC;ABCD A200;A1001');

insert into test1 values(1,'1002','ABD;BCD;CDEF;DEFG A201;B102;C1003;D4001');
commit;


Table TEST1
A B C
- ---- -------------------
1 1001 ABC;ABCD A200;A1001
1 1002 ABD;BCD;CDEF;DEFG A201;B102;C1003;D4001



Should be loaded into table TEST2 as shown below.

A B C
- ---- --------------
1 1001 ABC A200
1 1001 ABCD A1001
1 1002 ABD A201
1 1002 BCD B102
1 1002 CDEF C1003
1 1002 DEFG D4001




Tom Kyte
June 30, 2006 - 5:09 pm UTC

I added "d" to test2 - you don't want to create yet another string problem right - we'll keep them as two fields. 20 is a number I picked, you can increase if needed:

ops$tkyte@ORA10GR2> insert into test2(a,b,c,d)
  2  with data
  3  as
  4  (select level r
  5     from dual
  6  connect by level <= 20
  7  ),
  8  t
  9  as
 10  (
 11  select a, b, ';'||substr(c,1,instr(c,' ')-1)||';' part1, ';'||substr(c,instr(c,' ')+1)||';' part2,
 12         (length(c)-length(replace(c,';')))/2+1 r
 13    from test1
 14  )
 15  select t.a, t.b,
 16         substr( part1, instr(part1,';',1,data.r)+1, instr(part1,';',1,data.r+1)- instr(part1,';',1,data.r)-1 ) p1,
 17         substr( part2, instr(part2,';',1,data.r)+1, instr(part2,';',1,data.r+1)- instr(part2,';',1,data.r)-1 ) p2
 18    from data, t
 19   where data.r <= t.r
 20   order by t.a, t.b, data.r
 21  /

6 rows created.

ops$tkyte@ORA10GR2> select * from test2;

         A B          C                    D
---------- ---------- -------------------- --------------------
         1 1001       ABC                  A200
         1 1001       ABCD                 A1001
         1 1002       ABD                  A201
         1 1002       BCD                  B102
         1 1002       CDEF                 C1003
         1 1002       DEFG                 D4001

6 rows selected.
 

Split the column and create multiple rows

ST, June 30, 2006 - 5:18 pm UTC

Thank you for your response.
Creating an additional column does not serve my purpose. I gave just two records as an example, but my data may contain more than those combinations in column C.
So when there are, say, more than 10 different combinations in column C, I would need to create that many different columns, which is not what I am looking for. Is there any way, using the SUBSTR and INSTR functions, to have them concatenated instead? Appreciate your help.



Tom Kyte
July 01, 2006 - 7:38 am UTC

you do not see how you could concatenate them yourself easily instead of selecting out p1 and p2?

Just || them
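That is, the same query as above with the SELECT list changed so that the two pieces land concatenated in the single column C (a sketch only - the column width of C may need to be increased to hold the combined string):

insert into test2 (a, b, c)
with data as
(select level r from dual connect by level <= 20),
t as
(select a, b,
        ';'||substr(c,1,instr(c,' ')-1)||';' part1,
        ';'||substr(c,instr(c,' ')+1)||';' part2,
        (length(c)-length(replace(c,';')))/2+1 r
   from test1)
select t.a, t.b,
       substr( part1, instr(part1,';',1,data.r)+1,
               instr(part1,';',1,data.r+1)-instr(part1,';',1,data.r)-1 )
       || ' ' ||
       substr( part2, instr(part2,';',1,data.r)+1,
               instr(part2,';',1,data.r+1)-instr(part2,';',1,data.r)-1 ) c
  from data, t
 where data.r <= t.r
 order by t.a, t.b, data.r;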

Split the column and create multiple rows

ST, June 30, 2006 - 6:23 pm UTC

Sorry for posting this under another review. But it is still the same request: splitting the string based on separators and concatenating the pieces when more than three occurrences exist within the column.

Tom Kyte
July 01, 2006 - 7:47 am UTC

I believe you have everything you need above right? It shows the technique, you can concatenate output any which way you like.

Query Time

Jal, July 01, 2006 - 2:51 am UTC

CREATE TABLE problem (
KEY NUMBER,
KEYNAME VARCHAR2 (4000),
VALUE1 VARCHAR2 (4000),
VALUE2 VARCHAR2 (4000),
VALUE3 VARCHAR2 (4000) ) ;

this table contains 1,236,742+ rows and I want to run the following query on it, which is taking too much time:

SELECT value2
FROM weo_itr_gen_param_drm
WHERE keyname='Action'
AND value1='Sent to HO'
AND KEY=:1
AND value2= (
SELECT MIN(value2)
FROM weo_itr_gen_param_drm
WHERE keyname='Action' AND
VALUE1='Sent to HO' AND
KEY = :1
)
Please guide me as to what should be done to increase speed. Kindly note there are currently separate indexes on:
1) key, 2) keyname, 3) value1

Explain Plan

Operation Object Name Rows Bytes Cost TQ In/Out PStart PStop

SELECT STATEMENT Hint=CHOOSE
TABLE ACCESS BY INDEX ROWID WEO_ITR_GEN_PARAM_DRM
AND-EQUAL
INDEX RANGE SCAN WEO_ITR_GENPRM_KEY_INDX
INDEX RANGE SCAN IDX_VALUE1
INDEX RANGE SCAN WEO_ITR_GENPRM_KEY_DRM_INDX
INDEX RANGE SCAN IDX_VALUE2
SORT AGGREGATE
TABLE ACCESS BY INDEX ROWID WEO_ITR_GEN_PARAM_DRM
AND-EQUAL
INDEX RANGE SCAN WEO_ITR_GENPRM_KEY_INDX
INDEX RANGE SCAN IDX_VALUE1
INDEX RANGE SCAN WEO_ITR_GENPRM_KEY_DRM_INDX


Tom Kyte
July 01, 2006 - 7:54 am UTC

Isn't:

SELECT value2
FROM weo_itr_gen_param_drm
WHERE keyname='Action'
AND value1='Sent to HO'
AND KEY=:1
AND value2= (
SELECT MIN(value2)
FROM weo_itr_gen_param_drm
WHERE keyname='Action' AND
VALUE1='Sent to HO' AND
KEY = :1
)

Just a complex way pretty much to say:

select min(value2)
from weo_itr_gen_param_drm
WHERE keyname='Action'
AND VALUE1='Sent to HO'
AND KEY = :1


and a CONCATENATED index would be called for - seems you have a bunch of them being used.

create index t_idx on t(keyname,value1,key,value2);



another query

hash, July 01, 2006 - 2:59 pm UTC

Hi,
Maybe this query is not related to this thread, but I'm sure I'll get an answer.

Consider this:

SQL> create table t (
  2  ca_no varchar2(6),
  3  ltr_no varchar2(30),
  4  ltr_date date);

--case 1: the three rows are identical except for ltr_date
insert into t values('01/04','30/27/Stores/DH-01/04', to_date('16/12/2005','dd/mm/yyyy'));
insert into t values('01/04','30/27/Stores/DH-01/04', to_date('20/12/2005','dd/mm/yyyy'));
insert into t values('01/04','30/27/Stores/DH-01/04', to_date('25/12/2005','dd/mm/yyyy'));

--case 2: the ltr_no values are different
insert into t values('12/05','30/27/Stores/DH-12/05/1',to_date('25/05/2005','dd/mm/yyyy'));
insert into t values('12/05','30/27/Stores/DH-12/05/2',to_date('25/05/2005','dd/mm/yyyy'));

--case 3: the rows are identical
insert into t values('20/04','30/27/Stores/DH-20/04',to_date('17/07/2005','dd/mm/yyyy'));
insert into t values('20/04','30/27/Stores/DH-20/04',to_date('17/07/2005','dd/mm/yyyy'));

Now I want a string value for each ca_no like:

for case 1, I want:
our letter number 30/27/Stores/DH-01/04 dated 16/12/2005, even number dated 20/12/2005 and even number dated 25/12/2005.

for case 2, I want:
our letter number 30/27/Stores/DH-12/05/1 and 30/27/Stores/DH-12/05/2 both dated 25/05/2005.

for case 3:
our letter number 30/27/Stores/DH-20/04 and even number both dated 17/07/2005.

The number of rows can vary from 1 to 4 for each ca_no and the combination of ltr_no and ltr_date can be of any number.

Can I achieve this in SQL or PL/SQL? I'm using 8.0.5.

Thanks 
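With no analytic functions in 8.0.5, a PL/SQL function is the natural route. A rough sketch of the idea only (letter_text is a made-up name; the "even number" wording for repeated letter numbers is approximated, and the "both dated" phrasing of cases 2 and 3 is left as an exercise):

create or replace function letter_text( p_ca_no in varchar2 )
return varchar2
is
    l_result  varchar2(2000);
    l_prev_no t.ltr_no%type;
begin
    for rec in ( select ltr_no,
                        to_char(ltr_date,'dd/mm/yyyy') d
                   from t
                  where ca_no = p_ca_no
                  order by ltr_date, ltr_no )
    loop
        if l_result is null then
            l_result := 'our letter number '||rec.ltr_no||' dated '||rec.d;
        elsif rec.ltr_no = l_prev_no then
            -- repeated letter number: refer back to it as "even number"
            l_result := l_result||', even number dated '||rec.d;
        else
            l_result := l_result||' and '||rec.ltr_no||' dated '||rec.d;
        end if;
        l_prev_no := rec.ltr_no;
    end loop;
    return l_result;
end;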

Split the column and create multiple rows

ST, July 03, 2006 - 9:24 am UTC

Tom,
The solution for splitting the data is perfect and I appreciate your help. Just a quick question: instead of loading the data into a temp table and then manipulating it, can this be done using SQL*Loader when we have the data in the same format, like

1 1001 ABC;ABCD A200;A1001
1 1002 ABD;BCD;CDEF;DEFG A201;B102;C1003;D4001
.......
.......

in a csv file.

Tom Kyte
July 07, 2006 - 6:49 pm UTC

two words for you:

external tables

Split the column and create multiple rows

ST, July 03, 2006 - 9:34 am UTC

I gave the wrong format above. Here is the correct format that needs to be loaded using SQL*Loader.

1,1001,ABC;ABCD A200;A1001
1,1002,ABD;BCD;CDEF;DEFG A201;B102;C1003;D4001
.....
.....
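A sketch of what "external tables" (9i and up) would look like for that comma-separated file - data_dir and test1.csv are made-up names here; the directory object must exist and point at the directory actually holding the file:

create table test1_ext (
   a number,
   b varchar2(10),
   c varchar2(20)
)
organization external (
   type oracle_loader
   default directory data_dir
   access parameters (
      records delimited by newline
      fields terminated by ','
   )
   location ( 'test1.csv' )
)
reject limit unlimited;

The INSERT ... WITH query shown earlier can then select straight FROM test1_ext - no intermediate load step at all.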

Select just after where clause in PL/SQL

Ravi Kumar, July 03, 2006 - 10:49 am UTC

I am trying to write this statement. It works in SQL but not in PL/SQL.

SQL> insert into currency select * from currency where (select 5 from dual) between 10 and 20;

0 rows created.

Now in a PLSQL Block

SQL> begin
  2  insert into currency select * from currency where (select 5 from dual) between 10 and 20;
  3  end;
  4  /
insert into currency select * from currency where (select 5 from dual) between 10 and 20;
                                                   *
ERROR at line 2:
ORA-06550: line 2, column 52:
PLS-00103: Encountered the symbol "SELECT" when expecting one of the following:
( - + mod not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string>
ORA-06550: line 2, column 72:
PLS-00103: Encountered the symbol "BETWEEN" when expecting one of the
following:
; return returning and or
ORA-06550: line 3, column 1:
PLS-00103: Encountered the symbol "END"


Can you please suggest what I can do to make it run in PL/SQL? 

Tom Kyte
July 07, 2006 - 6:58 pm UTC

upgrade to software written this century?  I believe you must be on 8i (not stated anywhere) and in 8i, using scalar subqueries in PL/SQL was a non-starter (not supported)

ops$tkyte@ORA10GR2> create table t ( x varchar2(1) );

Table created.



ops$tkyte@ORA10GR2> insert into t select * from dual where (select 5 from dual) between 10 and 20;

0 rows created.

ops$tkyte@ORA10GR2> begin insert into t select * from dual where (select 5 from dual) between 10 and 20; end;
  2  /

PL/SQL procedure successfully completed.



Of course your query is the same as:

insert into currency
select * 
  from currency
 where EXISTS ( select null
                  from dual
                 where 5 between 10 and 20 );

so perhaps you can simply rewrite using an EXISTS. 

Pat, July 03, 2006 - 6:13 pm UTC

create table pub1
(stdt date,
eddt date,
pubdt date,
amt number)

insert into pub1
values('01-MAY-2006','01-MAY-2006','08-MAY-2006',6.25);
insert into pub1
values('02-MAY-2006','02-MAY-2006','08-MAY-2006',6.38);
insert into pub1
values('03-MAY-2006','03-MAY-2006','08-MAY-2006',6.12);
insert into pub1
values('04-MAY-2006','04-MAY-2006','08-MAY-2006',6.05);
insert into pub1
values('05-MAY-2006','05-MAY-2006','08-MAY-2006',6.00);
insert into pub1
values('06-MAY-2006','06-MAY-2006',NULL,6.42);
insert into pub1
values('07-MAY-2006','07-MAY-2006',NULL,6.25);
insert into pub1
values('08-MAY-2006','08-MAY-2006',NULL,6.80);
insert into pub1
values('09-MAY-2006','09-MAY-2006',NULL,6.45);
insert into pub1
values('10-MAY-2006','10-MAY-2006',NULL,6.98);
insert into pub1
values('11-MAY-2006','11-MAY-2006',NULL,6.45);
insert into pub1
values('12-MAY-2006','12-MAY-2006',NULL,6.11);
insert into pub1
values('13-MAY-2006','13-MAY-2006',NULL,6.55);
insert into pub1
values('14-MAY-2006','14-MAY-2006',NULL,6.12);
insert into pub1
values('15-MAY-2006','15-MAY-2006','19-MAY-2006',6.45);
insert into pub1
values('16-MAY-2006','16-MAY-2006','19-MAY-2006',6.45);
insert into pub1
values('17-MAY-2006','17-MAY-2006','19-MAY-2006',6.45);
insert into pub1
values('18-MAY-2006','18-MAY-2006','19-MAY-2006',6.12);
insert into pub1
values('19-MAY-2006','19-MAY-2006','24-MAY-2006',6.91);
insert into pub1
values('20-MAY-2006','20-MAY-2006',NULL,6.72);
insert into pub1
values('21-MAY-2006','21-MAY-2006',NULL,6.34);
insert into pub1
values('22-MAY-2006','22-MAY-2006',NULL,6.78);
insert into pub1
values('23-MAY-2006','23-MAY-2006',NULL,6.28);
insert into pub1
values('24-MAY-2006','24-MAY-2006',NULL,6.38);
insert into pub1
values('25-MAY-2006','25-MAY-2006',NULL,6.18);
insert into pub1
values('26-MAY-2006','26-MAY-2006',NULL,6.72);
insert into pub1
values('27-MAY-2006','27-MAY-2006',NULL,6.56);
insert into pub1
values('28-MAY-2006','28-MAY-2006',NULL,6.24);
insert into pub1
values('29-MAY-2006','29-MAY-2006',NULL,6.43);
insert into pub1
values('30-MAY-2006','30-MAY-2006','05-jun-2006',6.22);
insert into pub1
values('31-MAY-2006','31-MAY-2006','05-jun-2006',6.44);
insert into pub1
values('01-JUN-2006','30-JUN-2006',NULL,6.72);

SELECT * FROM PUB1;
STDT EDDT PUBDT AMT
1-May-2006 1-May-2006 8-May-2006 6.25
2-May-2006 2-May-2006 8-May-2006 6.38
3-May-2006 3-May-2006 8-May-2006 6.12
4-May-2006 4-May-2006 8-May-2006 6.05
5-May-2006 5-May-2006 8-May-2006 6
6-May-2006 6-May-2006 [NULL] 6.42
7-May-2006 7-May-2006 [NULL] 6.25
8-May-2006 8-May-2006 [NULL] 6.8
9-May-2006 9-May-2006 [NULL] 6.45
10-May-2006 10-May-2006 [NULL] 6.98
11-May-2006 11-May-2006 [NULL] 6.45
12-May-2006 12-May-2006 [NULL] 6.11
13-May-2006 13-May-2006 [NULL] 6.55
14-May-2006 14-May-2006 [NULL] 6.12
15-May-2006 15-May-2006 19-May-2006 6.45
16-May-2006 16-May-2006 19-May-2006 6.45
17-May-2006 17-May-2006 19-May-2006 6.45
18-May-2006 18-May-2006 19-May-2006 6.12
19-May-2006 19-May-2006 24-May-2006 6.91
20-May-2006 20-May-2006 [NULL] 6.72
21-May-2006 21-May-2006 [NULL] 6.34
22-May-2006 22-May-2006 [NULL] 6.78
23-May-2006 23-May-2006 [NULL] 6.28
24-May-2006 24-May-2006 [NULL] 6.38
25-May-2006 25-May-2006 [NULL] 6.18
26-May-2006 26-May-2006 [NULL] 6.72
27-May-2006 27-May-2006 [NULL] 6.56
28-May-2006 28-May-2006 [NULL] 6.24
29-May-2006 29-May-2006 [NULL] 6.43
30-May-2006 30-May-2006 5-Jun-2006 6.22
31-May-2006 31-May-2006 5-Jun-2006 6.44
1-Jun-2006 30-Jun-2006 [NULL] 6.72

Output i want
STDT EDDT PUBDT AMT
1-May-2006 1-May-2006 8-May-2006 6.8
2-May-2006 2-May-2006 8-May-2006 6.8
3-May-2006 3-May-2006 8-May-2006 6.8
4-May-2006 4-May-2006 8-May-2006 6.8
5-May-2006 5-May-2006 8-May-2006 6.8
6-May-2006 6-May-2006 [NULL] 6.42
7-May-2006 7-May-2006 [NULL] 6.25
8-May-2006 8-May-2006 [NULL] 6.8
9-May-2006 9-May-2006 [NULL] 6.45
10-May-2006 10-May-2006 [NULL] 6.98
11-May-2006 11-May-2006 [NULL] 6.45
12-May-2006 12-May-2006 [NULL] 6.11
13-May-2006 13-May-2006 [NULL] 6.55
14-May-2006 14-May-2006 [NULL] 6.12
15-May-2006 15-May-2006 19-May-2006 6.91
16-May-2006 16-May-2006 19-May-2006 6.91
17-May-2006 17-May-2006 19-May-2006 6.91
18-May-2006 18-May-2006 19-May-2006 6.91
19-May-2006 19-May-2006 24-May-2006 6.38
20-May-2006 20-May-2006 [NULL] 6.72
21-May-2006 21-May-2006 [NULL] 6.34
22-May-2006 22-May-2006 [NULL] 6.78
23-May-2006 23-May-2006 [NULL] 6.28
24-May-2006 24-May-2006 [NULL] 6.38
25-May-2006 25-May-2006 [NULL] 6.18
26-May-2006 26-May-2006 [NULL] 6.72
27-May-2006 27-May-2006 [NULL] 6.56
28-May-2006 28-May-2006 [NULL] 6.24
29-May-2006 29-May-2006 [NULL] 6.43
30-May-2006 30-May-2006 5-Jun-2006 6.72
31-May-2006 31-May-2006 5-Jun-2006 6.72
1-Jun-2006 30-Jun-2006 [NULL] 6.72


I want the amt to be determined as follows:
if the pubdt is null, the amt stays the same;
for each non-null pubdt, get the amt of the row whose stdt/eddt range contains that pubdt,
working in stdt sort order.
Please see the data: dates 15 to 18 get the amt from the 19th,
and the 19th gets the amt from the 24th.
Is it possible to write a SQL statement for this?

Thanks


Functional, not flashy.

Tyler, July 04, 2006 - 1:25 pm UTC

There is no shortage of assumptions being made here, but for the data you've provided, here's a solution :)

SELECT
PUBBY.STDT,
PUBBY.EDDT,
PUBBY.PUBDT,
CASE
WHEN PUBBY.AMT IS NULL
THEN
( SELECT p3.AMT
FROM (
SELECT
p2.AMT,
MAX(P2.EDDT) OVER () as max_date,
p2.EDDT
FROM PUB1 P2) P3
WHERE p3.EDDT = max_date)
ELSE
PUBBY.AMT
END AS AMT
FROM (
SELECT
P.STDT,
P.EDDT,
P.PUBDT,
CASE
WHEN P.PUBDT IS NOT NULL
THEN
( SELECT P1.AMT
FROM PUB1 P1
WHERE P1.EDDT = P.PUBDT)
ELSE
P.AMT
END AS AMT
FROM PUB1 P) PUBBY;


I think this works too...

Chris, July 04, 2006 - 2:50 pm UTC

SELECT stdt,
eddt,
pubdt,
(SELECT amt
FROM pub1 p4
WHERE p4.eddt = p3.pubdt2) amt
FROM (SELECT stdt,
eddt,
pubdt,
NVL2 (pubdt,
(SELECT MIN (p2.eddt)
FROM pub1 p2
WHERE p2.eddt >= p1.pubdt),
eddt
) pubdt2,
amt
FROM pub1 p1) p3


Pat, July 04, 2006 - 10:21 pm UTC

Thanks for all your responses.
I forgot to mention that I am actually looking for a solution without any correlated subqueries.

Tom Kyte
July 07, 2006 - 9:13 pm UTC

laugh out loud - only thing I can say to that is "why - what possible technical reason beyond 'teacher said no correlated subquery'" could there POSSIBLY be?



Not sure why....but................

Tyler Forsyth, July 05, 2006 - 12:22 am UTC

Hey Pat, not sure why you're looking for a query w/o a correlated subquery, but at the very least it gave me something to do as I was bored.

SELECT
p1.STDT,
p1.EDDT,
p1.PUBDT,
NVL(
CASE
WHEN p1.pubdt IS NOT NULL
THEN
p2.amt
ELSE
p1.amt
END
, FIRST_VALUE (p1.amt) OVER (ORDER BY p1.STDT DESC NULLS LAST)
) AS amt
FROM pub1 p1, pub1 p2
WHERE p1.PUBDT = p2.EDDT (+)
ORDER BY p1.STDT ASC
/


SQL query

yadev, July 05, 2006 - 2:01 am UTC

I have a table data like this,
    SQL> select sample_dt,substr(event,1,30),total_waits from perf_data
      2  /
    
    SAMPLE_DT           SUBSTR(EVENT,1,30)             TOTAL_WAITS
    ------------------- ------------------------------ -----------
    04-07-2006:17:45:17 latch free                               3510740904
    04-07-2006:17:45:53 latch free                               3510741037
    04-07-2006:17:51:28 latch free                               3510741842
    04-07-2006:17:45:17 buffer busy waits                         4375816
    04-07-2006:17:45:53 buffer busy waits                         4375819
    04-07-2006:17:51:28 buffer busy waits                         4375830
    
    6 rows selected.
    
    The values of the column "TOTAL_WAITS" are cumulative; I need to subtract
    the previous value from the current value to get the results.
    
    For example:-
    total waits for latchfree event at 04-07-2006:17:45:53 is (3510741037 -
    3510740904)
    total waits for  buffer busy waits at 04-07-2006:17:51:28 is (4375830 - 4375819
    )
 

Tom Kyte
July 08, 2006 - 7:45 am UTC

select sample_dt, lag(sample_dt) over (partition by event order by sample_dt),
event,
total_waits-lag(total_waits) over (partition by event order by sample_dt)
from perf_data;




Query

Chris, July 05, 2006 - 10:32 am UTC

Yadev,

A CREATE TABLE script and INSERT script would be most helpful. Barring that, this query should help you on your way. It makes use of the analytic function LAG() which I highly suggest you read about in the online documentation if you haven't already.

SELECT sample_dt,
event,
total_waits,
LAG (total_waits) OVER (PARTITION BY event ORDER BY sample_dt) prev_wait,
total_waits - NVL (LAG (total_waits) OVER (PARTITION BY event ORDER BY sample_dt), total_waits) diff
FROM perf_data


SQL query

yadev, July 06, 2006 - 8:53 am UTC

Hi Chris,

Thanks for the query..

Regards
Yadev

A reader, July 06, 2006 - 11:22 am UTC


Error On Index

Jal, July 07, 2006 - 8:38 am UTC

create index t_idx on t(keyname,value1,key,value2);

fails with error
ORA-01450: maximum key length (6398) exceeded

Tom Kyte
July 08, 2006 - 10:49 am UTC

yes? and??

Order of the query...

Arangaperumal G, July 07, 2006 - 10:01 am UTC

Hi Tom,

I have query like this
select * from emp where empno in (5,3,6,1,8,10)

I got the records back in this order --> 10,8,1,6,3,5.
We tested on two servers and got the same; when we moved to the production server we got this order: 5,3,6,1,8,10.

But I want the same order: 5,3,6,1,8,10.

Why is the answer different?
Please advise me how to get the same order.







Tom Kyte
July 08, 2006 - 10:51 am UTC

no no NO!!!!!

unless and until you have an ORDER BY statement, you can have NO ASSUMPTIONS ABOUT THE ORDER OF THE ROWS.

You have NO ORDER BY.

Therefore, you can make ZERO assumptions about the order of rows returned. They'll come back in any order we darn well feel like returning them to you.


Period.

NO ORDER BY = NO ASSUMPTIONS AS TO THE ORDER.


I really hope you are not concatenating inlists like that either - you don't really have literals in there, do you???????

Order and sort

Michel Cadot, July 08, 2006 - 8:23 am UTC

Hi Arangaperumal,

If you want a specific order there is only one way: give an ORDER BY clause.

Regards
Michel


Tom Kyte
July 08, 2006 - 8:55 pm UTC

say it LOUDER and over and over again!!

Always amazed at this one - unless and until there is an order by, hey - THERE IS NO IMPLIED ORDER WHATSOEVER...

Using ROWID in SQL

RN, July 13, 2006 - 4:11 pm UTC

Tom,

Is using ROWID in SQL queries of any benefit?

Tom Kyte
July 13, 2006 - 5:30 pm UTC

sure. yes, there are times when it is :)

What is minimum and maximum cardinality

mal, July 17, 2006 - 1:37 am UTC


Tom Kyte
July 17, 2006 - 1:23 pm UTC

zero and infinity?

Very Useful

srinivasa rao bachina, July 17, 2006 - 5:59 am UTC

Hi Tom,

Can't we use a select statement inside a DECODE? My requirement is:
select decode(nvl(acc_name,'NO'),'NO',(select account_name from table_a),acc_name) from table_b
If table_b.acc_name is null then I need to select table_a.account_name instead.

How can we get this using a single SQL statement?

Tom Kyte
July 17, 2006 - 2:40 pm UTC

sure you can, looks like you just need nvl however:

tkyte@ORCL> select nvl(dummy,(select ename from scott.emp where rownum=1)) from dual;

NVL(DUMMY,
----------
X


tkyte@ORCL> select nvl(null,(select ename from scott.emp where rownum=1)) from dual;

NVL(NULL,(
----------
SMITH

Double Quotes in a column Header

Maverick, July 18, 2006 - 1:26 pm UTC

Tom, Here is my requirement. I need an output to show column header in double quotes
eg:

something like this
select empno,ename,deptno "deptno datatype="int"" from emp

Empno Ename deptno datatype="INT"
------ ------- ---------------------
1234 John 20
2345 XXXX 30

How do I achieve this? I could get a single quote around INT but not double quotes.

Thanks,

Tom Kyte
July 19, 2006 - 8:36 am UTC

does this suffice:

tkyte@ORCL> column x format a30 heading "deptno datatype=""int"""
tkyte@ORCL> select dummy x from dual;

deptno datatype="int"
------------------------------
X


Column header

Michel Cadot, July 19, 2006 - 8:45 am UTC

Some funny things on column header (all versions) :)

SQL> select 1 as from dual;
       1AS
----------
         1

1 row selected.

SQL> col dummy format a10
SQL> select dummy as from dual;
DUMMY
----------
X

1 row selected.

Michel 

Double Quotes in a column Header

Maverick, July 19, 2006 - 9:04 am UTC

Thanks for your reply, Tom. But I need to use it in PL/SQL, not in SQL*Plus. Actually, to give a clearer picture, I have a dynamic query with quotes all around and want to put this in as the heading:

v_qry:='select empno,ename,deptno as "deptno "int"" from dual';

something like this.

Thanks,

Tom Kyte
July 19, 2006 - 1:29 pm UTC

quoted identifiers do not support double quotes themselves.

So, what is the solution?

A reader, July 19, 2006 - 5:37 pm UTC

Tom, Is there any way I can get it or that simply cannot be done?

Thanks for all your help.

Tom Kyte
July 22, 2006 - 4:22 pm UTC

not with quoted identifiers, no. having a double quote in the identifier itself is not a happening thing.

sql query format

Baqir Hussain, July 20, 2006 - 2:40 pm UTC

The following records were inserted into the audit_tbl when a trigger condition is satisfied.

select * from audit_tbl; --> gives out the following information

TIMESTAMP            WHO     OP     TNAME    CNAME     OLD   NEW
-------------------- ------- ------ -------- --------- ----- ---
19-jul-2006 10:40:27 TRAPEZE DELETE LINESTOP STOPTYPE  N
19-jul-2006 10:40:27 TRAPEZE DELETE LINESTOP SIGNID    44
19-jul-2006 10:40:27 TRAPEZE DELETE LINESTOP LINEDIRID 444
19-jul-2006 10:40:27 TRAPEZE DELETE LINESTOP NODEID    0
19-jul-2006 10:40:27 TRAPEZE DELETE LINESTOP STOPID    4444

I would like your help in writing a SQL statement such that the output from the audit_tbl falls on one line, as follows:

19-jul-2006 10:40:27 TRAPEZE DELETE LINESTOP STOPTYPE N SIGNID 44 LINEDIRID 444 NODEID 0 STOPID 4444

Thanks

Tom Kyte
July 22, 2006 - 5:11 pm UTC

search site for "pivot"

this is just a pivot query.

You seem to be missing a "key" here - timestamp (bad idea for a column name these days since we know 9i has a datatype named that) + who + op + tname isn't "unique enough".

delete from t where x in ( 1, 2 );

I could definitely - most assuredly - delete two rows in a single second.

so, you need to add a sequence so these rows can be grouped. Then:


select timestamp, who, op, tname,
max( decode( rownum, 1, cname ) ),
max( decode( rownum, 1, old ) ),
max( decode( rownum, 1, new ) ),
....
max( decode( rownum, 10, cname ) ),
max( decode( rownum, 10, old ) ),
max( decode( rownum, 10, new ) )
from audit_table
where seq = :x
group by timestamp, who, op, tname;

will "pivot" it. Here I assumed you have 10 columns - if you have more or less, adjust accordingly




sql format

Baqir, July 21, 2006 - 2:33 pm UTC

I got it working by looking at an example from your book in the analytic functions chapter.

Thanks

It was Great

Sunil Shah, July 26, 2006 - 3:01 pm UTC

I wanted to know how I can replace the Enter key (newline characters) in a particular column with a space, using the SQL REPLACE function.

Tom Kyte
July 26, 2006 - 4:08 pm UTC

I assume you mean a carriage return and maybe a linefeed.

str = translate( str, chr(13)||chr(10), ' ' )
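Note the subtlety: because the "to" string is one character shorter than the "from" string, TRANSLATE maps chr(13) to a space and simply deletes chr(10), so a CR/LF pair collapses to a single space; nested REPLACE calls would leave two spaces instead:

select translate( 'a'||chr(13)||chr(10)||'b',
                  chr(13)||chr(10), ' ' ) t,
       replace( replace( 'a'||chr(13)||chr(10)||'b',
                         chr(13), ' ' ),
                chr(10), ' ' ) r
  from dual;

T returns 'a b' (one space); R returns 'a  b' (two spaces).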

sql query processing

romit, July 29, 2006 - 7:39 am UTC

Tom,
Please let me know exactly what happens when a user fires a SQL query (DDL, DML etc.) from his terminal until the time he gets a message on his screen. I know it will be different for a simple select statement vs. an update or delete statement, but exactly how, I am not so sure. It would be great if you could explain by taking the cases of select, update, delete and insert.
Thanks


Tom Kyte
July 29, 2006 - 9:11 am UTC

Get Effective Oracle by Design, I spent a chapter doing that. Chapter 5 "Statement Processing"

query

sam, August 02, 2006 - 12:33 pm UTC

Tom:

I am trying to get all tables with a count of the records in each using this query. Do you know why it does not work?

  1  select table_name,count(*) from (select table_name from user_tables)
  2* group by table_name
SQL> /

TABLE_NAME                       COUNT(*)
------------------------------ ----------
ACT                                  1
ACTIVE                               1 
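The inline view has only one row per table name - it reads the USER_TABLES dictionary view, not the tables themselves - so COUNT(*) is always 1. Counting the rows inside each table needs dynamic SQL per table; one commonly used trick (assuming DBMS_XMLGEN and EXTRACTVALUE are available, i.e. a 10g-era database) is:

select table_name,
       to_number(
         extractvalue(
           xmltype( dbms_xmlgen.getxml(
             'select count(*) c from "'||table_name||'"' ) ),
           '/ROWSET/ROW/C' ) ) cnt
  from user_tables;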

Reader

Mike, August 02, 2006 - 1:14 pm UTC


For two given dates, if there is at least one full month between them return 'Y', else return 'N'.
I got the result by creating an all-months table.
I am trying to do it without the all-months table, using functions such as MONTHS_BETWEEN.

create table ad
(d1 date,d2 date);

insert into ad
values('07-jul-2006','08-aug-2006');
insert into ad
values('15-jul-2006','31-aug-2006');
insert into ad
values('19-jul-2006','01-sep-2006');
insert into ad
values('27-jul-2006','08-oct-2006');

select * from ad
D1 D2
7/7/2006 8/8/2006
7/15/2006 8/31/2006
7/19/2006 9/1/2006
7/27/2006 10/8/2006

I want the result like
D1 D2 result
7/7/2006 8/8/2006 N
7/15/2006 8/31/2006 Y
7/19/2006 9/1/2006 Y
7/27/2006 10/8/2006 Y

I appreciate your help

Tom Kyte
August 02, 2006 - 3:51 pm UTC

why isn't there at least one full month between the 7th of July and the 8th of August???

That is more than a month if you ask me.

To Mike

Michel Cadot, August 02, 2006 - 1:26 pm UTC

Have a look at extract and last_day functions and case expression.

http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/functions072.htm#i83733

Michel


Tom Kyte
August 02, 2006 - 3:51 pm UTC

I don't know why you would need last_day??

Michel Cadot, August 02, 2006 - 4:52 pm UTC


In the second row, if you had 8/30 instead of 8/31 there would be no full month, as the month ends on the 31st.
Given the example, I assume a full month is not from one day of a month to the same day one month later, but a month from the 1st to its last day.
I infer that from the first row having result N, but this is just a hypothesis, as it was not clearly explained.

Michel

Tom Kyte
August 03, 2006 - 9:02 am UTC

I would prefer the original poster specified precisely what they meant.

When you assume....
^^^
^
^^

that is why I just refuse to answer until clarity is achieved.

Reader

A reader, August 03, 2006 - 10:17 am UTC

Sorry about that.

Michel explanation is true.
A full month means the 1st day of a month through the last day of that same month; I am not looking for the number of days between the two given dates. Hope this clarifies.




Tom Kyte
August 03, 2006 - 10:24 am UTC

sure you are (looking for the number of days between the two given dates)

so you can divide that by the number days in the month.

In which case:


ops$tkyte%ORA10GR2> select d1, d2, d2-d1, to_number(to_char(last_day(d1),'dd'))
  2    from ad;

D1        D2             D2-D1 TO_NUMBER(TO_CHAR(LAST_DAY(D1),'DD'))
--------- --------- ---------- -------------------------------------
07-JUL-06 08-AUG-06         32                                    31
15-JUL-06 31-AUG-06         47                                    31
19-JUL-06 01-SEP-06         44                                    31
27-JUL-06 08-OCT-06         73                                    31



I am still perplexed why you don't think there is a full month between the two dates in the first row there. 

Michel Cadot, August 03, 2006 - 10:35 am UTC

SQL> select d1, d2,
  2         case when     extract(month from d2) - extract(month from d1) = 0
  3                   and extract(day from d1) = 1
  4                   and extract(day from d2) = extract(day from last_day(d2))
  5                then 'Y'
  6              when     extract(month from d2) - extract(month from d1) = 1
  7                   and (  extract(day from d1) = 1
  8                       or extract(day from d2)
  9                          = extract(day from last_day(d2)) )
 10                then 'Y'
 11              when     extract(month from d2) - extract(month from d1) > 1
 12                then 'Y'
 13              else 'N'
 14              end result
 15  from ad
 16  order by 1, 2
 17  /
D1          D2          R
----------- ----------- -
07-jul-2006 08-aug-2006 N
15-jul-2006 31-aug-2006 Y
19-jul-2006 01-sep-2006 Y
27-jul-2006 08-oct-2006 Y

Michel
 

Tom Kyte
August 03, 2006 - 10:44 am UTC

whatever, I'd still like to see the requirement spelled out. It is still (to me) as clear as a muddy pond after it's been stirred up.

Seems a rather unique definition of a month as well - pretty much all of the supplied dates are a month or more apart according to the convention that is "the month"

Michel Cadot, August 03, 2006 - 11:42 am UTC

Oh! I thought the definition "Full month is 1st day of the month to Last day of the same month" was clear, but perhaps my poor English turned it into a definition that is only clear in French. :)
I read it as "there is a full month between the two dates if the first day and the last day of the same month both fall between them".

My previous query does not handle ranges that cross the end of the year, but it is easy to enhance. For instance, add 12*extract(year from d*) to each extract(month from d*).

Regards
Michel
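Michel's year-crossing enhancement, sketched as a hypothetical expression (the `ad` table and `d1`/`d2` columns come from the examples above):

```sql
-- counting months-since-"year 0" makes month differences work across
-- a year boundary, e.g. Dec-2006 to Jan-2007 yields 1 instead of -11
select d1, d2,
       ( extract(year from d2) * 12 + extract(month from d2) )
     - ( extract(year from d1) * 12 + extract(month from d1) ) month_diff
  from ad
 order by d1, d2;
```

Each `extract(month from d*)` in the CASE expression would be replaced by the corresponding `year*12 + month` value.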


Tom Kyte
August 03, 2006 - 4:19 pm UTC

To me - if it is the 15th of August - one month from now is the 15th of September.

The only "caveat" is that one month after the last day of a month is, by convention, the last day of the next month.

Months are wicked tricky - you have to define what you mean - explicitly.

A day - is a day.
A week - 7 days - always.
A month - 28 to 31 days
A year - 12 months :) years are tricky as well - you cannot convert 5 years into a fixed number of days (eg: you cannot just use 365)
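A quick illustration of why month arithmetic needs an explicit definition; ADD_MONTHS follows the "last day maps to last day" convention Tom describes:

```sql
-- one month after 31-JAN-2007 is 28-FEB-2007 (last day to last day),
-- and one month after 28-FEB-2007 is 31-MAR-2007 - so adding and then
-- subtracting a month does not always return the original date
select add_months( date '2007-01-31', 1 ) jan31_plus_1m,
       add_months( date '2007-02-28', 1 ) feb28_plus_1m
  from dual;
```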

Ned, August 03, 2006 - 3:57 pm UTC

I think I understand. A month, in this case, is a named calendar month, not the difference between two dates. The first day of a month and the last day of the same month must exist between the two given dates.

select d1, d2,
       case
         when (     trunc(add_months(d1,1),'MM') between d1 and d2
                and last_day(add_months(d1,1)) between d1 and d2 )
           or (     trunc(d1,'MM') between d1 and d2
                and last_day(d1) between d1 and d2 )
         then 'Y'
         else 'N'
       end
  from ad
 order by d1, d2

D1 D2 C
--------- --------- -
02-JUL-06 30-AUG-06 N
02-JUL-06 30-DEC-06 Y
07-JUL-06 08-AUG-06 N
15-JUL-06 31-AUG-06 Y
19-JUL-06 01-SEP-06 Y
27-JUL-06 08-OCT-06 Y
31-JUL-06 30-AUG-06 N
01-DEC-06 31-DEC-06 Y


Tom Kyte
August 03, 2006 - 6:04 pm UTC

See, now that is a specification I can deal with :)

Reader

Mike, August 03, 2006 - 5:51 pm UTC

WOW, this is what I am looking for. Thanks for all your help. Michel Cadot & Ned, you are great.

and another way ...

Gabe, August 04, 2006 - 12:06 pm UTC

flip@FLOP> select t.*
  2         ,case when to_number(to_char(greatest(d1,d2)+1,'yyyymm')) -
  3               to_number(to_char(least(d1,d2)-1,'yyyymm')) < 2
  4          then 'N'
  5          else 'Y'
  6          end flg
  7    from ad t
  8  ;

D1 D2 F
----------- ----------- -
07-jul-2006 08-aug-2006 N
15-jul-2006 31-aug-2006 Y
19-jul-2006 01-sep-2006 Y
27-jul-2006 08-oct-2006 Y
01-aug-2006 31-aug-2006 Y
31-jul-2006 30-aug-2006 N

6 rows selected.

Check existence of number in a character string

Ravi, August 14, 2006 - 3:15 pm UTC

Hello Tom,
I am not sure if my question fits here; I am really sorry if I have it in the wrong place.
I am looking for a way to find whether any numeric character exists in a character string, and whether a character string starts or ends with a letter.

I cannot implement any constraints at the table level; the user is free to enter any values,
and the table that holds these values is defined as char because it has to hold different values for different codes.

For example, I need to check the values below.
1st validation

'ABC5XYZ' --> number character exists
'ABDFADF' --> number character does not exist

2nd validation

'1245-234' --> letter values exist
'ajghsdfj' --> letter values exist
'2312ds10' --> letter values exist
'3423 242' --> letter values exist

3rd validation

'kajshdfk' --> starts or ends with a letter
'12jksdfh' --> starts or ends with a letter
'8972389k' --> starts or ends with a letter
'9384jd88' --> does not start or end with a letter

please show me a way to get this done Thank you in advance Tom

Ravi

Tom Kyte
August 14, 2006 - 3:22 pm UTC

how to do what exactly?

you don't want a constraint, anything goes, so all data is valid.

why do you call these validations then?

Check existence of number in a character string

Ravi, August 14, 2006 - 4:16 pm UTC

Sorry for the confusion.

I have to select data from a stage 1 table, validate it, assign a specific error code, and send the data to a stage 2 table along with the error code.

I have to either do it in a query or pl/sql code

Thank you
Ravi


Tom Kyte
August 14, 2006 - 4:28 pm UTC

ok, can you just phrase what you want to look for - are you trying to get a "true/false" for just

string must begin OR end with a character (A-Z, a-z) and include a number (0-9) either in the "middle" or "the end" of the string


(since you say OR end... I assume that A1 is as 'ok' as A1A would be)



Check existence of number in a character string

Ravi, August 14, 2006 - 6:10 pm UTC

Tom,
Thank you for your quick response and for your priceless time,

how can I get something like this

CASE <STRING>
WHEN contains ( 1 - 9 ) THEN
dbms_output.put_line("string contains number");
WHEN contains "(A-Z, a-z) at start or ends" THEN
dbms_output.put_line(" format of string something like ([A-Z,a-Z]%[A-Z,a-Z])or ([A-Z,a-Z]%)or (%[A-Z,a-Z])");
END;

Thank you
Ravi



Tom Kyte
August 15, 2006 - 7:28 am UTC

ops$tkyte%ORA10GR2> select case when instr(translate(x,'0123456789','000000000'),'0') > 0
  2              then 'contains digit'
  3                          when translate(lower(substr(x,1,1)||substr(x,length(x),1)),'abcdefghijklmnopqrstuvwxyz',rpad('a',26,'a')) like '%a%'
  4                          then 'starts/ends with letter'
  5                  end what,
  6                  x
  7    from t;

WHAT                           X
------------------------------ -----
starts/ends with letter        a%@b
starts/ends with letter        A%@
contains digit                 @1@
contains digit                 a1b
                               #%@#

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select case when regexp_like( x, '.*[0-9].*' )
  2              then 'contains digit'
  3                          when regexp_like( x, '^[a-z].*|.*[a-z]$', 'i' )
  4                          then 'starts/ends with letter'
  5                  end what,
  6                  x
  7    from t;

WHAT                           X
------------------------------ -----
starts/ends with letter        a%@b
starts/ends with letter        A%@
contains digit                 @1@
contains digit                 a1b
                               #%@#

 

Check existence of number in a character string

Ravi, August 15, 2006 - 10:48 am UTC

Thank you very much. This is exactly what I was looking for. Thank you for your quick response.

Thank you
Ravi

Query

Yiming Li, August 15, 2006 - 6:03 pm UTC

Hi, Tom,

I have a situation here. Table T has 3 columns; based on the count column I need to generate multiple rows per input row. Can I use a SELECT statement to generate the results instead of writing a procedure or function?

select * from t;

id col count
--- --- -----
1 a 3
2 b 2
3 c 2
4 d 1

I want to generate the result like:

id col number
--- --- ------
1 a 1
1 a 2
1 a 3
2 b 1
2 b 2
3 c 1
3 c 2
4 d 1

Thanks




Tom Kyte
August 15, 2006 - 6:26 pm UTC

no create table
no insert into
no look

can we do this?

absolutely, we just need to generate a set of rows with cardinality = max(count) and join to it.

query

Yiming Li, August 16, 2006 - 11:24 am UTC

Thanks Tom,

Could you give an example of generating a set of rows with cardinality = max(count)? I feel I saw this on your site before, but I forgot where. In the actual data, some of the count values are greater than 40; do you think it is still worth using this method?



Tom Kyte
August 16, 2006 - 11:31 am UTC

40 is small.


ops$tkyte%ORA9IR2> with data as (select level l from (select max(user_id) maxuid from all_users) connect by level <= maxuid )
  2  select * from data;
 
         L
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10
        11
        12
        13
        14
        15
        16
        17
        18
        19
        20
        21
        22
        23
        24
        25
        26
        27
        28
        29
        30
        31
        32
        33
        34
        35
        36
        37
        38
        39
        40
        41
        42
        43
        44
        45
        46
        47
        48
        49
        50
        51
        52
        53
        54
        55
        56
        57
        58
        59
        60
 
60 rows selected.
 

query

yiming li, August 16, 2006 - 12:43 pm UTC

Thanks again,
I can't believe how quickly you responded. By "generate a set of rows with cardinality = max(count)
and join to it", do you mean the max(decode(count...)) method? I got a little confused about how to apply your marvellous code here.

I would like to create table t for you.

create table t (id number, col varchar2(20), count number);

insert into t values(1, 'a', 3);

insert into t values(2, 'b', 2);

insert into t values(3, 'c', 2);

insert into t values(4, 'd', 1);

yiming@> select * from t;

ID COL COUNT
---------- -------------------- ----------
1 a 3
2 b 2
3 c 2
4 d 1







Tom Kyte
August 16, 2006 - 3:48 pm UTC

ops$tkyte%ORA9IR2> with data
  2  as
  3  (
  4  select level l
  5    from (select max("COUNT") maxcnt from t)
  6  connect by level <= maxcnt
  7  )
  8  select *
  9    from t, data
 10   where data.l <= t.count
 11   order by id, col, data.l
 12  /

        ID COL                       COUNT          L
---------- -------------------- ---------- ----------
         1 a                             3          1
         1 a                             3          2
         1 a                             3          3
         2 b                             2          1
         2 b                             2          2
         3 c                             2          1
         3 c                             2          2
         4 d                             1          1

8 rows selected.
 

Thanks

Yiming Li, August 16, 2006 - 10:54 pm UTC

Thanks, Thanks Tom, This is exactly what I need. You are always No. 1.


RE: query

A reader, August 17, 2006 - 4:31 pm UTC

Tom, can we adopt this method of generating a set of values from 9iR2 onwards? (that is, the select .. connect by trick).

Thanks!

Tom Kyte
August 17, 2006 - 5:31 pm UTC

yes

declare in sql file

A reader, August 17, 2006 - 5:26 pm UTC

can we do spool abc.log
declare

a number;

begin
...
....
end;

in a .sql file and run it.

Tom Kyte
August 18, 2006 - 7:52 am UTC

not sure what you mean.

can you spool to a file an anonymous block, stop spooling and then execute it? YES

can you start spooling, run a plsql block and have its "output" spooled to a file? YES

Things change over time... :-)

Marcio Portes, August 18, 2006 - 12:04 am UTC

Regarding Yiming Li's problem, the first approach I came up with was as follows:

ops$marcio@ORA10G> create or replace type n as table of number
  2  /

Type created.

ops$marcio@ORA10G>
ops$marcio@ORA10G> create or replace
  2  function f( p_cursor in sys_refcursor ) return n pipelined
  3  is
  4      l_rec t%rowtype;
  5  begin
  6      loop
  7          fetch p_cursor into l_rec;
  8          exit when p_cursor%notfound;
  9          for i in 1 .. l_rec.times
 10          loop
 11              pipe row( l_rec.linha );
 12          end loop;
 13      end loop;
 14      close p_cursor;
 15      return;
 16  end;
 17  /

Function created.

ops$marcio@ORA10G> show error

No errors.

ops$marcio@ORA10G>
ops$marcio@ORA10G> select * from t;

X TIMES
------------- -------------
1 1
2 2
3 1
4 3
5 0

5 rows selected.

ops$marcio@ORA10G>
ops$marcio@ORA10G> select t.*
  2    from t, table( f( cursor( select * from t ) ) )
  3   where column_value = x
  4  /

X TIMES
------------- -------------
1 1
2 2
2 2
3 1
4 3
4 3
4 3

7 rows selected.

ops$marcio@ORA10G>
ops$marcio@ORA10G> update t set times = 2 where x = 5;

1 row updated.

ops$marcio@ORA10G> commit;

Commit complete.

ops$marcio@ORA10G>
ops$marcio@ORA10G> select t.*
  2    from t, table( f( cursor( select * from t ) ) )
  3   where column_value = x
  4  /

X TIMES
------------- -------------
1 1
2 2
2 2
3 1
4 3
4 3
4 3
5 2
5 2

9 rows selected.

Tom Kyte
August 18, 2006 - 8:01 am UTC

if I don't have to write code, I don't

declare in swl file

A reader, August 18, 2006 - 1:32 pm UTC

A simple anonymous block like this is not working when I call it from a batch file.
It goes into SQL*Plus but does nothing.


declare
a number;
begin
dbms_output.put_line(' a is number');
end;

Tom Kyte
August 18, 2006 - 4:23 pm UTC

missing slash to actually run it perhaps?
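For the record, a sketch of the fixed script; SQL*Plus only executes the block once it sees the terminating slash, and `set serveroutput on` is needed to see the dbms_output text:

```sql
set serveroutput on

declare
    a number;
begin
    dbms_output.put_line( 'a is a number' );
end;
/
```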

declare

A reader, August 18, 2006 - 4:48 pm UTC

yes..........
so stupid on my part
sorry and thanks

how to write this insert-select

A reader, August 25, 2006 - 10:29 am UTC

Dear Tom,

I have a sql question and I hope you can help.

I am developing an application in oracle forms in which I have a block of entry items composed of 3 records

entryItem A1 entryItem A2 entryItem A3 entryItem A4
entryItem A1 entryItem A2 entryItem A3 entryItem A4
entryItem A1 entryItem A2 entryItem A3 entryItem A4

A1 and A4 entry items, when entered, will refer to table T1
A2 entry item, when entered, will refer to table T2
A3 entry item, when entered, will refer to table T3

My goal is to take the above possible combinations and insert into a dedicated table T4 as follows

LOOP on records

insert into T4(x)
select T1.y
from T1 sbct,
T2 stpa,
T3 vpar
where sbct.y = vpar.y
and sbct.y = stpa.y
and decode(A1, NULL, 1, T1.A1) = decode (A1, NULL, 1, EntryItem A1)
and decode(A4, NULL, 1, T1.A4) = decode (A4, NULL, 1, EntryItem A4)
and decode(A2, NULL, 1, T2.A2) = decode (A2, NULL, 1, EntryItem A2)
and decode(A3, NULL, 1, T3.A3) = decode (A3, NULL, 1, EntryItem A3);


IF last record then
exit;
END IF;
next record
END LOOP;

This works perfectly when all tables are involved. Unfortunately, if for example only entry item A1 is entered, I will have the following insert-select

insert into T4(x)
select T1.y
from T1 sbct,
T2 stpa,
T3 vpar
where sbct.y = vpar.y
and sbct.y = stpa.y
and T1.A1 = EntryItem A1
and 1 = 1
and 1 = 1
and 1 = 1

which is incorrect. The correct insert-select should be
insert into T4(x)
select T1.y
from T1 sbct
where T1.A1 = EntryItem A1
and 1 = 1
and 1 = 1
and 1 = 1

Do you have any elegant idea, as always, instead of my starting with IF and ELSIF?

Thanks in advance for your precious help





SQL query

Sridhar.S, September 01, 2006 - 12:33 am UTC

Hi Tom,
Greetings!!!
I have to build a SQL to do the following.

Data in the table is as below:

Slno. Prod_List
1 stno6030, 3MB, 2Spindle, 3, 1
2 stno611, 5MB,NULL, 2, 1, 3
3 stno612a, 5MB,2Spindle

Output of the SQL should be like below:

slno Prod_Name P_Id1 P_Id2 P_Id3 P_Id4 P_Id5
1 stno6030 3MB 2Spindle 3 1 -
2 stno611 5MB - 3 1 3
3 stno612a 5MB 2Spindle - - -

How can I do this with SQL only? Thanks.

Regards,
Sridhar.S

Tom Kyte
September 01, 2006 - 8:28 am UTC

substr and instr will be your friend. Unlike the kind person that gave you this data.

Normally, I'd give an example, but I couldn't find create table and insert statements to test with.
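In the spirit of the substr/instr hint, a hedged sketch; the table and column names are guessed from the question (no create table was posted), and the list is assumed to be comma-delimited:

```sql
-- wrap the list in commas so field i always lies between the i-th
-- and (i+1)-th commas; repeat the expression for p_id2 .. p_id5
-- (substr returns NULL once the field count runs out)
select slno,
       trim( substr( p, instr(p,',',1,1) + 1,
                        instr(p,',',1,2) - instr(p,',',1,1) - 1 ) ) prod_name,
       trim( substr( p, instr(p,',',1,2) + 1,
                        instr(p,',',1,3) - instr(p,',',1,2) - 1 ) ) p_id1
  from ( select slno, ',' || prod_list || ',' p from t );
```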

sql query

AD, September 06, 2006 - 7:33 pm UTC

Hi Tom,

Could you please help with the following.

First details about table/data

create table zzz
(acc number,
id number,
dt date)
/

 insert into zzz values (1111, 1, sysdate -1700);
 insert into zzz values (2222, 2, sysdate -2100);

create table xxx
(current_year number(4),
 quarter number(1),
 year_of_purchase number(4),
 id number,
 x number)
/

 insert into xxx values (2006, 1, 2002, 1, 1);
  insert into xxx values (2006, 1, 2002, 2, 1.2);
  insert into xxx values (2006, 1, 2003, 1, 1.8);
  insert into xxx values (2006, 1, 2003, 2, 1.9);
  insert into xxx values (2006, 2, 2002, 1, 6);
  insert into xxx values (2006, 2, 2002, 2, 5.5);
  insert into xxx values (2006, 2, 2003, 1, 4.5);
  insert into xxx values (2006, 2, 2003, 2, 3.5);
  insert into xxx values (2006, 3, 2002, 1, 6.5);
  insert into xxx values (2006, 3, 2002, 2, 7);
  insert into xxx values (2006, 3, 2003, 1, 8);
  insert into xxx values (2006, 3, 2003, 2, 9);

SQL> select * from zzz;

       ACC         ID DT
---------- ---------- ---------------
      1111          1 10-JAN-02
      2222          2 06-DEC-00

No duplicate data, i.e. acc, id, dt unique

SQL> select * from xxx;

CURRENT_YEAR     QUARTER YEAR_OF_PURCHASE         ID          X
------------ ---------- ---------------- ---------- ----------
        2006          1             2002          1          1
        2006          1             2002          2        1.2
        2006          1             2003          1        1.8
        2006          1             2003          2        1.9
        2006          2             2002          1          6
        2006          2             2002          2        5.5
        2006          2             2003          1        4.5
        2006          2             2003          2        3.5
        2006          3             2002          1        6.5
        2006          3             2002          2          7
        2006          3             2003          1          8

CURRENT_YEAR     QUARTER YEAR_OF_PURCHASE         ID          X
------------ ---------- ---------------- ---------- ----------
        2006          3             2003          2          9

12 rows selected.

By joining tables xxx and zzz on id and (year of dt = year_of_purchase), I will retrieve the row from xxx for which the current_year and quarter are maximum. If there is no row matching the year of dt with year_of_purchase, then I will select the row which has the lowest year_of_purchase in the same current_year/quarter and corresponding to the same id.

Acc    x
-----------------
1111    6.5      (since the row matches with year_of_purchase =2002 and id =1)
2222    7         (since there is no row matching dt =2000 corresponding to acc=2222, we would return the row for which the year_of_purchase is lowest while current_year/and quarter are still maximum)

Many thanks,
 

How can I subtract the values AX12N986 and AXN989?

N.L.Prasad, September 07, 2006 - 5:21 am UTC

Hi Tom,
I want the result to be 3 when I subtract AX12N986 from AXN989 using a SQL query. Those values are actually AX12N{986} and AXN{989}; the parts in braces are the values to be subtracted. I have to eliminate the alphabetic characters up to the last numeric value.
Can you please help me out with this?


Tom Kyte
September 07, 2006 - 7:20 am UTC

do you have some "logic" defined to find the number here - is it always the last bit after the last N in the string for example.

N.L.Prasad, September 07, 2006 - 8:50 am UTC

we will find the number only after the last letter....

Tom Kyte
September 07, 2006 - 8:55 am UTC

ok, more info:

define character - simply A-Z?



ops$tkyte%ORA10GR2> select substr( 'ABC7N123',
  2          instr(
  3          translate( 'ABC7N123', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', rpad(' ',26) ),
  4                  ' ',
  5                  -1 )+1)
  6    from dual;

SUB
---
123
 

Transpose

jagjal, September 08, 2006 - 2:44 am UTC

Hi Tom,
I have a table whose columns are used for reporting, but the columns should be in rows to extract the report in that format. How do I convert columns to rows?



Tom Kyte
September 08, 2006 - 4:27 pm UTC

define "cols to rows", give example using "smallish table" to demonstrate your idea of cols to rows.

generate the insert script from table

Vinod, October 08, 2006 - 8:44 am UTC

Hi Tom,

I am trying to write a procedure to generate an insert script.

For exam. If i have the table emp

empno ename sal
1 AB 30
2 BC 30


then I will be able to generate: insert into emp (empno,ename,sal) values('1','AB','30')

Can you please help me .


Tom Kyte
October 08, 2006 - 9:28 am UTC

apex can do this if you like, </code> http://apex.oracle.com/

Otherwise, this is just an exercise in your plsql programming skills.  Don't forget about quotes in quoted strings - and deal with it.

And, please make the first line of your generated script be:

alter session set cursor_sharing=force;

and the last line be:

alter session set cursor_sharing=exact;


Unless of course you want to utterly trash the poor unsuspecting recipient of this script's shared pool, and spend 90 or more percent of your time parsing SQL, not actually executing it.
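A minimal sketch of such a generator for the EMP example, assuming NUMBER/VARCHAR2 columns only; note the REPLACE that doubles any embedded quotes:

```sql
begin
    dbms_output.put_line( 'alter session set cursor_sharing=force;' );
    for r in ( select empno, ename, sal from emp )
    loop
        dbms_output.put_line(
            'insert into emp (empno,ename,sal) values (' ||
            r.empno || ',''' ||
            replace( r.ename, '''', '''''' ) ||  -- double embedded quotes
            ''',' || r.sal || ');' );
    end loop;
    dbms_output.put_line( 'alter session set cursor_sharing=exact;' );
end;
/
```

Spool the dbms_output to a file and you have a runnable script.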


If you would like a "much better solution", consider using just a CSV file and sqlldr.

http://asktom.oracle.com/~tkyte/flat/index.html <code>

Sql query

A reader, October 08, 2006 - 11:29 am UTC

I have a table with the following data:

customer code, customer name and department

1 JULIO DAM LAM 100
1 JULIO DAM LAM 101
2 JULIO ALEJANDRO 100
2 JULIO ALEJANDRO 101
3 VIENA 100
3 VIENA 101
4 MARIA LUCIA 100

I want another column with a unique key that groups customer code, customer name and department

1 JULIO DAM LAM 100 1
1 JULIO DAM LAM 101 1
2 JULIO ALEJANDRO 100 2
2 JULIO ALEJANDRO 101 2
3 VIENA 100 3
3 VIENA 101 3
4 MARIA LUCIA 100 4
4 RODOLFO 100 5




Tom Kyte
October 08, 2006 - 2:40 pm UTC

ok, seems like a bad idea perhaps, but go ahead?
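If the reader does go ahead, one hedged way is the DENSE_RANK analytic; judging by the sample output (MARIA LUCIA and RODOLFO both have code 4 but get keys 4 and 5), the grouping key is customer code plus name:

```sql
-- t, customer_code, customer_name and department are hypothetical
-- names; each distinct (code, name) pair gets one surrogate key
select customer_code, customer_name, department,
       dense_rank() over ( order by customer_code, customer_name ) grp_key
  from t
 order by customer_code, customer_name, department;
```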

TABLE Function in Join

klabu, October 10, 2006 - 11:13 am UTC

10gR2

Tom, need some help here, never seen it done before....

SQL> SELECT --...whatever...
  2  FROM taba
  3  JOIN tabb
  4     ON taba.pk = tabb.pk
  5  JOIN tabc
  6     ON tabc.fk = taba.pk
  7  JOIN TABLE(apkg.afunc(taba.pk,
  8                        taba.col_x,
  9                        tabb.col_y)) tabd
 10     ON taba.pk = tabd.pk
 11  /


This SQL....I have a hard time "visualize" it
How is TABLE(...) tabd derived/evaluated ?
Most importantly: How are the function parameters determined ?
(func returns a nested table of objects)

Is "TABLE(...)" evaluated AFTER the OTHER tables are joined & ON clauses applied ?

thanks
 

Tom Kyte
October 10, 2006 - 7:53 pm UTC

apkg.afunc is pipelined, it works like a table, just:

select * from table( apkg.afunct(.....) );

and you'll "see" the table.

the table function would have to be evaluated after taba and tabb were joined (we hope, hate to have a cartesian join there :)

query

sam, October 12, 2006 - 3:36 pm UTC

Tom:

Can the 9i CBO have issues with this? I have a procedure that runs fine in 8i. When I take it to 9i it runs fine in SQL*Plus, but when I run it through PL/SQL using mod_plsql it takes a long time.

It uses a ref cursor. The only explanation is the way the CBO is building the query plan. Here is the query.

l_query varchar2(4000)
default 'select storage_Code,qty_available from (select * from (select a.warehouse_id, a.stock_number, max(effective_date) over
(partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,effective_date,storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,birth_date,b.class,c.orgcd
from physical_inventory a, stock_item b ,warehouse c where
a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) )
where effective_date = max_eff_dt and 1=1';


dbms_session.set_context('VW_CTX','STOCK_NUMBER',i_stock_number);

l_query := l_query || ' and stock_number = sys_context (''VW_CTX'',''STOCK_NUMBER'') '||')';

Do you see any issues with this? How can I move the stock_number context predicate into the inner query, before the 'where effective_date' filter?

Tom Kyte
October 13, 2006 - 6:57 am UTC

have you bothered to compare the plans, to see if and how they differ, have you looked at the estimated cardinalities? Are they realistic? have you compared them to a tkprof?....

no one is going to be able to look at a standalone query (and have no other bits of information) and say a thing.

Losing Package State ?

klabu, October 12, 2006 - 5:28 pm UTC

Tom,
Back to this SQL...

SQL> SELECT *
  2  FROM taba
  3  JOIN tabb
  4     ON taba.pk = tabb.pk
  5  JOIN tabc
  6     ON tabc.fk = taba.pk
  7  JOIN TABLE(apkg.afunc(taba.pk,
  8                        taba.col_x,
  9                        tabb.col_y)) tabd
 10     ON taba.pk = tabd.pk
 11  /

apkg.afunc is not a pipelined function; it returns a collection of objects.

I instrumented apkg.afunc (log to file)
I have a GLOBAL counter variable in the package
to find, among others, how many times this func gets executed.

What I'm shocked to find is that the counter gets RESET -
and it does it MOST of the time, 
* meaning the pkg state apparently gets wiped out *

Is this "normal" or what you'd expect ?

thanks 
  

Tom Kyte
October 13, 2006 - 6:59 am UTC

got example?

query

sam, October 13, 2006 - 7:53 am UTC

TOm:

How can I compare the plans if it runs fast in SQL*Plus in both 8i and 9i? How can I get the plan that PL/SQL uses? I have never seen a query that runs fine in SQL*Plus and takes a long time in PL/SQL. Strange!

Tom Kyte
October 13, 2006 - 8:22 am UTC

do you know how to get a plan?

I've a feeling you are NOT using bind variables when you run it in sqlplus are you - you are NOT comparing apples to apples.

Not strange, you are not running the same SQL I believe.

query

sam, October 14, 2006 - 11:02 am UTC

Tom:
Yes, your feeling is accurate. I am not using bind variables in SQL*Plus. I am substituting the variables with literal data, which I guess is static SQL. I will use binds, see the difference, and post the results.

Thanks for the hint.

Tom Kyte
October 14, 2006 - 7:33 pm UTC

search this site for

bind variable peeking


before you ask the next question that probably comes along with this :)

query

sam, October 17, 2006 - 3:35 pm UTC

Tom:

I did use bind variables and it ran fast. However I removed the context reference and replaced it with bind variable (:stock_number).


SQL>variable v_orgn  varchar2(10);
SQL>exec :v_orgn := 'ABC';
SQL>variable stock_number varchar2(15);
SQL>exec :stock_number := 'AB002';

But am I not supposed to set the context value for the stock number? When I did that, I could not:


9iSQL> exec dbms_session.set_context('VW_CTX','STOCK_NUMBER','AB002');
BEGIN dbms_session.set_context('VW_CTX','STOCK_NUMBER','AB002'); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SESSION", line 78
ORA-06512: at line 1

SQL>variable v_orgn  varchar2(10);
SQL>exec :v_orgn := 'ABC';
SQL>variable stock_number varchar2(15);
SQL>exec :stock_number := 'AB002';

select storage_Code,qty_available from (select * from ( select a.warehouse_id, a.stock_number, max(effective_date) over (partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt, effective_date,storage_code, compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available, birth_date,b.class,c.orgcd from physical_inventory a, stock_item b ,warehouse c where a.stock_number = b.stock_number and a.warehouse_id = c.warehouse_id and a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) ) where effective_date = max_eff_dt and 1=1 and stock_number = sys_context ('VW_CTX','STOCK_NUMBER') ) 


I think the problem is somewhere in DBMS_SESSION? Should I not be using the context in SQL*Plus, and do you think it is an access issue?

Tom Kyte
October 17, 2006 - 3:40 pm UTC

when you create a context, you bound it to a package/procedure/or function

create context vw_ctx using <SOMETHING>

the only thing that can set its values is "something"

so, call SOMETHING to set the values
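A sketch of the pattern, with a hypothetical package name standing in for "something":

```sql
create or replace package vw_ctx_pkg
as
    procedure set_stock( p_stock in varchar2 );
end;
/

create or replace package body vw_ctx_pkg
as
    procedure set_stock( p_stock in varchar2 )
    is
    begin
        dbms_session.set_context( 'VW_CTX', 'STOCK_NUMBER', p_stock );
    end;
end;
/

-- bind the context to that package; only vw_ctx_pkg may set it
create context vw_ctx using vw_ctx_pkg;

-- now this succeeds where the direct dbms_session call failed
exec vw_ctx_pkg.set_stock( 'AB002' )
```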

query

sam, October 17, 2006 - 4:19 pm UTC

Mike:

I thought you can't trace or tkprof a procedure (just queries). Do I need to create a dummy procedure, link the context to it, set the value, and then test the query? Or are you saying to just run the procedure to set the context and repeat the steps?


This is really strange. You run the procedure in the 8i instance and it takes 4 seconds. Same code, same indexes, etc. When you run it in the 9i database it keeps going for minutes:


8iSQL> set timing on
8iSQL> set autotrace traceonly;
8iSQL> exec view_stock_item('mike','AB002');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04



9iSQL> exec view_stock_item('mike','AB002');

PL/SQL procedure successfully completed.

Elapsed: 00:04:03.08 

Tom Kyte
October 17, 2006 - 4:23 pm UTC

Mike??

turn on sql_trace=true, run code, tkprof it.

you'll find the difference is likely in a single sql statement.
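The sequence, sketched for one database (the trace file name pattern varies by version and platform, so treat it as an assumption):

```sql
alter session set timed_statistics = true;
alter session set sql_trace = true;

exec view_stock_item( 'mike', 'AB002' )

alter session set sql_trace = false;

-- then, from the OS prompt, against the trace file in user_dump_dest:
-- $ tkprof <sid>_ora_<spid>.trc report.txt sys=no sort=fchela
```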

query

sam, October 17, 2006 - 4:29 pm UTC

Tom:

'mike' is the "i_user_id" parameter for that mod_plsql procedure view_Stock_item(i_user_id,i_stock_item).

Do you want me to run the QUERY only inside the procedure or the whole web procedure from the sql*plus prompt before tkprof.





Tom Kyte
October 17, 2006 - 4:37 pm UTC

you want to find out whats different between the two - it'll be a sql statement.

so, turn on trace, run procedure, exit sqlplus, tkprof
repeat for other database

compare (you compare, please do not post them) the tkprofs and see what is different.

correct it.

query

sam, October 17, 2006 - 6:04 pm UTC

Tom:

1. Is there a way to change the default Oracle dump directory for the session? I found I don't have access to it. The trace files were created.

2. The tkprof report is all statistics. How can that tell you which SQL statement it is? But still, the SQL is exactly the same. Can it be the CBO? What are your guesses?

Tom Kyte
October 17, 2006 - 6:44 pm UTC

1) you'll have to have the DBA help you out there.

2) you would, well, be looking for "a big difference" - runtimes, it'll be "obvious"

Second approch

Yoav, October 20, 2006 - 2:30 pm UTC

Hi Tom,
The following query selects twice from v$sysstat,
scanning the whole table twice to get just one record.

I have two question:
1. Can you please show a second way to write that query?
2. v$sysstat contains about 270 records.
Suppose there are 100,000 rows in a table with the same
structure as v$sysstat and you fetch in the same
way as the query below; is it still the best way to
get the results?

SELECT disk.value disk_value ,
mem.value memory_value,
(disk.value/mem.value) * 100
FROM v$sysstat disk, v$sysstat mem
WHERE disk.name = 'sorts (disk)'
AND mem.name = 'sorts (memory)'

Thanks.

Tom Kyte
October 20, 2006 - 4:56 pm UTC

select disk_value, memory_value,
       decode( memory_value, 0, to_number(null), disk_value/memory_value ) * 100
  from (
        select max( decode( name, 'sorts (disk)',   value ) ) disk_value,
               max( decode( name, 'sorts (memory)', value ) ) memory_value
          from v$sysstat
         where name in ( 'sorts (disk)', 'sorts (memory)' )
       )


tkprof

sam, October 22, 2006 - 9:34 pm UTC

Tom:

1. Is there something that has to be set in the database for sql_trace to create a file? The DBA changed the dump directory to a directory I created in unix and have access to. But on that database, no files get created.

Tom Kyte
October 23, 2006 - 9:56 am UTC

nope, make sure they changed the right thing and that you create a new session and that you are looking at the right init.ora (user versus background dump destination)

query

sam, October 26, 2006 - 1:08 pm UTC

Tom:

I did what you said and found the difference in the queries. But now how do I find what is causing it? It is the same procedure in 8i and 9i. Same data. Can it be the CBO? Can you give me a hint.

THIS IS 8i TRACE FILE

********************************************************************************

SELECT SUM(QUANTITY_PICKED)
FROM
STORE_SHIPMENT A,SHIPMENT B WHERE A.SHIPMENT_ID = B.SHIPMENT_ID AND
A.STOCK_NUMBER = :b1 AND A.WAREHOUSE_ID = :b2 AND A.STORAGE_CODE = :b3
AND B.SHIPMENT_DATE >= :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 16 0.00 0.00 0 0 0 0
Fetch 16 0.28 0.31 2688 2884 80 16
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 33 0.28 0.31 2688 2884 80 16

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)
********************************************************************************


THIS IS 9i TRACE FILE

********************************************************************************

SELECT SUM(quantity_picked) FROM store_shipment a,shipment b
WHERE a.shipment_id = b.shipment_id AND
a.stock_number = :b4 AND
a.warehouse_id = :b3 AND
a.storage_code = :b2 AND
b.shipment_date >= :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.23 0.24 0 0 0 0
Fetch 6434 89.83 247.86 1190290 1248848 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 90.06 248.11 1190290 1248848 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)
********************************************************************************


Tom Kyte
October 26, 2006 - 3:25 pm UTC

and where are the row source operation plan steps? less than useful without them.

sql query

sam, October 26, 2006 - 5:14 pm UTC

Tom:

There was not anything after those queries, but there is one after the ref cursor, and they seem different. I am not sure if this is what you want.

8i TRACE

********************************************************************************

select storage_Code,qty_available from (select * from (
select a.warehouse_id, a.stock_number,
max(effective_date) over
(partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,
effective_date,storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
birth_date,b.class,c.orgcd
from physical_inventory a, stock_item b ,warehouse c where
a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) )
where effective_date = max_eff_dt and 1=1 and stock_number = sys_context ('VW_CTX','STOCK_NUMBER') ) union all select storage_code,qty_available from ( select a.warehouse_id,a.stock_number,sysdate,max(d.receipt_date),a.storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
max(d.receipt_date) birth_date,
b.class,c.orgcd
from storage_receipt a, stock_item b, warehouse c,receipt d
where a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.receipt_id = d.receipt_id and
a.warehouse_id||a.stock_number||a.storage_code not in (select
warehouse_id||stock_number||storage_code from physical_inventory) and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) and a.stock_number = sys_context ('VW_CTX','STOCK_NUMBER') group by a.warehouse_id,a.stock_number,a.storage_code,b.class,c.orgcd ) order by 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 6 0.02 0.01 0 204 16 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.02 0.01 0 204 16 4

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
2 SORT ORDER BY
2 UNION-ALL
2 VIEW
6 WINDOW SORT
6 NESTED LOOPS
7 HASH JOIN
4 NESTED LOOPS
2 INDEX UNIQUE SCAN (object id 16422)
4 TABLE ACCESS FULL WAREHOUSE
12 INDEX RANGE SCAN (object id 16394)
6 INDEX UNIQUE SCAN (object id 16440)
0 VIEW
0 SORT GROUP BY
0 FILTER
2 NESTED LOOPS
2 NESTED LOOPS
2 NESTED LOOPS
24 NESTED LOOPS
2 TABLE ACCESS BY INDEX ROWID STOCK_ITEM
2 INDEX UNIQUE SCAN (object id 16422)
24 TABLE ACCESS FULL STORAGE_RECEIPT
24 TABLE ACCESS BY INDEX ROWID WAREHOUSE
46 INDEX UNIQUE SCAN (object id 16440)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE
2 INDEX UNIQUE SCAN (object id 16440)
2 TABLE ACCESS BY INDEX ROWID RECEIPT
2 INDEX UNIQUE SCAN (object id 16400)
1 INDEX FULL SCAN (object id 16394)

********************************************************************************

9i TRACE

********************************************************************************

SAME QUERY ABOVE

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 3.44 4.18 0 6667 0 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 3.44 4.18 0 6667 0 3

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
3 SORT ORDER BY (cr=1591329 r=1190105 w=0 time=266452905 us)
3 UNION-ALL (cr=1591329 r=1190105 w=0 time=266452784 us)
2 VIEW (cr=1590933 r=1189920 w=0 time=266397260 us)
6445 WINDOW SORT (cr=6510 r=0 w=0 time=267417 us)
6445 NESTED LOOPS (cr=6510 r=0 w=0 time=99386 us)
6445 NESTED LOOPS (cr=63 r=0 w=0 time=61226 us)
6445 HASH JOIN (cr=61 r=0 w=0 time=32768 us)
4 TABLE ACCESS FULL WAREHOUSE (cr=7 r=0 w=0 time=152 us)
8502 INDEX FAST FULL SCAN PK_PHYSICAL_INVENTORY (cr=54 r=0 w=0 time=9245 us)(object id 26808)
6445 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=11612 us)(object id 26874)
6445 INDEX UNIQUE SCAN PK_STOCK_ITEM (cr=6447 r=0 w=0 time=23148 us)(object id 26845)
1 SORT GROUP BY (cr=157 r=0 w=0 time=18653 us)
1 FILTER (cr=157 r=0 w=0 time=18543 us)
2 NESTED LOOPS (cr=105 r=0 w=0 time=2350 us)
2 NESTED LOOPS (cr=101 r=0 w=0 time=2316 us)
24 NESTED LOOPS (cr=75 r=0 w=0 time=2095 us)
24 NESTED LOOPS (cr=49 r=0 w=0 time=1890 us)
1 TABLE ACCESS BY INDEX ROWID STOCK_ITEM (cr=3 r=0 w=0 time=28 us)
1 INDEX UNIQUE SCAN PK_STOCK_ITEM (cr=2 r=0 w=0 time=16 us)(object id 26845)
24 TABLE ACCESS FULL STORAGE_RECEIPT (cr=46 r=0 w=0 time=1845 us)
24 INDEX UNIQUE SCAN PK_RECEIPT (cr=26 r=0 w=0 time=138 us)(object id 26816)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE (cr=26 r=0 w=0 time=161 us)
24 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=49 us)(object id 26874)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE (cr=4 r=0 w=0 time=21 us)
2 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=9 us)(object id 26874)
1 INDEX FULL SCAN PK_PHYSICAL_INVENTORY (cr=52 r=0 w=0 time=16098 us)(object id 26808)

********************************************************************************


Tom Kyte
October 27, 2006 - 7:32 am UTC

umm, why are the numbers all different from before all of a sudden??????

Is this a bug?

Ed, October 27, 2006 - 6:44 am UTC

Hi Tom,

By executing the queries below in Oracle 10, both with the same logic but a different pattern, I get different results. May I know why this is so?

create table table1 (cola char(10), colb char(10));
create table table2 (cola char(10), colb char(10));

insert into table1 (cola, colb) values ('A','1');
insert into table1 (cola, colb) values ('B','2');
insert into table1 (cola, colb) values ('C','3');
insert into table1 (cola, colb) values ('D','4');
insert into table1 (cola, colb) values ('E','5');
insert into table1 ( colb) values ('6');


insert into table2 (cola, colb) values ('A','1');
insert into table2 (cola, colb) values ('B','2');
insert into table2 (cola, colb) values ('C','3');
insert into table2 (cola, colb) values ('D','4');
insert into table2 (cola, colb) values ('E','5');
insert into table2 ( colb) values ('6');

-- First Method,
-- 6 rows
select * from table1 left join table2 on table1.cola = table2.cola;

-- 6 rows
select * from table1 left join table2 on table1.cola = table2.cola and table1.cola is not null and table2.cola is not null;

--Second Method
-- 6 rows
select * from table1, table2
where table1.cola(+) = table2.cola;

-- 5 rows
select * from table1, table2
where table1.cola (+) = table2.cola
and table1.cola is not null and table2.cola is not null;



Tom Kyte
October 27, 2006 - 7:57 am UTC

you need to use a WHERE clause on query 2 to make the logic "the same" - there is a huge difference between "on" and "where"


ops$tkyte%ORA9IR2> select * from table1 left join table2 on table1.cola  = table2.cola
  2  AND
  3  table1.cola is not null and  table2.cola is not null;

COLA       COLB       COLA       COLB
---------- ---------- ---------- ----------
A          1          A          1
B          2          B          2
C          3          C          3
D          4          D          4
E          5          E          5
           6

6 rows selected.

ops$tkyte%ORA9IR2> select * from table1 left join table2 on table1.cola  = table2.cola
  2  WHERE
  3  table1.cola is not null and  table2.cola is not null;

COLA       COLB       COLA       COLB
---------- ---------- ---------- ----------
A          1          A          1
B          2          B          2
C          3          C          3
D          4          D          4
E          5          E          5

ops$tkyte%ORA9IR2>
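For completeness, a sketch of the old (+) form that matches the ANSI LEFT JOIN above - the (+) belongs on the columns of the null-extended side (table2 here). Note that the second method above, "where table1.cola(+) = table2.cola", preserves table2 instead of table1, so the two methods were not equivalent to begin with:

```sql
-- "table1 LEFT JOIN table2" preserves table1, so (+) goes on table2's
-- column; plain WHERE predicates then filter the joined result,
-- matching the 5-row WHERE version shown above
select *
  from table1, table2
 where table1.cola = table2.cola (+)
   and table1.cola is not null
   and table2.cola is not null;
```
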
 

query

sam, October 27, 2006 - 9:37 am UTC

Tom:

What numbers are different?
The report generated by TKPROF sometimes lists a row source plan after a query and sometimes it does not.

The table I listed first appeared in the report only after that specific query.

SELECT SUM(quantity_picked) FROM store_shipment a,shipment b
WHERE a.shipment_id = b.shipment_id AND
a.stock_number = :b4 AND
a.warehouse_id = :b3 AND
a.storage_code = :b2 AND
b.shipment_date >= :b1

There was no source operation plan after it.

So I took the ref cursor operation plan and listed it, hoping it gives you what you want. I can email the two trace files if you would like to see them; they are too big to list here.

Tom Kyte
October 27, 2006 - 9:51 am UTC

all of the numbers were different - you went from millions of IOs down to nothing - there is nothing really to look at.

the row source stuff is emitted into the trace file when the cursor closes, make sure to "exit" your session to get a complete trace file.

do not email me anything.

query

sam, October 27, 2006 - 10:50 am UTC

Tom:

OK, I am going to do it again. Here is what I will do:

8iSQL> ALTER SESSION SET SQL_TRACE=TRUE;
8iSQL> exec view_stock_item('mike','AC002');
8iSQL> ALTER SESSION SET SQL_TRACE=FALSE;

Then I will TKPROF the trace file created in the Unix Oracle dump directory.

Same thing for the 9i database.

Is this correct? 

Tom Kyte
October 27, 2006 - 6:18 pm UTC

no, do this:

enable trace
run procedure
EXIT


plsql caches cursors, does not close them, you need to exit the session.
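
Put together, the whole sequence looks like this from SQL*Plus (procedure name taken from the thread; a sketch, not a verified run):

```sql
REM enable timed statistics so TKPROF can report cpu/elapsed times
alter session set timed_statistics = true;
alter session set sql_trace = true;

exec view_stock_item('mike','AC002')

REM EXIT is the important part: it closes the cursors PL/SQL has
REM cached, so the Row Source Operation lines reach the trace file
exit
```

Then run tkprof against the resulting trace file in user_dump_dest, e.g. `tkprof <tracefile>.trc report.prf`.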

query

sam, October 27, 2006 - 4:39 pm UTC

Tom

Can you resolve this mystery? I run the procedure and it gives me some errors; then I run it again and it runs fine after taking 4 minutes. Could mod_plsql or OWA_UTIL have something wrong with it?

SQL> alter session set sql_trace=true;

Session altered.

SQL> exec view_stock_item('mike','AC002');
BEGIN view_stock_item('mike','AC002'); END;

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "SYS.OWA_UTIL", line 323
ORA-06512: at "SYS.HTP", line 860
ORA-06512: at "SYS.HTP", line 975
ORA-06512: at "SYS.HTP", line 993
ORA-06512: at "ITTADMIN.VIEW_STOCK_ITEM", line 93
ORA-06512: at line 1


SQL> exec view_stock_item('mike','AC002');

PL/SQL procedure successfully completed.
 

Tom Kyte
October 27, 2006 - 8:12 pm UTC

you are running mod_plsql-based code that references the OWA_UTIL package; unless the environment mod_plsql sets up for that package is in place, it fails the first time. I use a script like this:

ops$tkyte%ORA10GR2> @owainit
ops$tkyte%ORA10GR2> declare
  2          nm      owa.vc_arr;
  3          vl      owa.vc_arr;
  4  begin
  5          nm(1) := 'WEB_AUTHENT_PREFIX';
  6          vl(1) := 'WEB$';
  7          owa.init_cgi_env( nm.count, nm, vl );
  8  end;
  9  /

PL/SQL procedure successfully completed.


to fake the setup of the cgi-environment to avoid that issue (it does sort of what mod_plsql would have done) 

query

sam, October 27, 2006 - 6:20 pm UTC

Tom:

Here are the 9i and 8i differences, redone. Does this answer your original question? Could this be caused by a function I have that contains a query joining "shipment.shipment_id (PK)" and "store_shipment.shipment_id (FK)"?


TKPROF: Release 9.2.0.2.0 - Production on Fri Oct 27 16:51:33 2006
9i TRACE FILE
********************************************************************************
select storage_Code,qty_available from (select * from (
select a.warehouse_id, a.stock_number,
max(effective_date) over
(partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,
effective_date,storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
birth_date,b.class,c.orgcd
from physical_inventory a, stock_item b ,warehouse c where
a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) )
where effective_date = max_eff_dt and 1=1 and stock_number = sys_context ('VW_CTX','STOCK_NUMBER') ) union all select storage_code,qty_available from ( select a.warehouse_id,a.stock_number,sysdate,max(d.receipt_date),a.storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
max(d.receipt_date) birth_date,
b.class,c.orgcd
from storage_receipt a, stock_item b, warehouse c,receipt d
where a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.receipt_id = d.receipt_id and
a.warehouse_id||a.stock_number||a.storage_code not in (select
warehouse_id||stock_number||storage_code from physical_inventory) and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) and a.stock_number = sys_context ('VW_CTX','STOCK_NUMBER') group by a.warehouse_id,a.stock_number,a.storage_code,b.class,c.orgcd ) order by 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 3.67 5.15 0 6667 0 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 3.67 5.15 0 6667 0 3

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
3 SORT ORDER BY (cr=1591329 r=1196538 w=0 time=292124712 us)
3 UNION-ALL (cr=1591329 r=1196538 w=0 time=292124587 us)
2 VIEW (cr=1590933 r=1196352 w=0 time=292021021 us)
6445 WINDOW SORT (cr=6510 r=0 w=0 time=305374 us)
6445 NESTED LOOPS (cr=6510 r=0 w=0 time=99584 us)
6445 NESTED LOOPS (cr=63 r=0 w=0 time=60313 us)
6445 HASH JOIN (cr=61 r=0 w=0 time=32373 us)
4 TABLE ACCESS FULL WAREHOUSE (cr=7 r=0 w=0 time=170 us)
8502 INDEX FAST FULL SCAN PK_PHYSICAL_INVENTORY (cr=54 r=0 w=0 time=8801 us)(object id 26808)
6445 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=11234 us)(object id 26874)
6445 INDEX UNIQUE SCAN PK_STOCK_ITEM (cr=6447 r=0 w=0 time=24501 us)(object id 26845)
1 SORT GROUP BY (cr=157 r=0 w=0 time=21424 us)
1 FILTER (cr=157 r=0 w=0 time=21326 us)
2 NESTED LOOPS (cr=105 r=0 w=0 time=2513 us)
2 NESTED LOOPS (cr=101 r=0 w=0 time=2481 us)
24 NESTED LOOPS (cr=75 r=0 w=0 time=2251 us)
24 NESTED LOOPS (cr=49 r=0 w=0 time=2049 us)
1 TABLE ACCESS BY INDEX ROWID STOCK_ITEM (cr=3 r=0 w=0 time=33 us)
1 INDEX UNIQUE SCAN PK_STOCK_ITEM (cr=2 r=0 w=0 time=21 us)(object id 26845)
24 TABLE ACCESS FULL STORAGE_RECEIPT (cr=46 r=0 w=0 time=1991 us)
24 INDEX UNIQUE SCAN PK_RECEIPT (cr=26 r=0 w=0 time=130 us)(object id 26816)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE (cr=26 r=0 w=0 time=161 us)
24 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=51 us)(object id 26874)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE (cr=4 r=0 w=0 time=22 us)
2 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=9 us)(object id 26874)
1 INDEX FULL SCAN PK_PHYSICAL_INVENTORY (cr=52 r=0 w=0 time=18732 us)(object id 26808)

********************************************************************************

SELECT max(effective_date) FROM physical_inventory b
WHERE b.warehouse_id = :b3 AND
b.stock_number = :b2 AND
b.storage_code = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6436 0.30 0.33 0 0 0 0
Fetch 6436 0.28 0.29 0 13187 0 6436
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12873 0.58 0.62 0 13187 0 6436

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)
********************************************************************************

SELECT quantity FROM physical_inventory a
WHERE a.warehouse_id = :b4 AND
a.stock_number = :b3 AND
a.storage_code = :b2 AND
a.effective_date = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.22 0.18 0 0 0 0
Fetch 6434 0.14 0.13 0 19302 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 0.36 0.31 0 19302 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)
********************************************************************************

SELECT SUM(quantity_stored) FROM storage_receipt a,receipt b
WHERE a.receipt_id = b.receipt_id AND
a.stock_number = :b4 AND
a.warehouse_id = :b3 AND
a.storage_code = :b2 AND
b.receipt_date >= :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.36 0.24 0 0 0 0
Fetch 6434 12.36 15.24 0 303595 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 12.72 15.49 0 303595 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)
********************************************************************************

SELECT SUM(quantity_picked) FROM store_shipment a,shipment b
WHERE a.shipment_id = b.shipment_id AND
a.stock_number = :b4 AND
a.warehouse_id = :b3 AND
a.storage_code = :b2 AND
b.shipment_date >= :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.20 0.27 0 0 0 0
Fetch 6434 94.63 270.28 1196724 1248848 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 94.83 270.56 1196724 1248848 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)
********************************************************************************
----------------------------------------------------------------------------------
TKPROF: Release 8.1.7.2.0 - Production on Fri Oct 27 15:41:47 2006

8i TRACE FILE
********************************************************************************

select storage_Code,qty_available from (select * from (
select a.warehouse_id, a.stock_number,
max(effective_date) over
(partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,
effective_date,storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
birth_date,b.class,c.orgcd
from physical_inventory a, stock_item b ,warehouse c where
a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) )
where effective_date = max_eff_dt and 1=1 and stock_number = sys_context ('VW_CTX','STOCK_NUMBER') ) union all select storage_code,qty_available from ( select a.warehouse_id,a.stock_number,sysdate,max(d.receipt_date),a.storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
max(d.receipt_date) birth_date,
b.class,c.orgcd
from storage_receipt a, stock_item b, warehouse c,receipt d
where a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.receipt_id = d.receipt_id and
a.warehouse_id||a.stock_number||a.storage_code not in (select
warehouse_id||stock_number||storage_code from physical_inventory) and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) and a.stock_number = sys_context ('VW_CTX','STOCK_NUMBER') group by a.warehouse_id,a.stock_number,a.storage_code,b.class,c.orgcd ) order by 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.03 0.04 0 0 0 0
Fetch 3 0.01 0.03 0 64 8 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.04 0.08 0 64 8 2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
2 SORT ORDER BY
2 UNION-ALL
2 VIEW
2 WINDOW SORT
2 NESTED LOOPS
3 HASH JOIN
4 NESTED LOOPS
2 INDEX UNIQUE SCAN (object id 16422)
4 TABLE ACCESS FULL WAREHOUSE
8 INDEX RANGE SCAN (object id 16394)
2 INDEX UNIQUE SCAN (object id 16440)
0 VIEW
0 SORT GROUP BY
0 FILTER
2 NESTED LOOPS
2 NESTED LOOPS
2 NESTED LOOPS
5 NESTED LOOPS
2 TABLE ACCESS BY INDEX ROWID STOCK_ITEM
2 INDEX UNIQUE SCAN (object id 16422)
5 TABLE ACCESS FULL STORAGE_RECEIPT
5 TABLE ACCESS BY INDEX ROWID WAREHOUSE
8 INDEX UNIQUE SCAN (object id 16440)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE
2 INDEX UNIQUE SCAN (object id 16440)
2 TABLE ACCESS BY INDEX ROWID RECEIPT
2 INDEX UNIQUE SCAN (object id 16400)
1 INDEX FULL SCAN (object id 16394)

********************************************************************************

SELECT MAX(EFFECTIVE_DATE)
FROM
PHYSICAL_INVENTORY B WHERE B.WAREHOUSE_ID = :b1 AND B.STOCK_NUMBER = :b2
AND B.STORAGE_CODE = :b3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 4 0.00 0.02 0 0 0 0
Fetch 4 0.00 0.00 0 8 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 0.00 0.04 0 8 0 4

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)
********************************************************************************

SELECT QUANTITY
FROM
PHYSICAL_INVENTORY A WHERE A.WAREHOUSE_ID = :b1 AND A.STOCK_NUMBER = :b2
AND A.STORAGE_CODE = :b3 AND A.EFFECTIVE_DATE = :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.05 0 0 0 0
Execute 4 0.00 0.02 0 0 0 0
Fetch 4 0.00 0.00 0 12 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 0.01 0.07 0 12 0 4

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)
********************************************************************************

SELECT SUM(QUANTITY_STORED)
FROM
STORAGE_RECEIPT A,RECEIPT B WHERE A.RECEIPT_ID = B.RECEIPT_ID AND
A.STOCK_NUMBER = :b1 AND A.WAREHOUSE_ID = :b2 AND A.STORAGE_CODE = :b3
AND B.RECEIPT_DATE >= :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.04 0 0 0 0
Execute 4 0.00 0.02 0 0 0 0
Fetch 4 0.01 0.03 5 162 16 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 0.01 0.09 5 162 16 4

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)
********************************************************************************

SELECT SUM(QUANTITY_PICKED)
FROM
STORE_SHIPMENT A,SHIPMENT B WHERE A.SHIPMENT_ID = B.SHIPMENT_ID AND
A.STOCK_NUMBER = :b1 AND A.WAREHOUSE_ID = :b2 AND A.STORAGE_CODE = :b3
AND B.SHIPMENT_DATE >= :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 4 0.00 0.01 0 0 0 0
Fetch 4 0.11 0.15 672 676 20 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 0.11 0.17 672 676 20 4

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)
********************************************************************************


Tom Kyte
October 27, 2006 - 8:17 pm UTC

sorry, but don't you see that the query in question does not have any information (row source)?????

query

sam, October 27, 2006 - 10:53 pm UTC

Tom:

You are the expert in TKPROF, not me! How would I know why it did not create one? Is it because this query is referenced in a function? What shall I do now?


Tom Kyte
October 28, 2006 - 10:33 am UTC

I told you to exit the session. More than once above.

I said:

enable trace
run procedure
EXIT <<<<==================


the approach you took explicitly will NOT WORK.

query

sam, October 28, 2006 - 1:06 pm UTC

Tom:

OK, I will do that. I was missing the "EXIT".

In the meantime, why do I need the initialization procedure you listed above in 9i? It works fine from the first time in 8i. It sounds to me like mod_plsql should be doing this instead of the developer. Do you think this might happen when you call the procedure from the web application?

Tom Kyte
October 28, 2006 - 1:17 pm UTC

you needed it in 8i, you have always needed it, that script was one I wrote in 1995 or 1996. It has ALWAYS been true.

if you call the procedure from the web - it would have already been called for you!!! mod_plsql DOES do it, you get the error expressly because you are NOT USING mod_plsql to run the procedure.

query

sam, October 30, 2006 - 12:36 pm UTC

Tom:

OK, here it is with the row source operations in 8i and 9i. Do you see the problem?


TKPROF: Release 9.2.0.2.0 - Production on Mon Oct 30 12:01:07 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

9i

********************************************************************************

select storage_Code,qty_available from (select * from (
select a.warehouse_id, a.stock_number,
max(effective_date) over
(partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,
effective_date,storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
birth_date,b.class,c.orgcd
from physical_inventory a, stock_item b ,warehouse c where
a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) )
where effective_date = max_eff_dt and 1=1 and stock_number = sys_context ('VW_CTX','STOCK_NUMBER') ) union all select storage_code,qty_available from ( select a.warehouse_id,a.stock_number,sysdate,max(d.receipt_date),a.storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
max(d.receipt_date) birth_date,
b.class,c.orgcd
from storage_receipt a, stock_item b, warehouse c,receipt d
where a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.receipt_id = d.receipt_id and
a.warehouse_id||a.stock_number||a.storage_code not in (select
warehouse_id||stock_number||storage_code from physical_inventory) and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) and a.stock_number = sys_context ('VW_CTX','STOCK_NUMBER') group by a.warehouse_id,a.stock_number,a.storage_code,b.class,c.orgcd ) order by 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 0 0 0 0
Fetch 4 3.30 4.07 0 6667 0 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 3.30 4.09 0 6667 0 3

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
3 SORT ORDER BY (cr=1591329 r=1192018 w=0 time=297515139 us)
3 UNION-ALL (cr=1591329 r=1192018 w=0 time=297515032 us)
2 VIEW (cr=1590933 r=1191833 w=0 time=297457698 us)
6445 WINDOW SORT (cr=6510 r=0 w=0 time=293323 us)
6445 NESTED LOOPS (cr=6510 r=0 w=0 time=97443 us)
6445 NESTED LOOPS (cr=63 r=0 w=0 time=60411 us)
6445 HASH JOIN (cr=61 r=0 w=0 time=32036 us)
4 TABLE ACCESS FULL WAREHOUSE (cr=7 r=0 w=0 time=192 us)
8502 INDEX FAST FULL SCAN PK_PHYSICAL_INVENTORY (cr=54 r=0 w=0 time=8754 us)(object id 26808)
6445 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=11682 us)(object id 26874)
6445 INDEX UNIQUE SCAN PK_STOCK_ITEM (cr=6447 r=0 w=0 time=22818 us)(object id 26845)
1 SORT GROUP BY (cr=157 r=0 w=0 time=18287 us)
1 FILTER (cr=157 r=0 w=0 time=18217 us)
2 NESTED LOOPS (cr=105 r=0 w=0 time=2415 us)
2 NESTED LOOPS (cr=101 r=0 w=0 time=2383 us)
24 NESTED LOOPS (cr=75 r=0 w=0 time=2162 us)
24 NESTED LOOPS (cr=49 r=0 w=0 time=1956 us)
1 TABLE ACCESS BY INDEX ROWID STOCK_ITEM (cr=3 r=0 w=0 time=26 us)
1 INDEX UNIQUE SCAN PK_STOCK_ITEM (cr=2 r=0 w=0 time=14 us)(object id 26845)
24 TABLE ACCESS FULL STORAGE_RECEIPT (cr=46 r=0 w=0 time=1910 us)
24 INDEX UNIQUE SCAN PK_RECEIPT (cr=26 r=0 w=0 time=137 us)(object id 26816)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE (cr=26 r=0 w=0 time=160 us)
24 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=49 us)(object id 26874)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE (cr=4 r=0 w=0 time=20 us)
2 INDEX UNIQUE SCAN PK_WAREHOUSE (cr=2 r=0 w=0 time=8 us)(object id 26874)
1 INDEX FULL SCAN PK_PHYSICAL_INVENTORY (cr=52 r=0 w=0 time=15725 us)(object id 26808)

********************************************************************************

SELECT max(effective_date) FROM physical_inventory b
WHERE b.warehouse_id = :b3 AND
b.stock_number = :b2 AND
b.storage_code = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6436 0.30 0.31 0 0 0 0
Fetch 6436 0.21 0.27 0 13187 0 6436
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12873 0.51 0.58 0 13187 0 6436

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
6436 SORT AGGREGATE (cr=13187 r=0 w=0 time=252546 us)
19764 INDEX RANGE SCAN PK_PHYSICAL_INVENTORY (cr=13187 r=0 w=0 time=200554 us)(object id 26808)

********************************************************************************

SELECT quantity FROM physical_inventory a
WHERE a.warehouse_id = :b4 AND
a.stock_number = :b3 AND
a.storage_code = :b2 AND
a.effective_date = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.16 0.19 0 0 0 0
Fetch 6434 0.08 0.12 0 19302 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 0.24 0.31 0 19302 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
6434 TABLE ACCESS BY INDEX ROWID PHYSICAL_INVENTORY (cr=19302 r=0 w=0 time=106536 us)
6434 INDEX UNIQUE SCAN PK_PHYSICAL_INVENTORY (cr=12868 r=0 w=0 time=51400 us)(object id 26808)

********************************************************************************

SELECT SUM(quantity_stored) FROM storage_receipt a,receipt b
WHERE a.receipt_id = b.receipt_id AND
a.stock_number = :b4 AND
a.warehouse_id = :b3 AND
a.storage_code = :b2 AND
b.receipt_date >= :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.24 0.21 0 0 0 0
Fetch 6434 11.32 12.82 0 303595 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 11.56 13.04 0 303595 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
6434 SORT AGGREGATE (cr=303595 r=0 w=0 time=12801863 us)
46 NESTED LOOPS (cr=303595 r=0 w=0 time=12771587 us)
2660 TABLE ACCESS FULL STORAGE_RECEIPT (cr=295964 r=0 w=0 time=12710760 us)
46 TABLE ACCESS BY INDEX ROWID RECEIPT (cr=7631 r=0 w=0 time=42310 us)
2660 INDEX UNIQUE SCAN PK_RECEIPT (cr=4971 r=0 w=0 time=24768 us)(object id 26816)

********************************************************************************

SELECT SUM(quantity_picked) FROM store_shipment a,shipment b
WHERE a.shipment_id = b.shipment_id AND
a.stock_number = :b4 AND
a.warehouse_id = :b3 AND
a.storage_code = :b2 AND
b.shipment_date >= :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6434 0.27 0.24 0 0 0 0
Fetch 6434 90.66 279.28 1192202 1248848 0 6434
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12869 90.93 279.52 1192202 1248848 0 6434

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
6434 SORT AGGREGATE (cr=1248848 r=1192202 w=0 time=279247082 us)
2077 NESTED LOOPS (cr=1248848 r=1192202 w=0 time=279186857 us)
14226 TABLE ACCESS FULL STORE_SHIPMENT (cr=1216026 r=1192202 w=0 time=278906700 us)
2077 TABLE ACCESS BY INDEX ROWID SHIPMENT (cr=32822 r=0 w=0 time=208958 us)
14226 INDEX UNIQUE SCAN PK_SHIPMENT (cr=18596 r=0 w=0 time=110535 us)(object id 26835)

********************************************************************************

SELECT SUM(quantity_stored) FROM storage_receipt a,receipt b WHERE a.receipt_id = b.receipt_id AND
a.receipt_id = b.receipt_id AND
a.stock_number = :b3 AND
a.warehouse_id = :b2 AND
a.storage_code = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.01 0.00 0 96 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.01 0.00 0 96 0 2

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
2 SORT AGGREGATE (cr=96 r=0 w=0 time=3670 us)
2 NESTED LOOPS (cr=96 r=0 w=0 time=3654 us)
2 TABLE ACCESS FULL STORAGE_RECEIPT (cr=92 r=0 w=0 time=3621 us)
2 INDEX UNIQUE SCAN PK_RECEIPT (cr=4 r=0 w=0 time=23 us)(object id 26816)

********************************************************************************

SELECT SUM(quantity_picked) FROM store_shipment a,shipment b
WHERE a.shipment_id = b.shipment_id AND
a.stock_number = :b3 AND
a.warehouse_id = :b2 AND
a.storage_code = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.01 0.06 370 378 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.01 0.06 370 378 0 2

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
2 SORT AGGREGATE (cr=378 r=370 w=0 time=68365 us)
0 NESTED LOOPS (cr=378 r=370 w=0 time=68352 us)
0 TABLE ACCESS FULL STORE_SHIPMENT (cr=378 r=370 w=0 time=68349 us)
0 INDEX UNIQUE SCAN PK_SHIPMENT (cr=0 r=0 w=0 time=0 us)(object id 26835)

********************************************************************************


TKPROF: Release 8.1.7.2.0 - Production on Mon Oct 30 11:50:17 2006

(c) Copyright 2000 Oracle Corporation. All rights reserved.

8i

********************************************************************************

select storage_Code,qty_available from (select * from (
select a.warehouse_id, a.stock_number,
max(effective_date) over
(partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,
effective_date,storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
birth_date,b.class,c.orgcd
from physical_inventory a, stock_item b ,warehouse c where
a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) )
where effective_date = max_eff_dt and 1=1 and stock_number = sys_context ('VW_CTX','STOCK_NUMBER') ) union all select storage_code,qty_available from ( select a.warehouse_id,a.stock_number,sysdate,max(d.receipt_date),a.storage_code,
compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
max(d.receipt_date) birth_date,
b.class,c.orgcd
from storage_receipt a, stock_item b, warehouse c,receipt d
where a.stock_number = b.stock_number and
a.warehouse_id = c.warehouse_id and
a.receipt_id = d.receipt_id and
a.warehouse_id||a.stock_number||a.storage_code not in (select
warehouse_id||stock_number||storage_code from physical_inventory) and
a.warehouse_id in (select warehouse_id from warehouse where orgcd=:v_orgn) and a.stock_number = sys_context ('VW_CTX','STOCK_NUMBER') group by a.warehouse_id,a.stock_number,a.storage_code,b.class,c.orgcd ) order by 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.03 0.03 0 0 0 0
Fetch 3 0.01 0.00 0 102 8 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.05 0.04 0 102 8 2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
2 SORT ORDER BY
2 UNION-ALL
2 VIEW
6 WINDOW SORT
6 NESTED LOOPS
7 HASH JOIN
4 NESTED LOOPS
2 INDEX UNIQUE SCAN (object id 16422)
4 TABLE ACCESS FULL WAREHOUSE
12 INDEX RANGE SCAN (object id 16394)
6 INDEX UNIQUE SCAN (object id 16440)
0 VIEW
0 SORT GROUP BY
0 FILTER
2 NESTED LOOPS
2 NESTED LOOPS
2 NESTED LOOPS
24 NESTED LOOPS
2 TABLE ACCESS BY INDEX ROWID STOCK_ITEM
2 INDEX UNIQUE SCAN (object id 16422)
24 TABLE ACCESS FULL STORAGE_RECEIPT
24 TABLE ACCESS BY INDEX ROWID WAREHOUSE
46 INDEX UNIQUE SCAN (object id 16440)
2 TABLE ACCESS BY INDEX ROWID WAREHOUSE
2 INDEX UNIQUE SCAN (object id 16440)
2 TABLE ACCESS BY INDEX ROWID RECEIPT
2 INDEX UNIQUE SCAN (object id 16400)
1 INDEX FULL SCAN (object id 16394)

********************************************************************************

SELECT MAX(EFFECTIVE_DATE)
FROM
PHYSICAL_INVENTORY B WHERE B.WAREHOUSE_ID = :b1 AND B.STOCK_NUMBER = :b2
AND B.STORAGE_CODE = :b3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 8 0.00 0.01 0 0 0 0
Fetch 8 0.00 0.00 0 16 0 8
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17 0.00 0.01 0 16 0 8

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
8 SORT AGGREGATE
32 INDEX RANGE SCAN (object id 16394)

********************************************************************************

SELECT QUANTITY
FROM
PHYSICAL_INVENTORY A WHERE A.WAREHOUSE_ID = :b1 AND A.STOCK_NUMBER = :b2
AND A.STORAGE_CODE = :b3 AND A.EFFECTIVE_DATE = :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 8 0.00 0.01 0 0 0 0
Fetch 8 0.00 0.00 0 24 0 8
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17 0.00 0.01 0 24 0 8

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
8 TABLE ACCESS BY INDEX ROWID PHYSICAL_INVENTORY
16 INDEX UNIQUE SCAN (object id 16394)

********************************************************************************

SELECT SUM(QUANTITY_STORED)
FROM
STORAGE_RECEIPT A,RECEIPT B WHERE A.RECEIPT_ID = B.RECEIPT_ID AND
A.STOCK_NUMBER = :b1 AND A.WAREHOUSE_ID = :b2 AND A.STORAGE_CODE = :b3
AND B.RECEIPT_DATE >= :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 8 0.01 0.01 0 0 0 0
Fetch 8 0.04 0.04 0 318 32 8
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17 0.05 0.05 0 318 32 8

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
8 SORT AGGREGATE
0 NESTED LOOPS
10 TABLE ACCESS FULL STORAGE_RECEIPT
0 TABLE ACCESS BY INDEX ROWID RECEIPT
4 INDEX UNIQUE SCAN (object id 16400)

********************************************************************************

SELECT SUM(QUANTITY_PICKED)
FROM
STORE_SHIPMENT A,SHIPMENT B WHERE A.SHIPMENT_ID = B.SHIPMENT_ID AND
A.STOCK_NUMBER = :b1 AND A.WAREHOUSE_ID = :b2 AND A.STORAGE_CODE = :b3
AND B.SHIPMENT_DATE >= :b4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 8 0.00 0.00 0 0 0 0
Fetch 8 0.26 0.30 1344 1442 40 8
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17 0.26 0.30 1344 1442 40 8

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 12 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
8 SORT AGGREGATE
0 NESTED LOOPS
38 TABLE ACCESS FULL STORE_SHIPMENT
0 TABLE ACCESS BY INDEX ROWID SHIPMENT
60 INDEX UNIQUE SCAN (object id 16414)

********************************************************************************

Tom Kyte
October 30, 2006 - 12:51 pm UTC

It would appear to be using a different index wouldn't it:

SELECT max(effective_date) FROM physical_inventory b
WHERE b.warehouse_id = :b3 AND
b.stock_number = :b2 AND
b.storage_code = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6436 0.30 0.31 0 0 0 0
Fetch 6436 0.21 0.27 0 13187 0 6436
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12873 0.51 0.58 0 13187 0 6436

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
6436 SORT AGGREGATE (cr=13187 r=0 w=0 time=252546 us)
19764 INDEX RANGE SCAN PK_PHYSICAL_INVENTORY (cr=13187 r=0 w=0 time=200554
us)(object id 26808)


Rows Row Source Operation
------- ---------------------------------------------------
8 SORT AGGREGATE
32 INDEX RANGE SCAN (object id 16394)


Actually I CANNOT TELL, what is object id 16,394

please edit in the future, we only need relevant information - we are not talking about most of the queries there - just post relevant details. save space, make it easier to understand.

query

sam, October 30, 2006 - 3:11 pm UTC

Tom:

Does this help you,

8iSQL>  exec print_table('select * from user_objects where object_id in (16394,26808) ');
OBJECT_NAME                   : PK_PHYSICAL_INVENTORY
SUBOBJECT_NAME                :
OBJECT_ID                     : 16394
DATA_OBJECT_ID                : 16394
OBJECT_TYPE                   : INDEX
CREATED                       : 19-aug-2003 13:58:34
LAST_DDL_TIME                 : 19-aug-2003 13:58:34
TIMESTAMP                     : 2003-08-19:13:58:34
STATUS                        : VALID
TEMPORARY                     : N
GENERATED                     : N
SECONDARY                     : N
-----------------

PL/SQL procedure successfully completed.
 

Tom Kyte
October 30, 2006 - 3:21 pm UTC

since that is the same index, my answer is now:

WELL YOU OBVIOUSLY HAVE RADICALLY DIFFERENT DATA in 9i than you did in 8i.

Look at the numbers, same plan - entirely different results. Data is radically different.

So why do you think they should be even close in performance????

8i, 8 rows, 9i, over 6,400 rows - ummm??

query

sam, October 30, 2006 - 4:12 pm UTC

Tom:

No, the 9i data came originally from 8i. That number does not seem to represent a record count in a table.

We added a few records for testing. To confirm, I am listing the record counts of the tables involved in 8i versus 9i.

8iSQL> select count(*) from physical_inventory
  COUNT(*)
----------
      8660
9iSQL> select count(*) from physical_inventory
  COUNT(*)
----------
      8502
8iSQL> select  count(*) from shipment
  COUNT(*)
----------
     12053
9iSQL> select count(*) from shipment
  COUNT(*)
----------
     11704
8iSQL> select count(*) from shipped_item
  COUNT(*)
----------
     25772
9iSQL> select count(*) from shipped_item
  COUNT(*)
----------
     25032
8iSQL> select count(*) from store_shipment
  COUNT(*)
----------
     21985
9iSQL> select count(*) from store_shipment
  COUNT(*)
----------
     21348
 

Tom Kyte
October 30, 2006 - 4:19 pm UTC

do you see that the tkprof from 8i - the tkprof that SHOWS WHAT actually happens - shows clearly:

8 records returned from query.


In 9i, over 6,400

Please explain why there is this difference?

query

sam, October 30, 2006 - 5:07 pm UTC

Tom:

1.  If I knew that, the mystery would be solved. You have the same query with the same data set running in 8i and 9i producing two different results. Doesn't that mean the CBO is creating the execution plan differently in 9i than in 8i?


2.  That query you were looking at comes from the query inside the ref cursor. When I run both in SQL*Plus with autotrace I see 6445 records processed in both. Do you see something wrong in the query? I think I am calculating inventory for every stock item using the inner query and then selecting from that the stock item passed into the outer query. Do you see that?

SQL> select a.warehouse_id, a.stock_number,
  2                              max(effective_date) over
  3                              (partition by a.warehouse_id, a.stock_number, storage_code) max_eff_dt,
  4                              effective_date,storage_code,
  5                              compute_qty_stored(a.warehouse_id,a.stock_number,a.storage_Code) qty_available,
  6                              birth_date,b.class,c.orgcd
  7                              from physical_inventory a, stock_item b ,warehouse c where
  8                              a.stock_number = b.stock_number and
  9                              a.warehouse_id = c.warehouse_id  and
 10                              a.warehouse_id in
 11  (select warehouse_id from warehouse where orgcd='ABC')  

Tom Kyte
October 30, 2006 - 6:47 pm UTC

if you are saying

a) data is the same
b) inputs are the same
c) answers are different

please - you are using the wrong site, time for metalink.

A tricky one....its urgent....

Biswadip, October 30, 2006 - 11:34 pm UTC

Hello Tom,

I have a table called test(ID Number, ParentID Number, Name Varchar2(20))
i have the following the data in a hierarchical structure

Root (0)
|
----LP-0 (1)
|
|
----LI-1 (2)
| |
| |--LP-1 (2.1)
| |--LP-2 (2.2)
| |--LP-3 (2.2)
|
|
----LO-1 (3)
| |
| |
| |--LP-4 (3.1)
| |--LP-5 (3.2)
| |--LO-2 (3.3)
| |
| |--LP-6 (3.3.1)
| |--LP-7 (3.3.2)
| |--LO-3 (3.3.3)
| |
| |--LP-8 (3.3.3.1)
| |--LP-9 (3.3.3.2)
|----LP-10 (4)


So The data in the table is looks like
LEVEL ID PARENTID NAME
==============================================
1 1 Root
2 2 1 LP-0
2 3 1 LI-1
3 4 3 LP-1
3 5 3 LP-2
3 6 3 LP-3
2 7 1 LO-1
3 8 7 LP-4
3 9 7 LP-5
3 10 7 LO-2
4 11 10 LP-6
4 12 10 LP-7
4 13 10 LO-3
5 14 13 LP-8
5 15 13 LP-9
2 16 1 LP-10

I need output with another column, say LevelNumber, whose values are displayed in the
tree structure adjacent to each node. Reading the number from the right, the first number before the
first dot (.) indicates which child it is of its parent: 1st, 2nd, 3rd, and so on. The rest of the concatenated number indicates, in the same way, which level and which parent it belongs to.
I have written a query to get the value of LevelNumber as I want. The query is below:

SELECT m.ID
, m.Name
, m.ParentID ParentID
, NVL(LTRIM(SYS_CONNECT_BY_PATH(m.Rank, '.'), '.'),'0') LevelNumber
FROM (SELECT ID,Name,ParentID,(CASE
WHEN PARENTID IS NULL THEN NULL
ELSE RANK() OVER (PARTITION BY ParentID ORDER BY ID)
END) Rank
FROM test) m
CONNECT BY m.ParentID = PRIOR m.ID
START WITH m.ParentID IS NULL
ORDER BY ID
/

LEVEL ID NAME PARENTID LEVELNUMBER
=====================================================
1 1 Root 0
2 2 LP-0 1 1
2 3 LI-1 1 2
3 4 LP-1 3 2.1
3 5 LP-2 3 2.2
3 6 LP-3 3 2.3
2 7 LO-1 1 3
3 8 LP-4 7 3.1
3 9 LP-5 7 3.2
3 10 LO-2 7 3.3
4 11 LP-6 10 3.3.1
4 12 LP-7 10 3.3.2
4 13 LO-3 10 3.3.3
5 14 LP-8 13 3.3.3.1
5 15 LP-9 13 3.3.3.2
2 16 LP-10 1 4


I am able to get the output as I wanted, but the problem is that I am inserting a huge number of rows (more than 10,000 at a time) and then generating this LevelNumber, which is inserted into its 4 child tables. I need this LevelNumber in the child tables, so I run this select statement against all the child tables for the 10,000 rows, which makes my application very slow.

Is there any better way to create this kind of number automatically?
Any kind of help is highly appreciated.
Thanks.
Thanks.

With Regards
Biswadip Seth




To: Biswadip

Michel Cadot, October 31, 2006 - 4:20 am UTC


Avoid posting the same question in different threads.
If it's urgent you should first post the prerequisites: create table and insert into statements.

Michel

Scripts Related to "A tricky one....its urgent.... "

Biswadip, October 31, 2006 - 6:56 am UTC



The scripts are as follows

create table TEST
(
ID NUMBER,
PARENTID NUMBER,
NAME VARCHAR2(20)
)
/
Insert Scripts are as below
---------------------------
insert into TEST (ID, PARENTID, NAME) values (1, null, 'Root');
insert into TEST (ID, PARENTID, NAME) values (2, 1, 'LP-0');
insert into TEST (ID, PARENTID, NAME) values (3, 1, 'LI-1');
insert into TEST (ID, PARENTID, NAME) values (4, 3, 'LP-1');
insert into TEST (ID, PARENTID, NAME) values (5, 3, 'LP-2');
insert into TEST (ID, PARENTID, NAME) values (6, 3, 'LP-3');
insert into TEST (ID, PARENTID, NAME) values (7, 1, 'LO-1');
insert into TEST (ID, PARENTID, NAME) values (8, 7, 'LP-4');
insert into TEST (ID, PARENTID, NAME) values (9, 7, 'LP-5');
insert into TEST (ID, PARENTID, NAME) values (10, 7, 'LO-2');
insert into TEST (ID, PARENTID, NAME) values (11, 10, 'LP-6');
insert into TEST (ID, PARENTID, NAME) values (12, 10, 'LP-7');
insert into TEST (ID, PARENTID, NAME) values (13, 10, 'LO-3');
insert into TEST (ID, PARENTID, NAME) values (14, 13, 'LP-8');
insert into TEST (ID, PARENTID, NAME) values (15, 13, 'LP-9');
insert into TEST (ID, PARENTID, NAME) values (16, 1, 'LP-10');
commit;



WITH clause

Yoav, November 04, 2006 - 11:06 am UTC

Hi Tom,
Could you please explain how and when to use the "WITH" clause?
Thanks.

Tom Kyte
November 04, 2006 - 12:32 pm UTC

Different between WHERE and ON

Ed, November 07, 2006 - 3:30 am UTC

Hi Tom,

Thanks for your answer that i've posted on October 27, 2006.

--Query 1
SELECT * FROM table1
LEFT JOIN table2 ON table1.cola = table2.cola
AND table1.cola is not null AND table2.cola is not null;

--Query 2
SELECT * FROM table1
LEFT JOIN table2 ON table1.cola = table2.cola
WHERE table1.cola is not null AND table2.cola is not null;

On the 2 queries above, may I know what are the huge differences between the WHERE clause and the ON clause?

Thanks

Tom Kyte
November 07, 2006 - 4:31 pm UTC

one does an outer join using set "X" of join criteria. There is no post filter on this data.

the other one OUTER JOINS using set "Y" of join criteria and then applies a predicate to the result of that outer join (which uses entirely different criteria to join with!!!!)
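To make that concrete, here is a minimal sketch (hypothetical single-column tables, not from the thread) showing how a predicate in the ON clause keeps the outer join's preserved rows, while the same predicate in the WHERE clause filters them away:

```sql
-- Hypothetical setup
create table t1 (cola number);
create table t2 (cola number);
insert into t1 values (1);
insert into t1 values (null);
insert into t2 values (1);

-- Query 1: extra predicates are part of the join condition.
-- The NULL row of t1 is still returned (with the t2 column NULL),
-- because a LEFT JOIN preserves every t1 row regardless.
select * from t1
left join t2 on t1.cola = t2.cola
            and t1.cola is not null
            and t2.cola is not null;

-- Query 2: the same predicates as a post-join filter.
-- "t2.cola is not null" rejects the preserved rows, so the
-- result is effectively an inner join: only the matched row survives.
select * from t1
left join t2 on t1.cola = t2.cola
where t1.cola is not null
  and t2.cola is not null;
```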




Index not shown in Execution Plan

Deepak, November 10, 2006 - 7:34 am UTC

Hi Tom,

I have a situation where a query runs faster when I create an index on a particular column of a table. But the interesting fact is that the usage of the index is not confirmed by the runtime execution plan as obtained from the V$SQL_PLAN dynamic performance view, i.e., the execution plan does not contain that index name in any of the steps.

Can you please explain how this is possible. Is there any way to find out whether that index is being used by that particular query (other than index monitoring)?

Tom Kyte
November 10, 2006 - 9:05 am UTC

show me

TO: Ed RE: diff between WHERE and ON

Duke Ganote, November 10, 2006 - 9:53 am UTC

Gotta love key words that are hard to search for...NOT! Try this:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6585774577187#56266225203647

Need help in creating a sql query

Sandeep Jadhav, November 13, 2006 - 3:26 am UTC

I have two tables: 1) pwt table 2) claimed table.

Pwt table details:-
transactionid
drawdate
Terminalid
game name
Prizeamount
draw no

Claimtable details:-
Transactionid
Claimed date
terminalid
Claimed amount
draw no

I need to create a query with the following conditions:
1) (pwt table) Pwt amount must be <= 5000 only
2) drawdate is not greater than 30 days before the claim date.
3) the transaction id is the same in both tables.

Under these conditions I want output showing how many tickets are unclaimed. (tickets unclaimed = pwt transactionids - claim transactionids)
plz solved my query



Tom Kyte
November 13, 2006 - 3:32 am UTC

join, filter - seems rather straightforward.

what do German postal codes have to do with anything? (plz?)

you have a join by transactionid, you have a filter (where clause) on two columns, you have a count(*) to count the records. Pretty basic query.

but one I won't write since I don't have your tables...

date function

Sandeep Jadhav, November 13, 2006 - 8:24 am UTC

Tom,
I need a date function for counting days from a specific date.
For example, the draw time is 01/01/2006 and the claimed date must not be more than 30 days from the draw date, so plz tell me whether this type of date function is available and, if yes, which one it is.

Tom Kyte
November 14, 2006 - 3:59 am UTC

we call it "subtract"


ops$tkyte%ORA9IR2> select sysdate-to_date('01-jan-2006','dd-mon-yyyy')
  2  from dual;

SYSDATE-TO_DATE('01-JAN-2006','DD-MON-YYYY')
--------------------------------------------
                                  317.116609

 

To: Sandeep Jadhav

Michel Cadot, November 13, 2006 - 11:00 am UTC

Minus is a good option:

SQL> select trunc(sysdate-to_date('01/01/2006','DD/MM/YYYY')) "NbDay" from dual;
     NbDay
----------
       316

1 row selected.

Subtracting 2 dates gives the difference in unit of days.

Michel
 

query

sam, November 14, 2006 - 5:43 pm UTC

Tom:

Is there anything in terms of installation you can check for that can cause slow running of PL/SQL code using ref cursors in Oracle 9i? I get the feeling that some kind of package is missing, because the code works fine in 8i.

Thanks

Tom Kyte
November 15, 2006 - 6:53 am UTC

no, that doesn't even make sense...

look at your query plan, it has changed, it is not "a ref cursor" issue, it is "my query is running slower" issue.

remove ref cursor from the example and you'll see the query itself is running slower.

Difference between two tables

Sandeep Jadhav, November 21, 2006 - 7:21 am UTC

Hello tom,

I have two different tables in one database, and I want the difference in days between date columns in the two different tables.
(plz use datediff function)


Tom Kyte
November 22, 2006 - 3:28 pm UTC

plz - german postal codes? what does that have to do with Oracle and such?

if you want to use datediff, you would be using another database.

In Oracle, you just subtract.

select date1-date2
from ....


viola'

http://asktom.oracle.com/Misc/DateDiff.html

better yet ...

A reader, November 22, 2006 - 3:52 pm UTC

voila ...

calculate payments

A Reader, November 23, 2006 - 10:32 am UTC

Hi Tom, I am trying to write code for calculating payments. I have an example below. Any help will be appreciated.

alter session set nls_date_format='yyyymmdd';


create table t1
(
amount1 number
,term1 number
,pay1 number(12,2)
,amount2 number
,term2 number
,pay2 number(12,2)
,amount3 number
,term3 number
,pay3 number(12,2)
,total_terms number
,start_date date
,end_date date
,next_pay_date date
);

create table t2
(
term number
,amount number
,pay_date date
);


insert into t1 values
(100,3,0
,200,3,0
,400,6,0
,12
,'20060815'
,'20070814'
,'20061201');


commit;



--Results into t2 should be


term amount pay_date
---- ------ --------
1 100 20060901
2 100 20061001
3 154.83 20061101
4 200 20061201

--Amount calculation for term 3(mix term):
( ((14*100) + (17*200))/31) ,
since start day is on 1st of the month.


--update counters

pay1 = 3
pay2 = 1
next_pay_date = '20070101'




--on next_pay_date '20070101', add following in table t2

5 200 20070101

--update counter in t1

pay2 = 2
next_pay_date = '20070201'


Thanks.



Can i ask a question

Ahmed, November 27, 2006 - 11:55 am UTC

I have got a lot of use from this site

Can i ask question

Ahmed, November 27, 2006 - 12:11 pm UTC

I have got a lot of use from this site

The first table Property

PROPERTY_ID STREET CITY PRICE TYPICAL_RENT EMPLOYEE_ID
200212 5657Court Street Dahan £150,000.00 £12,500.00 101
200213 5658 Maamoorah Street Maamoorah £200,000.00 £16,667.00 102
200314 5659 Al Khor Street Dafan £150,000.00 £12,500.00 103
200315 5660 Al Yayiz Street Al Hudeeba £500,000.00 £41,667.00 104
200316 5662 Hamza Street Digdagga £450,000.00 £37,500.00 105
200317 5663 Al Arabia Street Al Nakheel £350,000.00 £29,166.00 106
200318 5661 Al Seih Street Flayah £400,000.00 £33,333.00 107
200319 5662 Hamza Street Digdagga £250,000.00 £20,833.00 108
200320 5662 Hamza Street Digdagga £100,000.00 £83,333.00 109
200321 5664 Khuzam Street Khuzam £120,000.00 £10,000.00 110
200322 5662 Hamza Street Digdagga £450,000.00 £37,500.00 111
200323 5662 Hamza Street Digdagga £700,000.00 £58,333.00 112
200324 5662 Hamza Street Digdagga £600,000.00 £50,000.00 113


The second table Renter

RENTER_ID NAME PHONE MAX_RENT
10010 Khalid Hassan 050-9995522 £12,000.00
10011 Ahmed Nasser 050-8822666 £12,000.00
10012 Hamad Khalifa 050-3331155 £44,000.00
10013 Omar Ahmed 050-6700067 £20,000.00
10014 Hamad salem 050-5858585 £44,000.00
10015 Obaid Hassan 050-5856548 £13,000.00
10016 Rashed Khalid 050-1414141 £14,000.00
10017 Khalifa Ahmed 050-3647842 £19,000.00

The third table owns

PROPERTY_ID OWNER_ID PCNT_SHARE
200212 4666 100.00%
200314 4662 45.00%
200314 4663 55.00%
200316 4665 100.00%
200318 4664 100.00%

The fourth table
OWNER_ID NAME PHONE
4662 Ali Mohammed 050-2223322
4663 Rashed Ahmed 050-2461118
4664 Salem Saeed 050-2645050
4665 Adbulla Esmaeel 050-2212824
4666 Jasim Ali 050-2222105

the fifth table employee

EMPLOYEE_ID NAME PHONE SALARY MGR_ID
101 Ahmed Mohammed 050-4444555 £6,000.00 101
102 Ghayth Ahmed 050-2252525 £4,000.00 102
103 Anwar Amin 050-4545999 £3,500.00 103
104 Adel Aziz 050-3636360 £4,500.00 104
105 Basil Basim 050-9798787 £3,000.00 105
106 Abdul Rahman 050-7776966 £5,000.00 106
107 Essa Ali 050-6565425 £2,500.00 107
108 Hamad Mohammed 050-4551789 £2,000.00 108
109 Abdullah Saeed 050-7991452 £1,500.00 109
110 Obaid Salem 050-6589214 £3,500.00 110
111 Khalid Hassan 050-3636215 £4,500.00 111
112 Jassim Ahmed 050-6272766 £3,500.00 112
113 Abdul Aziz 050-4851236 £6,000.00 113

The sixth table

PROPERTY_ID RENTER_ID BEGIN_DATE END_DATE ACTUAL_RENT
200212 10011 07/01/05 08/01/2005 £3,500.00
200213 10015 07/07/06 07/08/2006 £2,500.00
200314 10013 03/04/05 04/04/2005
200315 10010 01/12/05 02/11/2005 £4,000.00
200315 10016 06/05/05 06/06/2005 £4,500.00
200316 10017 06/04/05 06/05/2005 £2,000.00
200317 10012 13/05/06 13/06/2006 £2,000.00
200318 10014 01/08/06 01/08/2007 £4,000.00



1- how can I decrease by 5% the price in a specific city that has less than two agreements in 2006 (obtain the year component from begin_date)

2- display the name and phone of the first renter of the property that is located in 5662 Hamza Street within Digdagga

3- Display the property_id, street, city, typical rent and the difference between actual rent and typical rent for the latest agreement of all properties in cities beginning with the letter "D"

4- display the name of the renter and the total rent collected from the respective renter for all agreements started and ended in 2005

5- Display the property_id, street, city and price, in high-to-low order of price, of all properties that are currently being rented out


Please can you answer me as soon as possible,

because I have an examination and these kinds of questions will be included.

I hope you answer me as soon as possible.

Tom Kyte
November 27, 2006 - 7:38 pm UTC

come on, I don't do homework. really.


You do your homework, so you can learn how to do this, so when you graduate - your degree means something.

THE QUESTION ABOVE

AHMED, November 27, 2006 - 12:28 pm UTC

THE QUESTION ABOVE IS ABOUT AQL

SO I WANT THE CODE FOR THE QUESTIONS

I WANT THE ANSWER TODAY IF YOU CAN




Tom Kyte
November 27, 2006 - 7:39 pm UTC

I'm going to break my own cardinal rule:

OMG, give me a break. I'll never cease to be amazed at the audacity. TTFN.

Count No of Days

Venkat, November 27, 2006 - 5:56 pm UTC

I am trying to get the count of week days between two dates. I am trying this way and I am getting the result as 14. I do not want to count weekend days.

select (sysdate + 14) - sysdate from dual where TO_CHAR(sysdate,'DAY') <> 'SATURDAY' OR TO_CHAR(sysdate,'DAY') <> 'SUNDAY'
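For what it's worth, the predicate above never filters anything: it tests SYSDATE only once, and the two <> conditions joined by OR are always true together ('DAY' is also blank-padded, so the comparison would fail anyway). One possible sketch of counting weekdays in a 14-day range (an assumption of how the range is defined; 'DY' with an explicit NLS date language avoids the padding issue):

```sql
-- Count the Mon-Fri days among the 14 days after today.
select count(*) weekday_count
  from ( select sysdate + level d
           from dual
        connect by level <= 14 )
 where to_char(d, 'DY', 'nls_date_language=english')
       not in ('SAT', 'SUN');
```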

WTF is AQL?

A reader, December 02, 2006 - 3:51 pm UTC

Tom -

Can you teach me AQL also?

Sounds interesting . . .

The Best!!

Rahul, December 04, 2006 - 1:37 am UTC

You are the best, Tom!!
Your way of finding the solution is the most simplified and yet the most reliable as far as the output is concerned.
I would like to achieve that simplicity!!! Hope one day I will :)
Thanks.

question about update statement

Sara, December 04, 2006 - 10:24 am UTC

I need to update an aggregate table based on a calculation from a detail table.  When I run the query as a select statement, I get back the expected results.  However, when I try the same calculation in an update statement, I get an error about parentheses.

Here is a simplified example:

SQL> create table test_table1
(key_field varchar2(10),
 quantity number,
 ship_ind varchar2(2))

Table created.

SQL> insert into test_table1 values ('A',100,'SE')
1 row created.
SQL> insert into test_table1 values ('A',200,'SW')
1 row created.
SQL> insert into test_table1 values ('B',300,'SE')
1 row created.

SQL> create table test_table2
(key_field varchar2(10),
 summarized_qty number)

Table created.

SQL> insert into test_table2 values ('A',0)
1 row created.
SQL> insert into test_table2 values ('B',0)
1 row created.

SQL> commit
Commit complete.

SQL> select t1.key_field,
((select count(1) from test_table1 t2 where key_field = t1.key_field and ship_ind = 'SE') - 
       (select count(1) from test_table1 t2 where key_field = t1.key_field and ship_ind = 'SW')) sum_amt
from test_table2 t1

KEY_FIELD     SUM_AMT
---------- ----------
A                   0
B                   1

2 rows selected.

SQL> update test_table2 t1
set summarized_amt  = ((select count(1) from test_table1 t2 where key_field = t1.key_field and ship_ind = 'SE') - 
       (select count(1) from test_table1 t2 where key_field = t1.key_field and ship_ind = 'SW'))

ORA-00907: missing right parenthesis


What am I doing wrong in the update statement? 

Tom Kyte
December 04, 2006 - 11:10 am UTC

you have:

update t
   set c = ((select x from x)-(select y from y))


you need:

update t 
   set c = (select (select x from x)-(select y from y) from z)


you need a well formed subquery - but don't do that, do this instead:




ops$tkyte%ORA10GR2> select * from test_table2;

KEY_FIELD  SUMMARIZED_QTY
---------- --------------
A                       0
B                       0

ops$tkyte%ORA10GR2> merge into test_table2 t2
  2  using
  3  (
  4  select key_field,
  5         sum(case when ship_ind='SE' then 1
  6                  when ship_ind='SW' then -1
  7                          end) summed
  8    from test_table1
  9   where ship_ind in ( 'SW', 'SE' )
 10   group by key_field
 11  ) t1
 12  on (t1.key_field=t2.key_field)
 13  when matched then update set t2.summarized_qty = t1.summed
 14  /

2 rows merged.

ops$tkyte%ORA10GR2> select * from test_table2;

KEY_FIELD  SUMMARIZED_QTY
---------- --------------
A                       0
B                       1
 

Convert Varchar to Datetime

SAndeep Jadhav, December 05, 2006 - 6:39 am UTC

hello sir,


I have a table with a claimdate column stored in VARCHAR format, but I want it in DATE format. What should I do?




Tom Kyte
December 05, 2006 - 10:01 pm UTC

you will read the sql reference manual and discover the WEALTH of functions you have available to you!!!!!

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/functions183.htm#i1003589
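As a minimal sketch of what those functions offer (the table name below is hypothetical, and the format mask must match however the strings were actually stored):

```sql
-- Assumption: every claimdate string follows one consistent format,
-- e.g. '05/12/2006' for a DD/MM/YYYY mask.
select to_date(claimdate, 'dd/mm/yyyy') claim_dt
  from claims;   -- "claims" is a hypothetical table name

-- To keep a real DATE column going forward (every row must parse cleanly):
-- alter table claims add (claim_dt date);
-- update claims set claim_dt = to_date(claimdate, 'dd/mm/yyyy');
```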

Group by Issue

Danny J, December 05, 2006 - 3:43 pm UTC

Hi Tom,

I am using ORACLE 9.2 and the Optimizer mode is CHOOSE

I have a question about the way the GROUP BY clause is executed in Oracle.

Please note that the SELECT list of the inline view below includes the CHANNEL_DESCRIP column, but I did not add CHANNEL_DESCRIP to the inline view's GROUP BY clause (basically, I forgot to add it while writing the query).

Even so, if I execute the whole query, it works fine.


SELECT
DISTINCT cus.region_short
FROM
(
SELECT
SUM(
CASE WHEN f.cncl_dt_key = -1
THEN f.act_rvn_amt
WHEN f.entry_dt_key = f.cncl_dt_key
THEN 0
WHEN t.day_key = f.entry_dt_key
THEN f.act_rvn_amt
ELSE -f.act_rvn_amt
END
) act_rvn_amt,
t.day_key entry_cncl_dt_key,
f.cust_id,
f.item_id,
CHANNEL_DESCRIP
FROM
hbctec.FISCAL_TIME_DIM t,
hbctec.SLSORD_MEGA_FACT f,
dist_chnl_dim d
WHERE
(t.day_key = f.entry_dt_key OR
t.day_key = f.cncl_dt_key) AND
-- t.year_offset_current in (0,-1) AND
f.daily_load_flg BETWEEN 0 AND 1 AND
d.SALES_CODE_ID = f.SALES_CODE_ID
GROUP BY
t.day_key
, f.cust_id
, f.item_id
) sls,
FISCAL_TIME_DIM ftd,
ITM_DIM itm,
CUST_DIM cus
WHERE
sls.entry_cncl_dt_key = ftd.day_key AND
sls.item_id = itm.itm_id AND
sls.cust_id = cus.site_use_id AND
-- detail_status_id >= 1 AND
-- daily_load_flg IN (0,1) AND
ftd.year_offset_current IN (0,-1) AND
itm.coe_key IN (21,22,23,101) AND
-- cus.region_short = 'T'
CHANNEL_DESCRIP IN ( 'Trade')


If I execute the inline view separately, it gives the error that 'CHANNEL_DESCRIP' is not a GROUP BY expression. That is expected.

But if I use the same query as an inline view, it works.

My question is: what is the reasoning behind that? Is Oracle rewriting the query? And if Oracle rewrites the query, will it give the proper result set, or should I add the CHANNEL_DESCRIP column to the GROUP BY clause?

Thanks for your help

Dan

Tom Kyte
December 06, 2006 - 9:32 am UTC

make the example smaller and make it so we can actually run it, you know

no create
no inserts
no lookie


but make it smaller, I'm sure you can make the query a lot smaller in order to demonstrate whatever issue you are hitting.

Sub Query Not Working

Gutkha, December 05, 2006 - 8:15 pm UTC

I have the following sql

SELECT EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT
FROM PS_LEAVE_ACCRUAL
WHERE EMPLID IN ('XXXXXXX')
AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
FROM PS_LEAVE_ACCRUAL X
WHERE X.EMPLID = EMPLID
AND X.PLAN_TYPE = PLAN_TYPE
AND X.ACCRUAL_PROC_DT <=sysdate)
GROUP BY EMPLID,PLAN_TYPE,ACCRUAL_PROC_DT


SELECT EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT
FROM PS_LEAVE_ACCRUAL
WHERE EMPLID IN ('XXXXXXX')
AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
FROM PS_LEAVE_ACCRUAL X
WHERE X.EMPLID = 'XXXXXXX'
AND X.PLAN_TYPE = PLAN_TYPE
AND X.ACCRUAL_PROC_DT <=sysdate)

When I hardcode the value of EMPLID in the subquery, it returns rows; otherwise no rows are returned. Why is
X.EMPLID = EMPLID not working in the first query?
Thanks for your time.


Tom Kyte
December 06, 2006 - 9:34 am UTC

no tables
no inserts
NO LOOK

for Gutkha

Riaz, December 06, 2006 - 12:20 am UTC

Try:

SELECT EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT
FROM PS_LEAVE_ACCRUAL t
WHERE EMPLID IN ('XXXXXXX')
AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
FROM PS_LEAVE_ACCRUAL X
WHERE X.EMPLID = t.EMPLID
AND X.PLAN_TYPE = t.PLAN_TYPE
AND X.ACCRUAL_PROC_DT <=sysdate)
GROUP BY EMPLID,PLAN_TYPE,ACCRUAL_PROC_DT


Thanks

Gutkha, December 06, 2006 - 6:57 am UTC

Thanks Riaz,
I have an IN list of employees. Without an alias on the main SELECT, the query returns rows for every employee in the list except one. That employee does show up if I add the alias as you mentioned, or if I hardcode the employee ID in the subquery. Why does the query without the alias return rows for some values in the list but not others?

Tom Kyte
December 07, 2006 - 8:34 am UTC

no clue

why no clue?

no EXAMPLE demonstrating the ISSUE.

I hope I am clear with this info

Gutkha, December 07, 2006 - 12:24 pm UTC

Thanks in advance.
The following is the data in the table for these employee IDs:
000142665, 000085588, 000003410, 000002314

1 SELECT EMPLID
2 , PLAN_TYPE
3 , ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL
5 WHERE EMPLID = '000142665'
6* ORDER BY 1,2,3

EMPLID PL ACCRUAL_P
----------- -- ---------
000142665 50 21-OCT-06
000142665 51 21-OCT-06
000142665 52 21-OCT-06
000142665 5Z 21-OCT-06

1 SELECT EMPLID
2 , PLAN_TYPE
3 , ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL
5 WHERE EMPLID IN ('000085588','000003410','000002314')
6* ORDER BY 1,2,3


EMPLID PL ACCRUAL_P
----------- -- ---------
000002314 50 28-OCT-06
000002314 51 28-OCT-06
000002314 52 28-OCT-06
000002314 5R 28-OCT-06
000002314 5Z 28-OCT-06
000003410 50 28-OCT-06
000003410 51 28-OCT-06
000003410 52 28-OCT-06
000003410 5R 28-OCT-06
000003410 5Z 28-OCT-06
000085588 50 28-OCT-06

EMPLID PL ACCRUAL_P
----------- -- ---------
000085588 51 28-OCT-06
000085588 52 28-OCT-06
000085588 5R 28-OCT-06
000085588 5Z 28-OCT-06


In the following SQL with the subquery, employee ID 000142665
is not returned, but data is returned for the other three employee IDs.

SELECT EMPLID
2 , PLAN_TYPE
3 , ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL
5 WHERE EMPLID IN ('000142665','000085588','000003410','000002314')
6 AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
7 FROM PS_LEAVE_ACCRUAL X
8 WHERE X.EMPLID = EMPLID
9 AND X.PLAN_TYPE = PLAN_TYPE
10 AND X.ACCRUAL_PROC_DT <= SYSDATE)
11 ORDER BY EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT
12 ;

EMPLID PL ACCRUAL_P
----------- -- ---------
000002314 50 28-OCT-06
000002314 51 28-OCT-06
000002314 52 28-OCT-06
000002314 5R 28-OCT-06
000002314 5Z 28-OCT-06
000003410 50 28-OCT-06
000003410 51 28-OCT-06
000003410 52 28-OCT-06
000003410 5R 28-OCT-06
000003410 5Z 28-OCT-06
000085588 50 28-OCT-06

EMPLID PL ACCRUAL_P
----------- -- ---------
000085588 51 28-OCT-06
000085588 52 28-OCT-06
000085588 5R 28-OCT-06
000085588 5Z 28-OCT-06

15 rows selected.

SELECT EMPLID
2 , PLAN_TYPE
3 , ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL
5 WHERE EMPLID = '000142665'
6 AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
7 FROM PS_LEAVE_ACCRUAL X
8 WHERE X.EMPLID = EMPLID
9 AND X.PLAN_TYPE = PLAN_TYPE
10 AND X.ACCRUAL_PROC_DT <= SYSDATE)
11 ORDER BY EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT;

no rows selected

If I hardcode the value for employee ID 000142665 in the subquery,
then it returns this employee.

SELECT EMPLID
2 , PLAN_TYPE
3 , ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL
5 WHERE EMPLID = '000142665'
6 AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
7 FROM PS_LEAVE_ACCRUAL X
8 WHERE X.EMPLID = '000142665'
9 AND X.PLAN_TYPE = PLAN_TYPE
10 AND X.ACCRUAL_PROC_DT <= SYSDATE)
11 ORDER BY EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT;

EMPLID PL ACCRUAL_P
----------- -- ---------
000142665 50 21-OCT-06
000142665 51 21-OCT-06
000142665 52 21-OCT-06
000142665 5Z 21-OCT-06


But if an alias is given for the main table,
it returns all rows. The following is the SQL.

I am just curious why it returns all employees when an alias is given and
only some employees when it is not. The only difference
I see between employee ID 000142665 and the other three
(000085588, 000003410, 000002314) is the ACCRUAL_PROC_DT.

emp id 000142665 has ACCRUAL_PROC_DT as 21-OCT-06
emp ids 000085588,000003410,000002314 have ACCRUAL_PROC_DT as 28-OCT-06

SELECT A.EMPLID
2 , A.PLAN_TYPE
3 , A.ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL A
5 WHERE EMPLID IN ('000142665','000085588','000003410','000002314')
6 AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
7 FROM PS_LEAVE_ACCRUAL X
8 WHERE X.EMPLID = A.EMPLID
9 AND X.PLAN_TYPE = A.PLAN_TYPE
10 AND X.ACCRUAL_PROC_DT <= SYSDATE)
11 ORDER BY A.EMPLID, A.PLAN_TYPE, A.ACCRUAL_PROC_DT;

EMPLID PL ACCRUAL_P
----------- -- ---------
000002314 50 28-OCT-06
000002314 51 28-OCT-06
000002314 52 28-OCT-06
000002314 5R 28-OCT-06
000002314 5Z 28-OCT-06
000003410 50 28-OCT-06
000003410 51 28-OCT-06
000003410 52 28-OCT-06
000003410 5R 28-OCT-06
000003410 5Z 28-OCT-06
000085588 50 28-OCT-06

EMPLID PL ACCRUAL_P
----------- -- ---------
000085588 51 28-OCT-06
000085588 52 28-OCT-06
000085588 5R 28-OCT-06
000085588 5Z 28-OCT-06
000142665 50 21-OCT-06
000142665 51 21-OCT-06
000142665 52 21-OCT-06
000142665 5Z 21-OCT-06

Tom Kyte
December 07, 2006 - 1:22 pm UTC

sigh


no create
no inserts
no lookie

Thanks

Gutkha, December 07, 2006 - 8:02 pm UTC

Hi Tom,

The table PS_LEAVE_ACCRUAL gets updated/inserted by a COBOL program to which I do not have access, and I am not a COBOL programmer; I am just doing SELECTs against this table in SQL*Plus. That is the reason I am not able to give the creates/inserts. All the info I have given here comes from the results returned when I ran the SQLs through SQL*Plus. I want to know why the query returns all rows if an alias is given and only a subset if an alias is not given.

Tom Kyte
December 08, 2006 - 7:27 am UTC

my point:

if you do not give me a test case, I cannot answer your question. you need to supply a sqlplus script anyone on the planet could run to reproduce your issue.

I am *not* a sql compiler, I need to use one to see what is happening. in order to do that, I sort of need - well - A TEST CASE.

you don't need cobol to produce a test case to demonstrate your issue.

Guthka ... you've been answered already

Gabe, December 08, 2006 - 2:19 pm UTC

<quote>
SELECT EMPLID
2 , PLAN_TYPE
3 , ACCRUAL_PROC_DT
4 FROM PS_LEAVE_ACCRUAL
5 WHERE EMPLID = '000142665'
6 AND ACCRUAL_PROC_DT = (SELECT MAX(X.ACCRUAL_PROC_DT)
7 FROM PS_LEAVE_ACCRUAL X
8 WHERE X.EMPLID = EMPLID
9 AND X.PLAN_TYPE = PLAN_TYPE
10 AND X.ACCRUAL_PROC_DT <= SYSDATE)
11 ORDER BY EMPLID, PLAN_TYPE, ACCRUAL_PROC_DT;
</quote>

In the subquery, "WHERE X.EMPLID = EMPLID" is really the same as
"WHERE X.EMPLID = X.EMPLID", which is always TRUE (for NOT NULL values).

That unqualified EMPLID gets resolved to X.

And that changes the semantics of the entire query: the subquery now returns the max ACCRUAL_PROC_DT across many more rows than just those with EMPLID = '000142665'.
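Gabe's point can be reproduced with a tiny self-contained sketch (the table and data below are made up for illustration):

```sql
create table demo (id varchar2(3), dt date);

insert into demo values ('A', date '2006-10-21');
insert into demo values ('B', date '2006-10-28');

-- Unqualified: "id = id" resolves both sides to X, which is always true,
-- so the subquery computes max(dt) over the WHOLE table (28-OCT-06).
-- A's date (21-OCT-06) never equals that, so no rows come back.
select * from demo
 where id = 'A'
   and dt = (select max(x.dt) from demo x where x.id = id);

-- Qualified: the correlation now genuinely restricts the subquery to 'A',
-- and the row for 'A' is returned.
select * from demo d
 where d.id = 'A'
   and d.dt = (select max(x.dt) from demo x where x.id = d.id);
```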


Thanks Gabe

Gutkha, December 08, 2006 - 2:50 pm UTC

That was a nice explanation. You are right: the max(ACCRUAL_PROC_DT) in table PS_LEAVE_ACCRUAL is 28-OCT-06. As I explained, the three employees returned had this date, and the other employee, '000142665', had a date of 21-OCT-06.

Thanks for the insight..

order by date type issue

Yong Wu, January 03, 2007 - 5:33 pm UTC

Tom,

I have a query here

select sid,schema,to_char(connect_date,'mm/dd/yy hh24:mi:ss') connect_date
from a order by connect_date desc

but the result shows

ORACLE oracle 09/28/06 15:10:34
ORACLE oracle 09/28/06 14:52:30
ORACLE oracle 09/28/06 14:39:27
ORACLE oracle 09/27/06 17:19:24
ORACLE oracle 09/27/06 17:04:56
ORACLE oracle 01/03/07 13:59:41
ORACLE oracle 01/03/07 13:44:25
ORACLE oracle 01/03/07 13:42:26
ORACLE oracle 01/03/07 13:17:30
SJPROD oaprod 01/03/07 12:40:26
SJPROD oaprod 01/03/07 12:40:21

how to fix this problem?

thanks in advance
Tom Kyte
January 05, 2007 - 8:53 am UTC

you are sorting by a string - the string's name is connect_date

alias your column name differently.

you want to order by CONNECT_DATE

you want to select .... ) connect_date_string


or something to that effect.
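A sketch of the corrected query, using the table name from the post:

```sql
-- Alias the formatted string differently so that ORDER BY connect_date
-- binds to the DATE column rather than to the select-list alias.
select sid, schema,
       to_char(connect_date, 'mm/dd/yy hh24:mi:ss') connect_date_string
  from a
 order by connect_date desc;
```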

Require to display the data in a particular format

Hemal Deshmukh, January 09, 2007 - 5:58 am UTC

Hello Tom,
Consider the following query and its corresponding output:-

SQL> SELECT invc_number,SUM(effort_hours * hour_rate)
total_sum
FROM temp_effort_tran
WHERE model_dtl_id = '37'
AND status ='ACT'
GROUP BY invc_number;

INVC_NUMBER TOTAL_SUM
----------- ----------
1 10
3 20
4 83

SQL>

I want to re-write this query to display the above data in the following format :-

INVC_NUMBER TOTAL_SUM
------------------- ----------
1/3/4 113


Can you please advise how I should rewrite the above query?

Thanks and Best Regards
Hemal
Tom Kyte
January 11, 2007 - 9:20 am UTC

search site for stragg

select stragg(invc_number), sum(x)
from (your query here, sum(effort_hours*hour_rate) aliased as X)


How to join tables when row value of a table is colname of other

Shailesh, January 10, 2007 - 7:35 am UTC

Hi Tom,

I am working on a query :->

Table 1 :->

Node XYZ Colname
--------------------------
A 1 C1
B 1 C2
C 1 C3

Table 2:->
XYZ C1 C2 C3
------------------------------
1 2.0 3.0 4.0


I want output as

Node ColName Value
-----------------------
A C1 2.0
B C2 3.0
C C3 4.0


Could you please help me generate this output?
Thanks in advance!

Try this

Anwar, January 11, 2007 - 10:43 am UTC

Shailesh, since you have a finite number of columns, you will probably not mind a little hardcoding.

select node,
colname,
(select decode(a.colname,
'C1',c1,
'C2',c2,
'C3',c3)
from table2
where xyz=a.xyz) "VALUE"
from table1 a

sql query response

padmaja, January 13, 2007 - 3:59 am UTC

We have a table of 3 million (30 lakh) rows with a record length of 600 bytes. It stores BOM information. If we query the table on non-primary-key columns, the response is very slow (it takes 2 hours to retrieve 35 records). What does the response time depend on? The table is partitioned on update date, and all of the query columns are indexed. We cannot reduce the size of the table.

thanks in advance .


range on row number

Ashok Balyan, January 29, 2007 - 6:45 am UTC

Hi Tom
I have a query (a join of two tables) which returns 200 rows.
I want a range on the row numbers: rows 1-20, rows 21-40, and so on.
I have used the RANK() and DENSE_RANK() analytic functions, but there is a performance problem.
How can I solve it?
Thanks
Tom Kyte
January 31, 2007 - 1:28 pm UTC

if it returns a mere 200 rows - I don't see why you would have a "performance" problem.

you'd need to provide a tad more detail - like showing the query returning 200 rows fast, but being really slow with dense_rank()
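For completeness, the classic ROWNUM pagination pattern usually beats ranking the whole result set when only one page is needed; :min_row and :max_row are bind-variable placeholders and t1/t2 stand in for the poster's two tables:

```sql
select *
  from (select a.*, rownum rnum
          from (select t1.col1, t2.col2          -- the ordered two-table join
                  from t1, t2
                 where t1.id = t2.id
                 order by t1.col1) a
         where rownum <= :max_row)               -- stop fetching at the upper bound
 where rnum >= :min_row;
```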

year, month, day between two data

Ronald, February 01, 2007 - 12:31 am UTC

Is there a SQL query to list the number of years, months, and days between two input dates (with leap years also counted)?

Thanks
Tom Kyte
February 01, 2007 - 1:01 pm UTC

eh?

Version 8.1.7.0.0

Star, February 01, 2007 - 5:00 pm UTC

SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for 32-bit Windows: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

SQL>

select * from wh_main_trans_m
WHERE DATE_CODE = to_date('01/01/2007','dd/mm/yyyy') AND
ms_code = '303' AND
rss_code = 'PJ60513' AND
product_code = 'S24841' AND
type_code = 'INY'

DATE       MS_CODE RSS_CODE PRODUCT_CODE TYPE_CODE QTY AMT TAX_AMT DEPTH_QTY PKD_DATE
01/01/2007 134     124      S15271       STRL      0   0   0       86025     30/01/2007 6:05:41 PM


Please check the script; it seems there is some bug. I am not able to get the desired output. I would be glad if you could provide some help.

Regards,
Star
Tom Kyte
February 02, 2007 - 10:14 am UTC

this is very amusing.

One wonders..... What your "desired output" might actually be.

think about it.... I see nothing wrong with the provided output.

Desired output

Star Nirav, February 05, 2007 - 5:53 pm UTC

Hi,

Actually there is no data matching these conditions, so it should report "no rows selected".

Here I am seeing the wrong output when I compare the conditions against the output.

I am using Oracle 8.1.7.0.0...

Regards,
Star Nirav
Tom Kyte
February 05, 2007 - 6:45 pm UTC

actually, if you think you are hitting a bug - please use support.

if you can give us a test case to look at - we'd be glad to.

SQL Query on multiple columns of a table

Smeer, February 06, 2007 - 7:49 am UTC

I have been asked to tune a SQL query generated by an application that dynamically chooses columns of a table T1 for the WHERE clause. The table has nearly 68 columns, one of which is the primary key. There are about 8 million rows in the table, and it is subject to regular DML operations.

My question is whether it is necessary to have an index on all 60 columns of the table, and where I should focus to remove the performance bottleneck and reduce response time.

I want your general guidance on this issue.

Thanks Tom in advance!

Tom Kyte
February 06, 2007 - 11:19 am UTC

there is no way anyone can give you a reasonable answer to this question.

Oracle Error Message

Vinod, February 07, 2007 - 1:22 pm UTC

Tom,

I am creating a view in the SCOTT schema, and by mistake the view script contains the wrong schema name:

create or replace view scote.employees (employee_id) as
(select employee_id from employees);

My question is: why does Oracle give the message
"View created with compilation error"?

Is the view created? If yes, then where?

Tom Kyte
February 07, 2007 - 6:53 pm UTC

the view is created

the view cannot be selected

the view is created where all views are - in the dictionary.

Oracle Error message

Vinod, February 08, 2007 - 2:01 am UTC

Tom,

But this "scote" user is not present in the database at all.

So where can I find this view?


Tom Kyte
February 08, 2007 - 8:21 am UTC

actually, you'll have to help us reproduce this

ops$tkyte%ORA9IR2> create or replace view x.y (x) as select dummy from dual;
create or replace view x.y (x) as select dummy from dual
*
ERROR at line 1:
ORA-01917: user or role 'X' does not exist




Error

A reader, February 08, 2007 - 6:11 am UTC

You should get the following error while creating the view:

ORA-01917: user or role 'SCOTE' does not exist




Nothing

thiagu, February 08, 2007 - 8:58 am UTC

Hi, how are you? Please go and sleep.

SQL

Car Elcaro, February 22, 2007 - 12:19 am UTC

drop table t;

create table t
(
c1 number(2),
c2 varchar2(10),
c3 number(3)
);


insert into t values (1,'value',10);
insert into t values (1,'status',1);
insert into t values (2,'value',20);
insert into t values (2,'status',0);
insert into t values (3,'value',8);
insert into t values (3,'status',1);

select * from t;

C1 C2 C3
---------- ---------- ----------
1 value 10
1 status 1
2 value 20
2 status 0
3 value 8
3 status 1


6 rows selected.

Description
Each data item consists of two records (a value and a status) identified by the same c1. The status record represents the validity of the corresponding value record (the one with the same c1): zero means invalid and one means valid.

Question
I want to calculate the average of all values (c3 where c2 = 'value') that have a valid status (c2 = 'status' and c3 = 1). Please help me write the query.

Output expected

RESULT
----------
9

Thanks Tom.
Tom Kyte
February 22, 2007 - 8:44 am UTC

gosh, I hate this data model. Words cannot tell you how much. And yet, we see it over and over and over and over again.

And always the same results: "why is this so slow and how can I get answers to my questions from it"

flexible: not really
hard to query: absolutely
as fast as a glacier: definitely

ops$tkyte%ORA10GR2> select c1,
  2         max( decode( c2, 'value', c3 ) ) v,
  3             max( decode( c2, 'status', c3 ) ) s
  4    from t
  5   where c2 in ( 'value', 'status' )
  6   group by c1
  7  /

        C1          V          S
---------- ---------- ----------
         1         10          1
         2         20          0
         3          8          1

ops$tkyte%ORA10GR2> select avg(v)
  2    from (
  3  select c1,
  4         max( decode( c2, 'value', c3 ) ) v,
  5             max( decode( c2, 'status', c3 ) ) s
  6    from t
  7   where c2 in ( 'value', 'status' )
  8   group by c1
  9         )
 10   where s = 1
 11  /

    AVG(V)
----------
         9

ops$tkyte%ORA10GR2>

Thanks Tom

A reader, February 22, 2007 - 9:37 am UTC

I know this is a really bad data model, but I need it to handle dynamic specifications that change often (the example above shows two, status and value, out of N possible specifications).

So do you have advice on building a good, well-tuned data model for a system with dynamic specifications?

Thanks Again.

Different approch to get above result

A reader, February 22, 2007 - 11:35 pm UTC

select avg(a.c3) from
(select c1,c2,c3 from t where c2='value')a,
(select c1,c2,c3 from t where c2='status' and c3=1)b
where a.c1=b.c1

Tom, can you please comment on how this approach compares from a performance perspective?
Tom Kyte
February 23, 2007 - 7:43 am UTC

two passes on the table and a join

versus a single pass

I'd prefer the single pass personally.

want to find at where it failed

Sean, February 23, 2007 - 11:05 pm UTC

We have a complex report which runs on the local DB and also accesses data on a remote DB. It fails with ORA-07445. The SR with Oracle support has gone nowhere so far.

I want to find out exactly where it fails. Support told me the 7445 dump file would not show that. I wonder if SQL trace can find out: does the raw trace file show the report's failing point, and what trace level would I need?

TIA

Sean
Tom Kyte
February 26, 2007 - 12:48 pm UTC

the 7445 should show the active SQL statement - are you saying it does not?

Sean, February 24, 2007 - 8:36 am UTC

I forgot the Oracle version: it's 9iR2.

Sean

want to find out at where it failed.

Sean, February 26, 2007 - 2:05 pm UTC

My problem is that the view CMI_PROJECT_MILESTONES_VIEW fails with ORA-07445. It is a very complex view, which calls functions and accesses a remote database's data via a DB link.

The dump file does have statements relating to the view, but I am not able to work out from those statements where in the view the 7445 is failing.

from 7445 dump file:
ORA-07445: exception encountered: core dump [opidsi()+1064] [SIGSEGV] [Address not mapped to object] [0xFE9DB66C] [] []
Current SQL statement for this session:
SELECT * FROM "P2RP"."CMI_PROJECT_MILESTONES_VIEW"
Tom Kyte
February 26, 2007 - 3:31 pm UTC

when you get a 7445, it is crashing in our code - there is nothing for you to "see" further really - you need to get that trace to support, they will diagnose it.

you know exactly where in YOUR code we are failing. When you query that view. Now it is their turn to take that trace file and figure out where in OUR code we are failing.

A tricky bit of SQL

A reader, March 05, 2007 - 9:22 am UTC

Hi Tom,

can you help with some SQL i am having trouble with? Maybe it can't be done in pure SQL but i have to ask before i go down that route.

I have a table of products. I have some products which are basic components that are combined to create other products.

The example below uses the idea of fruit baskets. I sell baskets (type = B) rather than the components themselves (type = C). If an order is made for a basket and then the user wants to change their mind, they can only UPGRADE the basket. An upgrade means the new basket must contain whatever is already in the initial basket.

I am trying to return a list of upgrade baskets. I have written some sql scripts to create this scenario:

drop table product_test;
drop table product_component_test;

CREATE TABLE product_test
(
product_ID NUMBER(9, 0) NOT NULL primary key,
product_desc Varchar2(50) not null,
product_type varchar2(1) constraint chk1 check (product_type IN ('C','B')));


insert into product_test values (1,'GRAPES','C');
insert into product_test values (2,'APPLES','C');
insert into product_test values (3,'ORANGES','C');
insert into product_test values (4,'BANANAS','C');

insert into product_test values (100,'Basket of Grapes and Apples','B');
insert into product_test values (101,'Basket of Oranges','B');
insert into product_test values (102,'Basket of Bananas','B');
insert into product_test values (103,'Basket of Oranges and Bananas','B');
insert into product_test values (104,'Basket of Grapes, Apples and Oranges','B');
insert into product_test values (105,'Basket of Grapes, Apples, Oranges and Bananas','B');

create table product_component_test
(
parent_product_id number(9,0) not null,
child_product_id number(9,0) not null);

insert into product_component_test values (1,100);
insert into product_component_test values (2,100);

insert into product_component_test values (3,101);

insert into product_component_test values (4,102);

insert into product_component_test values (3,103);
insert into product_component_test values (4,103);

insert into product_component_test values (1,104);
insert into product_component_test values (2,104);
insert into product_component_test values (3,104);

insert into product_component_test values (1,105);
insert into product_component_test values (2,105);
insert into product_component_test values (3,105);
insert into product_component_test values (4,105);

commit;
/


So, heres what I am expecting to see for a given input:

100 --> 104, 105
101 --> 103, 104, 105
102 --> 103, 105
103 --> 105
104 --> 105
105 --> none

Any help appreciated!

Cheers

R
Tom Kyte
March 05, 2007 - 2:03 pm UTC

needs a tad more explanation - pretend you were explaining this to your mom, use details. don't expect us to reverse engineer your output and deduce what it is.

To: A reader

Michel Cadot, March 06, 2007 - 3:07 am UTC


I think you want for each basket the list of baskets that contain it.
One (not very efficient) way to do it is (just explaining this sentence in SQL):
SQL> select distinct a.child_product_id c1, b.child_product_id c2
  2  from product_component_test a, product_component_test b
  3  where ( select count(*) from product_component_test c
  4          where c.child_product_id = a.child_product_id )
  5        =
  6        ( select count(*) from product_component_test d
  7          where d.child_product_id = b.child_product_id
  8            and d.parent_product_id in 
  9                  ( select parent_product_id from product_component_test e
 10                    where e.child_product_id = a.child_product_id )
 11        )
 12    and b.child_product_id != a.child_product_id 
 13  order by 1, 2
 14  /
        C1         C2
---------- ----------
       100        104
                  105
       101        103
                  104
                  105
       102        103
                  105
       103        105
       104        105

9 rows selected.

Regards
Michel

A reader, March 06, 2007 - 12:46 pm UTC

Tom,

The tables here are complicated so I do not have insert statements to give you as an example. I also wanted to know if this change is possible with the same sql.

I have this query which outputs the result shown below it.


SELECT substr(tac.account, 1, 6)||' '||
  pp.AID||' '||
  DECODE(tr.tc_id, 1, 'R', 13, 'O', 22, 'C')||' '||
  SUBSTR(ma.ac,4,1)||ma.bd||' '||
  DECODE(ma.isc, 1, '00C', 0, '00A')||' '||
  TRIM(NVL(TO_CHAR( SUM( TRUNC((tr.w1+ tr.w2)/60,2)),'000.00'), '000.00'))
FROM     (SELECT * FROM xyz) pp,  
  (SELECT * FROM abc) tr,
  (SELECT * FROM def) ma,
  TA_AC tac
WHERE    pp.emp_id = tr.emp_id   
AND  tr.acct_id = tac.acct_id
AND      tr.ta_id = ma.ta_id
AND   tr.tc_id in (1, 13, 22)
GROUP BY substr(tac.account, 1, 6),  
     pp.AID, 
   tr.tc_id,
     SUBSTR(ma.ac,4,1),
     ma.bd,
     DECODE(ma.isc, 1, '00C', 0, '00A')


Current result:

330115 61619 R 704 00A 000.00
330115 61619 O 704 00A 000.00
330115 70132 R 704 00A 004.00
330115 70131 R 704 00A 000.00
330115 70131 C 704 00A 004.00
330115 70131 O 704 00A 003.00

I am now trying to sum up the O and C for records that have both O and C (ex: 70131) and display it as O only. As shown in the results

330115 70131 R 704 00A 000.00
330115 70131 C 704 00A 004.00
330115 70131 O 704 00A 003.00

should become

330115 70131 R 704 00A 000.00
330115 70131 O 704 00A 007.00

i.e., C and O should be summed into a single record for all records that share the common fields 330115 70131 704 00A.

Could it be done with the same sql with minor changes? Thanks.






Tom Kyte
March 06, 2007 - 1:09 pm UTC

use decode( column, 'C', 'O', column) instead of just column

turn C into O
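A cut-down demo of the technique with made-up data: the DECODE folds 'C' into 'O' before grouping, so the two rows' sums collapse into a single 'O' record.

```sql
with rows_in as (
  select 'R' typ, 0 hrs from dual union all
  select 'C' typ, 4 hrs from dual union all
  select 'O' typ, 3 hrs from dual
)
select decode(typ, 'C', 'O', typ) typ, sum(hrs) hrs
  from rows_in
 group by decode(typ, 'C', 'O', typ);
-- 'R' stays on its own row; 'C' and 'O' merge into one 'O' row with hrs = 7
```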

SQL Query

Baiju_P, March 06, 2007 - 9:08 pm UTC

Sir,

Please help in this query:

Create table t (seqno number(2), name varchar2(10), price number(7,2));

insert into t values (1,'Ramesh',190);
insert into t values (2,'Ramesh',230);
insert into t values (3,'Baiju',102);
insert into t values (4,'Baiju',384);

The output required is:

1. a query for finding the product of price for Ramesh, i.e. (190*230*...)
2. a query for finding the product of price for Baiju, i.e. (102*384*...)


There can be more than 1,000 rows for a given name.

Rgds

Baiju_P

One solution to the above question from Baiju

Ravi, March 06, 2007 - 11:06 pm UTC

select name,exp(s) from
(
select name,sum(ln(price)) s
from t
group by name
)
/
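One caveat with the EXP(SUM(LN(...))) trick, in case prices could ever be zero or negative (the posted data is all positive): LN is undefined for values <= 0. A guarded sketch that maps zeros out before the aggregate and restores the sign afterwards:

```sql
select name,
       decode(min(abs(sign(price))), 0, 0,                  -- any zero price => product is 0
              exp(sum(ln(abs(decode(price, 0, 1, price))))) -- magnitude of the product
              * decode(mod(count(case when price < 0 then 1 end), 2),
                       1, -1, 1)) product                   -- odd count of negatives => negative
  from t
 group by name;
```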

To Baiju_P: Just for fun

Michel Cadot, March 07, 2007 - 8:17 am UTC


SQL> with
  2    reseq as (
  3      select name, price,
  4             row_number() over (partition by name order by seqno) seqno
  5      from t
  6    ),
  7    compute as (
  8      select name, seqno, prod
  9      from reseq
 10      model
 11        partition by (name)
 12        dimension by (seqno)
 13        measures (price, 1 prod)
 14        rules (prod[ANY] = decode(cv(seqno),1,1,prod[CV()-1]) * price[CV()])
 15    ),
 16    data as (
 17      select name, prod, 
 18             row_number() over (partition by name order by seqno desc) rn
 19      from compute
 20    )
 21  select name, prod
 22  from data
 23  where rn = 1
 24  order by name
 25  /
NAME             PROD
---------- ----------
Baiju           39168
Ramesh          43700

2 rows selected.

Regards
Michel

Assignment-Resouce Query

Jitender, March 22, 2007 - 12:31 am UTC

Thanks for your quick response to my query submitted earlier!

Since you asked for the create and insert statements, here they are:

CREATE TABLE ASGN ( ASGN_NUM NUMBER, RES_ID VARCHAR2(2));

insert into asgn values (1,'AJ');
insert into asgn values (1,'JA');
insert into asgn values (2,'AJ');
insert into asgn values (2,'JA');
insert into asgn values (3,'AJ');
insert into asgn values (3,'JA');
insert into asgn values (4,'AJ');
insert into asgn values (5,'JA');
insert into asgn values (6,'JA');

- 2 is not the minimum number of res_ids; there could be more.
- The min and max used in the initial post are not the actual limits; there could be more res_ids, and they could also be numeric.

I'm trying to get a query that provides results as follows :

Assignment_num Res_id other_id
1 AJ JA
2 AJ JA
3 AJ JA
4 AJ
5 JA
6 JA

- please help by providing the query.

Thanks in advance.

JA


Tom Kyte
March 22, 2007 - 7:42 am UTC

... 2 is not the min # of res_ids, there could be more. ...

ok, what then - make your example HAVE MORE and describe to us in painstaking detail what happens....

JA - Try this one...

jojopogi29, March 22, 2007 - 10:13 am UTC

CREATE OR REPLACE PROCEDURE PROCEDURE2 AS
BEGIN
FOR x IN (SELECT distinct asgn_num FROM ASGN ORDER BY asgn_num) LOOP
dbms_output.put(x.asgn_num || ' ' );

FOR y IN (SELECT asgn_num, res_id FROM ASGN) LOOP
IF x.asgn_num = y.asgn_num THEN
dbms_output.put(y.res_id || ' ' );
END IF;
END LOOP;
dbms_output.put_line('');

END LOOP;
END PROCEDURE2;



Let me know if it works ;)
Tom Kyte
March 22, 2007 - 10:27 am UTC

why would you want to do that???

JA - Try this one...

jojopogi29, March 22, 2007 - 10:31 am UTC

Oops, sorry. I think what JA wants to accomplish is what this PL/SQL procedure does. Now, is there a way to do it using just SQL?

create or replace PROCEDURE PROCEDURE2 AS
BEGIN
  FOR x IN (SELECT DISTINCT asgn_num FROM asgn ORDER BY asgn_num) LOOP
    dbms_output.put(x.asgn_num || ' ');

    FOR y IN (SELECT asgn_num, res_id FROM asgn) LOOP
      IF x.asgn_num = y.asgn_num THEN
        dbms_output.put(y.res_id || ' ');
      END IF;
    END LOOP;
    dbms_output.put_line('');

  END LOOP;
END PROCEDURE2;



1 AJ JA
2 AJ JA
3 AJ JA
4 AJ
5 JA
6 JA


;)
Tom Kyte
March 22, 2007 - 10:46 am UTC

you would never want to double loop like that in real life - you would only need a single query to accomplish your procedural code.

Yes, I can do it all in a single query without ANY procedural code, I want to hear from the original poster however what they think should happen when there are more than two - the number of COLUMNS in a query are fixed, they need to tell us what they really want.

their choices are:

a) do they want a fixed number of columns
b) a string with the values concatenated together (limit of 4000 bytes!! sure you could use a clob too but that is getting silly)
c) a collection - an array - of values


Assignment-Resource Query

JA, March 23, 2007 - 12:27 am UTC

Hi Tom,

Thanks for your comments.

If there are more than two resources on one assignment, they should be displayed in a single row. The number of columns need not be fixed; it can vary with however many resources are working on a single assignment. For argument's sake, we can consider five resources working on a single assignment.

Also, we are OK with the string values concatenated together (with spaces between them).

The objective is to produce a report that shows :

Assignment_num Resources

1 JA AJ RK MA NS
2 AJ RK
3 NS MA
4 LR
5 KC
6 SS JS TP RR

However, please let me know if you need any further information.

Thanks a lot for your help.

Jojopogi, thanks for your inputs too.

JA
Tom Kyte
March 23, 2007 - 9:45 am UTC

the number of COLUMNS needs to be fixed - that is a sql requirement.


if you just want a concatenated string, this works:

ops$tkyte%ORA9IR2> select deptno,
  2         max(sys_connect_by_path(ename, ' ' )) scbp
  3    from (select deptno, ename, row_number() over (partition by deptno order by ename) rn
  4            from emp
  5             )
  6  start with rn = 1
  7  connect by prior rn = rn-1 and prior deptno = deptno
  8  group by deptno
  9  order by deptno
 10  /

    DEPTNO SCBP
---------- ----------------------------------------
        10  CLARK KING MILLER
        20  ADAMS FORD JONES SCOTT SMITH
        30  ALLEN BLAKE JAMES MARTIN TURNER WARD


Excellent!!!

jojopogi29, March 23, 2007 - 9:58 am UTC

Excellent!!!

Assignment-Resource Query

JA, March 25, 2007 - 2:02 am UTC

Thanks a million Tom...!!
You're more than a genius...!!

Thanks

A reader, April 10, 2007 - 2:32 pm UTC

Thanks, this is a very helpful reference... maybe even better than the OTN docs!!

How to find table being used in a PLSQL?

sridhar.s, May 06, 2007 - 6:13 am UTC

Dear Tom,

Is there any way to find out whether a table is being used in a stored procedure, function, or PL/SQL package? Whenever there is a structural change to my tables, I need to check whether any PL/SQL changes are required.

Thanks.
Tom Kyte
May 08, 2007 - 10:23 am UTC

that is done automagically for you - if you alter a table, all dependent plsql will be invalidated

user_dependencies, all_dependencies, dba_dependencies may be queried as well.

SQL

K P Ratnaker, May 07, 2007 - 6:18 am UTC

Hi tom,

I have one table:

empid empname sex
----- ------- ---
    1 Joe     M
    2 Nisha   F
    3 Raj     M
    4 Celina  F

I want to count females in one column and males in a second column.
Tom Kyte
May 08, 2007 - 10:47 am UTC

select count( case when sex = 'M' then 1 end ) m,
       count( case when sex = 'F' then 1 end ) f
  from table

Can analytics work here?

A reader, May 08, 2007 - 4:44 pm UTC

I need to group all the sales that an employee makes for each customer into 60-day buckets starting with their first sale. However, once an item is in a bucket, it cannot be grouped into any other bucket. The 60 days cannot overlap, and when sales have more than 60 days between them, the second sale starts a new bucket.

My table definition has been simplified to:
 CREATE TABLE C4_TEST
(
  SESSION_DATE  DATE,
  EMP_ID        VARCHAR2(10),
  CUSTOMER_ID   VARCHAR2(20),
  SESSION_ID    VARCHAR2(20)
)

These are the inserts
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES (  TO_Date( '05/16/2007', 'MM/DD/YYYY'), '11111', '123456789', '1'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES ( TO_Date( '08/01/2007', 'MM/DD/YYYY'), '11111', '123456789', '2'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES ( TO_Date( '09/15/2007', 'MM/DD/YYYY'), '11111', '123456789' , '3'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES ( TO_Date( '09/20/2007', 'MM/DD/YYYY'), '22222', '123456789', '4'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES (TO_Date( '10/18/2007', 'MM/DD/YYYY'), '11111', '123456789', '5'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES ( TO_Date( '10/20/2007', 'MM/DD/YYYY'), '11111', '123456789', '6'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES (  TO_Date( '12/25/2007', 'MM/DD/YYYY'), '11111', '123456789', '7'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID ) 
VALUES ( TO_Date( '09/26/2007', 'MM/DD/YYYY'), '22222', '123456789', '8'); 
INSERT INTO C4_TEST ( SESSION_DATE, EMP_ID, CUSTOMER_ID, SESSION_ID )
VALUES ( TO_Date( '11/01/2007', 'MM/DD/YYYY'), '22222', '123456789', '9');


The output with the bucket should be:
SESSION_DATE EMP_ID CUSTOMER_ID SESSION_ID 60DayBucket
05/16/2007   11111  123456789   1          1
08/01/2007   11111  123456789   2          2
09/15/2007   11111  123456789   3          2
10/18/2007   11111  123456789   5          3
10/20/2007   11111  123456789   6          3
12/25/2007   11111  123456789   7          4
09/20/2007   22222  123456789   4          1
09/26/2007   22222  123456789   8          1
11/01/2007   22222  123456789   9          1

I've tried
select c.*,  per_start,per_end
FROM
(select e.emp_id, 
  e.customer_id,
  min(session_date) per_start, 
  max(session_date) per_end
 from c4_test e,
 (select  emp_id, customer_id, min(session_date) ms 
from c4_test
group by emp_id, customer_id) s
 where e.emp_Id= s.emp_id 
 and e.customer_id=s.customer_id
 group by e.emp_id, e.customer_id, floor((session_date-ms)/59)) d,
c4_test c
where c.emp_Id= d.emp_id
and c.customer_id=d.customer_id
and c.session_date between per_start and per_end
order by c.emp_id, c.customer_id, c.session_date, per_start

SESSION_DATE EMP_ID CUSTOMER_ID SESSION_ID PER_START  PER_END
05/16/2007   11111  123456789   1          05/16/2007 05/16/2007
08/01/2007   11111  123456789   2          08/01/2007 08/01/2007
09/15/2007   11111  123456789   3          09/15/2007 10/20/2007
10/18/2007   11111  123456789   5          09/15/2007 10/20/2007
10/20/2007   11111  123456789   6          09/15/2007 10/20/2007
12/25/2007   11111  123456789   7          12/25/2007 12/25/2007
09/20/2007   22222  123456789   4          09/20/2007 11/01/2007
09/26/2007   22222  123456789   8          09/20/2007 11/01/2007
11/01/2007   22222  123456789   9          09/20/2007 11/01/2007
 

But that groups the 9/15, 10/18 and 10/20 records together, when 9/15 should go in the 8/1 bucket and 10/18 and 10/20 should form the 3rd bucket.
I've tried some queries using lag, but I'm still not getting the distinct groupings right. Since this is a moving window, can analytics even be used, or do I need PL/SQL with a loop to work through each possible 60-day bucket so they don't overlap within emp_id and customer_id?
Tom Kyte
May 11, 2007 - 9:02 am UTC

ops$tkyte%ORA10GR2> select emp_id, customer_id, session_id, session_date,
  2         max_grp || '.' ||
  3         trunc( (session_date-min(session_date) over (partition by emp_id, customer_id, max_grp) )/60 ) bin
  4    from (
  5  select emp_id, customer_id, session_id, session_date,
  6         max(grp) over (partition by emp_id, customer_id order by session_date) max_grp
  7    from (
  8  select emp_id, customer_id, session_id, session_date,
  9         case when session_date-lag(session_date) over (partition by emp_id, customer_id order by session_date) > 60
 10                   or lag(session_date) over (partition by emp_id, customer_id order by session_date) is null
 11                  then row_number() over (partition by emp_id, customer_id order by session_date)
 12                  end grp
 13    from c4_test
 14         )
 15         )
 16   order by emp_id, customer_id, session_date
 17  /

EMP_ID     CUSTOMER_ID          SE SESSION_D BIN
---------- -------------------- -- --------- ---------------------------------------------------------------------------------
11111      123456789            1  16-MAY-07 1.0
11111      123456789            2  01-AUG-07 2.0
11111      123456789            3  15-SEP-07 2.0
11111      123456789            5  18-OCT-07 2.1
11111      123456789            6  20-OCT-07 2.1
11111      123456789            7  25-DEC-07 6.0
22222      123456789            4  20-SEP-07 1.0
22222      123456789            8  26-SEP-07 1.0
22222      123456789            9  01-NOV-07 1.0

9 rows selected.


is one approach.

SQL Statement

Ratnaker, May 09, 2007 - 3:48 am UTC

Hi tom,

Thank you very much for the immediate response to my query. It's very useful for me.

Ratnaker

Opening & Closing Balance

Dynes, May 09, 2007 - 6:46 am UTC

Dear Tom,

Please suggest how I can write a query (in SQL*Plus) that contains an opening and closing balance for each row, similar to a statement of account for a savings bank account.

Regards

Dynes
Tom Kyte
May 11, 2007 - 9:41 am UTC

insufficient data

you don't give a schema
you don't tell us how to find opening or closing balances.

Which SQL to USE

john, May 09, 2007 - 3:02 pm UTC

I want all the records from tableA that are not in tableB.

Here are the two SQL statements. Which one is better?

select count(*)
from tablea a, tableb b
where a.id = b.id(+)
and b.rowid is null

second query:
select count(*)
from tablea a
where a.id not in (select id from tableb)

Thank you
Tom Kyte
May 11, 2007 - 10:27 am UTC

the second one seems to say more, doesn't it. You read it and it is immediately obvious what you are doing.

close_balance

Dynes, May 11, 2007 - 7:49 am UTC

Dear john

suppose a table that contains data like this:

DATE        client amount($) DR/CR
09-MAY-2007 X         100.00 CR
10-MAY-2007 X          25.00 DR
11-MAY-2007 X          35.00 DR

we need a query that reports like this:

DATE        client amount($) DR/CR CLOSE BALANCE
09-MAY-2007 X         100.00 CR    100.00 CR
10-MAY-2007 X          25.00 DR     75.00 CR
11-MAY-2007 X          35.00 DR     40.00 CR

The close_balance field is computed in SQL.

Re: Close_balance

Michel CADOT, May 11, 2007 - 9:28 am UTC


SQL> select * from t order by client, dt;
DT          CLIENT                   AMOUNT DRCR
----------- -------------------- ---------- ----
09-MAY-2007 X                           100 CR
10-MAY-2007 X                            25 DR
11-MAY-2007 X                            35 DR
09-MAY-2007 Y                           100 CR
10-MAY-2007 Y                            25 DR
11-MAY-2007 Y                            50 CR

6 rows selected.

SQL> select client, dt, amount, drcr, 
  2         sum(decode(drcr,'CR',1,-1)*amount) 
  3           over (partition by client order by dt) close
  4  from t
  5  /
CLIENT               DT              AMOUNT DRCR      CLOSE
-------------------- ----------- ---------- ---- ----------
X                    09-MAY-2007        100 CR          100
X                    10-MAY-2007         25 DR           75
X                    11-MAY-2007         35 DR           40
Y                    09-MAY-2007        100 CR          100
Y                    10-MAY-2007         25 DR           75
Y                    11-MAY-2007         50 CR          125

6 rows selected.

Regards
Michel

Is there better way to write such query ?

Parag J Patankar, May 18, 2007 - 1:57 am UTC

Hi,

In a table I am having one column with varchar2(1000) storing data for e.g.

!!:FA:/Field Value A1!!:FB:/Field Value of B1!!:FC:/Field Value of C1!!:FD:/Field Value of D1!!:FE:/Field Value of E1!!
!!:FA:/Field Value of A2:FE:/Field Value of E2!!
!!:FC:/Field Value of C3!!:FD:/Field Value of D3!!:FE:/Field Value of E3!!
!!::FE:/Field Value of E4!!
!!:FA:/Field Value A5!!

My desired output is

FA                 FB                 FC                FD                 FE
Field Value of A1  Field Value of B1  Field Value of C1 Field Value of D1  Field Value of E1
Field Value of A2                                                          Field Value of E2
                                      Field Value of C3 Field value of D3
                                                                           Field Value of E4
Field value of A5


In short, I want to break variable length data into columns starting with !!:<field Name>:/ and end with immediate "!!" character.

I am doing something like:

case when instr(message, '!!:FA:/') > 0 then
         substr(message, instr(message, '!!:FA:/') + 7,
                instr(message, '!!', instr(message, '!!:FA:/') + 7) - (instr(message, '!!:FA:/') + 7))
end FA,

Is there any better way to do this in 9i and 10g ? ( I am using 9.2 database )

thanks & regards
PJP



Tom Kyte
May 18, 2007 - 3:57 pm UTC

looks fine to me.

are there other ways - sure, but this works and builtins are rather speedy.


the better way to do this is to correctly insert the data in the first place.

Update SQL Statement

K P Ratnaker, May 21, 2007 - 1:41 am UTC

Hi,

I designed one table:

Contact

contID Code1
------ -----
1      A
2      A
3      B
4      B

I want to update A to B and B to A in one update statement. Please give a solution.
Tom Kyte
May 21, 2007 - 10:25 am UTC

update t set code1 = decode( code1, 'A', 'B', 'B', 'A' ) where code1 in ( 'A', 'B' );

Update Sql

K P Ratnaker, May 22, 2007 - 1:40 am UTC

hi tom,

this SQL statement is very useful for me

Thank you,

Ratnaker

A Strange Query...

A reader, May 23, 2007 - 8:19 am UTC

Hi Tom,

Consider a table T1 with data.

drop table t1;

create table t1
(c1 number, c2 date, c3 number);

insert into t1
values (1, to_date('01-jan-2007','dd-mon-yyyy'), 1305);

insert into t1
values (2, to_date('12-jan-2007','dd-mon-yyyy'), 1305);

insert into t1
values (3, to_date('25-jan-2007','dd-mon-yyyy'), 1305);

insert into t1
values (4, to_date('05-jan-2007','dd-mon-yyyy'), 1307);

insert into t1
values (5, to_date('18-jan-2007','dd-mon-yyyy'), 1307);

insert into t1
values (6, to_date('14-jan-2007','dd-mon-yyyy'), 1307);


How can we find the c1 for each c3 having the maximum c2?
The desired rows are as under.

c1 c2          c3
-- ----------- ----
3  25-jan-2007 1305
5  18-jan-2007 1307


Regards.

and a strange response too...

A reader, May 24, 2007 - 12:53 am UTC

Hi Tom,

Thank you very much for your response. I read the whole thread that you mentioned in the follow up.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:228182900346230020

But I couldn't get the solution of my query!
Would you please give me a little of your valuable time?

Regards.
Tom Kyte
May 26, 2007 - 11:45 am UTC

really - you could not get there from here with that information. hmmm. That bothers me.

since that page shows three ways to do precisely what you ask.

I just took those queries and basically replaced the column names.... This is a big problem.

I would encourage you to not use any of this sql. Not until after you understand how it works.

ops$tkyte%ORA10GR2> select *
  2    from (select t1.*, row_number() over (partition by c3 order by c2 DESC nulls last) rn
  3            from t1
  4         )
  5   where rn = 1;

        C1 C2                C3         RN
---------- --------- ---------- ----------
         3 25-JAN-07       1305          1
         5 18-JAN-07       1307          1

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select to_number( substr( data, 15 ) ) c1,
  2         to_date( substr( data, 1, 14 ), 'yyyymmddhh24miss' ) c2,
  3             c3
  4    from (
  5  select c3, max( to_char(c2,'yyyymmddhh24miss') || c1 )  data
  6    from t1
  7   group by c3
  8         );

        C1 C2                C3
---------- --------- ----------
         3 25-JAN-07       1305
         5 18-JAN-07       1307

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select max(c1) KEEP (dense_rank first order by c2 desc nulls last) c1,
  2         max(c2),
  3         c3
  4    from t1
  5   group by c3;

        C1 MAX(C2)           C3
---------- --------- ----------
         3 25-JAN-07       1305
         5 18-JAN-07       1307

query

sam, May 24, 2007 - 5:07 pm UTC

Tom:

Can you modify the query in an implicit cursor based on a variable?

For example.

If (v_type = 'Contractor') THEN
  FOR x IN (SELECT * FROM table WHERE flag = 'Y')
  LOOP
  ...
ELSIF (v_type = 'Person') THEN
  FOR x IN (SELECT * FROM table WHERE flag = 'N')
  LOOP
  ...


Can I stick the v_type variable into the implicit cursor query and just have one query that will run properly?

THanks,



Tom Kyte
May 26, 2007 - 11:47 am UTC

where flag = case when v_type = 'Contractor' then 'Y' else 'N' end

or

where flag = decode( v_type, 'Contractor', 'Y', 'N' )


just use a function.

A Reader, May 25, 2007 - 1:52 am UTC

Select *
from table
where (flag='Y' and v_type='Contractor') or
(flag='N' and v_type='Person')

How to get value from LONG datatype using LIKE?

A reader, May 28, 2007 - 5:18 pm UTC

Tom,
I want to get ALL check constraints that are not NOT NULL checks (for example, checks for 'Y' or 'N'). The search_condition column in *_constraints is of LONG datatype; if I say where search_condition not like upper('%NOT NULL'), it returns an error message. I know we can create another table to store this column by inserting to_lob(search_condition) and then use NOT LIKE, but I still want to know: is there any other easy way to directly get the value from this LONG column?

Thanks for your help.
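Pending a reply, one commonly used workaround (a sketch only; the helper name search_cond is my own invention, not tested here) is a small PL/SQL function: a LONG value can be fetched into a PL/SQL VARCHAR2 variable, so the LIKE comparison is done on the function's result instead of on the LONG column directly:

```sql
-- Hypothetical helper: fetches a check constraint's search_condition
-- (a LONG column) into PL/SQL and returns it as VARCHAR2.
create or replace function search_cond( p_cons_name in varchar2 )
return varchar2
as
    l_cond varchar2(4000);
begin
    select search_condition
      into l_cond
      from user_constraints
     where constraint_name = p_cons_name;
    return l_cond;
end;
/

-- Check constraints that are not simple NOT NULL checks:
select constraint_name
  from user_constraints
 where constraint_type = 'C'
   and search_cond( constraint_name ) not like '%IS NOT NULL';
```

Note that a condition longer than the variable would raise an error in this sketch; that is the usual trade-off with LONG workarounds.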

Im delighted...

A reader, May 29, 2007 - 1:33 am UTC

Hi Tom,

Thank you very much for responding again with examples as desired by me.

i do read the page

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:228182900346230020

but really couldn't get much out of it as I'm new to analytic functions. You've provided three possible ways to do it. The first and third solutions use analytics, while I can understand the second one, so I can use that one to solve my problem :) and skip the 1st and 3rd.

The magic lies in analytic functions. Hard to learn and follow, but I'll try my best to get them.

Tom thanks again.

SQL Query

Shubhasree, May 29, 2007 - 9:58 am UTC

Hi Tom,

Can you please suggest a good technique to avoid TRUNC on a date field? When TRUNC is used, the index on the column is not used and the table (one of our biggest) gets full-table scanned!

Like:

select col1, col2
from Tab1
where TRUNC(date_field) = <some date>;


Thanks in advance,
Shubha
Tom Kyte
May 30, 2007 - 10:54 am UTC

where date_field >= trunc(some-date) and date_field < trunc(some-date)+1
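Applied to the poster's query, the rewrite looks roughly like this (a sketch; :the_date is an assumed bind variable). The point is that date_field stays bare on the left side, so an index on it remains usable:

```sql
-- Half-open range covering one whole day; no function wraps date_field.
select col1, col2
  from tab1
 where date_field >= trunc(:the_date)
   and date_field <  trunc(:the_date) + 1;
```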

To Shubha:

Jay, May 29, 2007 - 4:40 pm UTC

Shubha:

Maybe this..

to_char(date_field, 'mm/dd/yyyy') = some_date

thanks!
Tom Kyte
May 30, 2007 - 11:06 am UTC

ouch, that is worse than trunc!!!

SQL Query

Shubhasree, May 30, 2007 - 3:36 am UTC

Hello Jay,

TO_CHAR is again a function like TRUNC which, when applied to an indexed column, suppresses the use of the index.
Is there any other way to write the query so as to avoid suppressing the index on the date field?

Regards,
Shubha

SQL Query

Shubhasree, June 01, 2007 - 2:00 am UTC

Thanks for the tip Tom.

Regards,
Shubha

sum of level 2

abz, June 13, 2007 - 12:06 pm UTC

Consider the emp table.

I want a query to display:
COL1 = the manager's emp_id
COL2 = the sum of the salaries of all the
employees under this manager (also including the manager's
salary).

The query should display only those managers who are at
level 2 from the top down.

Tom Kyte
June 13, 2007 - 2:30 pm UTC

ops$tkyte%ORA10GR2> select empno, rpad('*',2*level,'*')||ename nm, sal
  2    from emp
  3  start with mgr is null
  4  connect by prior empno = mgr
  5  /

     EMPNO NM                          SAL
---------- -------------------- ----------
      7839 **KING                     5000
      7566 ****JONES                  2975
      7788 ******SCOTT                3000
      7876 ********ADAMS              1100
      7902 ******FORD                 3000
      7369 ********SMITH               800
      7698 ****BLAKE                  2850
      7499 ******ALLEN                1600
      7521 ******WARD                 1250
      7654 ******MARTIN               1250
      7844 ******TURNER               1500
      7900 ******JAMES                 950
      7782 ****CLARK                  2450
      7934 ******MILLER               1300

14 rows selected.

ops$tkyte%ORA10GR2> select e1.empno,
  2        (select sum(e2.sal)
  3               from emp e2
  4                  start with e2.mgr = e1.empno
  5            connect by prior e2.empno = e2.mgr)+e1.sal sumsal
  6    from emp e1
  7   where e1.empno in (select e3.mgr from emp e3)
  8     and e1.mgr is not null
  9  /

     EMPNO     SUMSAL
---------- ----------
      7902       3800
      7698       9400
      7566      10875
      7788       4100
      7782       3750

didn't understand

abz, June 15, 2007 - 6:04 am UTC

There can be empnos whose mgr is not null and who are
managers of some employees, but that doesn't
guarantee that they are always at level 2.
I can't understand how you filtered for level 2.

Anyway, what if I say that the query should return
records only for level n, where n could be from 1 to n?


Tom Kyte
June 15, 2007 - 7:51 am UTC

add where level = 2 and connect by level <= 2 to the query then.

and if you do not understand what the query does - please do not use it, study it, research it, learn it, understand it until you yourself could write it.
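Putting those two clauses into the first hierarchical query above gives a sketch like this (untested): the walk stops two levels down from the top, and the WHERE keeps only the level-2 rows:

```sql
select empno, ename, sal
  from emp
 where level = 2
 start with mgr is null
connect by prior empno = mgr and level <= 2
/
```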

SQL Query

Raj, June 21, 2007 - 8:37 am UTC

select BUNDLE_KEY,
COHORT_KEY,
CELL_KEY,
STATE_KEY,
PRODUCT_KEY,
sum(decode(tcf.TREATMENT_COST_TYPE_KEY,1,tcf.COST_AMT)) r_6,
sum(decode(tcf.TREATMENT_COST_TYPE_KEY,2,tcf.COST_AMT)) r_7,
sum(decode(tcf.TREATMENT_COST_TYPE_KEY,3,tcf.COST_AMT)) r_8,
sum(decode(tcf.TREATMENT_COST_TYPE_KEY,4,tcf.COST_AMT)) r_9,
sum(decode(tcf.TREATMENT_COST_TYPE_KEY,5,tcf.COST_AMT)) r_10
from whr2.campaign_participation_fct cpf
join whr2.timing_dim tdim using(timing_key)
join whr2.cohort_dim cdim using(cohort_key)
join whr2.bundle_item_xref bxref using(bundle_key)
join whr2.serialized_item_dim using(si_key)
join vpat.treatment_cost_fct tcf using(message_key)
where DROP_DT BETWEEN EFFECTIVE_START_DATE_KEY + DATE '1989-12-31' -- c_conv_dt
AND EFFECTIVE_END_DATE_KEY + DATE '1989-12-31' -- c_conv_dt
and cohort_key is not null
and BUNDLE_KEY = 2604735--2621841
group by BUNDLE_KEY,
COHORT_KEY,
CELL_KEY,
STATE_KEY,
PRODUCT_KEY


This is what I need to do..

Based on the above query: if, for example, I retrieve 3 rows, then I should divide the columns r_6, r_7, ..., r_10 by 3.

I am trying to achieve this through SQL. Is it feasible in SQL?

SQL Analytic Funtion

RAJ, June 21, 2007 - 10:06 am UTC

I got the solution. I am supposed to use the analytic function, i.e., count(*) over (partition by bundle_key).

Tom Kyte
June 21, 2007 - 11:06 am UTC

well, no, given your text above, it would be count(*) over ()

you did not say "by something", but if it is - you would partition "by that" thing
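So dividing each sum by the number of result rows would look roughly like this (a sketch over a simplified inline view with hypothetical column names, not the poster's full query):

```sql
-- count(*) over () counts every row of the result set without
-- collapsing rows, so each sum can be divided by the row count.
select bundle_key, cohort_key, r_6,
       r_6 / count(*) over () r_6_divided
  from (select bundle_key, cohort_key,
               sum(decode(treatment_cost_type_key, 1, cost_amt)) r_6
          from treatment_cost_fct
         group by bundle_key, cohort_key);
```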

Query

A reader, July 06, 2007 - 11:12 am UTC

Tom,

I have a table with two column:

pa_seq cd --> column name

100 OO --> data
100 BB
100 OO

200 CC
200 BB
200 CC
200 AA

I want the following result

100 OO/BB
200 CC/BB/AA


Tom Kyte
July 06, 2007 - 1:11 pm UTC

see other place you posted the same thing

Top Rows which contribute 50% of the data

HItesh Bajaj, July 08, 2007 - 2:01 pm UTC

Hi Tom,

I need to write a query to show the n orders placed by customers whose sum contributes more than 50% of the total value of the orders for a week.

Table Orders

Order_No Order_Value
1        100
2        200
3        300
4        400
         ------
         1000

Result should be

Order_No Value % of Total Value
4        400   40%
3        300   30%
               ---
               70%

Thanks

Tom Kyte
July 08, 2007 - 2:06 pm UTC

sounds like a neat query

if only we had a create table and some inserts - and inserts of interesting data (like for more than one customer) we could see what it looks like :)

Top most rows contributing to > 50% of the value

Hitesh Bajaj, July 08, 2007 - 2:34 pm UTC

Hi Tom,

Sorry for not posting the create table and insert statements.

Create table orders (Order_No Number, Customer_Id NUMBER,Order_Value NUMBER);

Insert into Orders Values(1,1001,100);
Insert into Orders Values(2,2045,200);
Insert into Orders Values(3,6753,300);
Insert into Orders Values(4,4323,400);

Commit;

Thanks,

To Hitesh Bajaj

Michel Cadot, July 08, 2007 - 3:05 pm UTC


Assuming you want the greatest orders (in your example you didn't say whether it should be 400+200 or 100+200+300...), you can use:
SQL> with
  2    step1 as (
  3      select order_no, order_value,
  4             100*ratio_to_report(order_value) over () percent,
  5             rank() over (order by order_value desc, order_no) rk
  6      from orders
  7    ),
  8    step2 as (
  9      select order_no, order_value, percent,
 10             sum(percent) over (order by rk) cursum 
 11      from step1
 12    )
 13  select order_no, order_value, percent
 14  from step2
 15  where cursum-percent < 50
 16  /
  ORDER_NO ORDER_VALUE    PERCENT
---------- ----------- ----------
         4         400         40
         3         300         30

2 rows selected.

If 2 orders have the same last value (here 300) I choose to take those with the lowest order_no but it's up to you to define which have to be kept.

Regards
Michel

Thanks

hitesh, July 10, 2007 - 10:33 am UTC

Michel, Thanks a lot for your help.


Slight confusion: count(*) over (partition by ...)

Sanji, July 10, 2007 - 4:25 pm UTC

Tom,

The scenario is

create table t1 (invoice number, po_number number) ;
insert into t1 values (60449,356748);
insert into t1 values (60449,356749);
insert into t1 values (31652,487563);
insert into t1 values (31652,487564);
insert into t1 values (31652,487565);
insert into t1 values (31652,487566);
insert into t1 values (31652,487567);

I need to calculate invoices that have multiple po_numbers.

15:18:12 OPEN:SANJI:XFIN@DRLAWSON>select * from t1;

   INVOICE  PO_NUMBER
---------- ----------
     60449     356748
     60449     356749
     31652     487563
     31652     487564
     31652     487565
     31652     487566
     31652     487567

15:18:14 OPEN:SANJI:XFIN@DRLAWSON>select * from
(select invoice, count(po_number) from t1
group by invoice
having count(po_number) > 1)
/

   INVOICE COUNT(PO_NUMBER)
---------- ----------------
     31652                5
     60449                2

Elapsed: 00:00:00.01

15:18:14 OPEN:SANJI:XFIN@DRLAWSON>select *
from (select invoice, count(po_number) over (partition by invoice) cnt
from t1
group by invoice, po_number)
where cnt > 1
/
   INVOICE        CNT
---------- ----------
     31652          5
     31652          5
     31652          5
     31652          5
     31652          5
     60449          2
     60449          2

7 rows selected.

In the above query, if I do not put a "distinct" clause, I get the invoices repeated with their total counts, as against the desired result from the 1st query.

Confusion is: isn't count(*) over (partition by invoice) supposed to group the invoices and their respective counts?

Thanks
Sanji
Tom Kyte
July 10, 2007 - 8:14 pm UTC

analytics do not group, do not aggregate

in fact, they are there specifically to NOT aggregate.
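That is why the second query needs DISTINCT to match the first one's output (a sketch against the t1 table created above):

```sql
-- The analytic count is repeated on every row; DISTINCT collapses
-- the duplicates into one row per invoice.
select distinct invoice, cnt
  from (select invoice,
               count(po_number) over (partition by invoice) cnt
          from t1)
 where cnt > 1;
```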

Sorry for the typo

Sanji, July 10, 2007 - 4:27 pm UTC

Excuse me for the typo, the 2nd query is

select *
from (select invoice, count(po_number) over (partition by invoice) cnt
from t1 )
where cnt > 1
/

Divide by 0 is success????

Hector Gabriel Ulloa Ligarius, July 13, 2007 - 9:58 am UTC

Hi Tom

SQL> select 1/0 from dual;
select 1/0 from dual
                 *
ERROR at line 1:
ORA-01476: divisor is equal to zero


Logic...


  1* select 1 from dual where exists (select 1/0 from dual)
SQL> /

         1
----------
         1


Not logic

Because????

Can oracle divide by zero?

regards
Hector Gabriel Ulloa Ligarius
Santiago of Chile
http://ligarius.wordpress.com
Tom Kyte
July 13, 2007 - 10:41 am UTC

because where exists (select anything from whatever)

is the same as the more correct and proper:

where exists ( select null from whatever)


the optimizer would remove the column list from the select, replacing it with NULL.

Note how in the following - if you select object_name from t where object_id=42, there is an index range scan PLUS table access by index rowid to get the data.

However, plug that into a where exists and poof - table access goes away, we KNOW we don't need to go to the table, that object_name is a red herring in the where exists - that it doesn't belong there (should be select NULL from t where...)


ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> alter table t add constraint t_pk primary key(object_id);

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select object_name from t where object_id = 42;

Execution Plan
----------------------------------------------------------
Plan hash value: 1303508680

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    30 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |    30 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | T_PK |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=42)

ops$tkyte%ORA10GR2> select * from dual where exists (select object_name from t where object_id = 42 );

Execution Plan
----------------------------------------------------------
Plan hash value: 2180734921

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     3   (0)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| T_PK |     1 |    13 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter( EXISTS (SELECT /*+ */ 0 FROM "T" "T" WHERE
              "OBJECT_ID"=42))
   3 - access("OBJECT_ID"=42)

ops$tkyte%ORA10GR2> set autotrace off

Converting columns data into rows

Dulal, August 08, 2007 - 4:21 am UTC

Hi Tom,
How is it possible to convert column values into row values as follows?

I have a table with data as follows:

DEPTNO DNAME COL1 COL2 COL3 COL4 COL5
------ ---------- ---- ---- ---- ---- ----
    10 ACCOUNTING 101  102  103  104  105
    20 RESEARCH   201  202  203  204  205

And I want output as follows:

DEPTNO DNAME COL_VALUE
------ ---------- ---------
    10 ACCOUNTING 101
    10 ACCOUNTING 102
    10 ACCOUNTING 103
    10 ACCOUNTING 104
    10 ACCOUNTING 105
    20 RESEARCH   201
    20 RESEARCH   202
    20 RESEARCH   203
    20 RESEARCH   204
    20 RESEARCH   205

Please help me with a sample.

Best regards.
Tom Kyte
August 14, 2007 - 9:49 am UTC

I would love to help you out with a sample

unfortunately, you gave no create table
no inserts
so I do not really look too hard at these questions since they would require me to do a lot of setup....


in general:
with five_rows as (select level l from dual connect by level <= 5)
select deptno, dname, 
       decode( l, 1, col1, 2, col2, 3, col3, 4, col4, 5, col5 ) col_val
  from your_table, five_rows
/
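The same row-generator idea can be sketched outside Oracle; this is a hypothetical Python/sqlite3 translation of the pattern (SQLite has no CONNECT BY or DECODE, so a recursive CTE and CASE stand in, and the data is made up to match the example):

```python
# Hypothetical sketch: unpivot COL1..COL5 into rows by cross joining the
# table with a 5-row generator and picking the n-th column per generated row.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table dept_cols(deptno int, dname text, "
            "col1 int, col2 int, col3 int, col4 int, col5 int)")
con.executemany("insert into dept_cols values(?,?,?,?,?,?,?)",
                [(10, "ACCOUNTING", 101, 102, 103, 104, 105),
                 (20, "RESEARCH",   201, 202, 203, 204, 205)])

rows = con.execute("""
    with recursive five_rows(l) as (
         select 1 union all select l+1 from five_rows where l < 5)
    select deptno, dname,
           case l when 1 then col1 when 2 then col2 when 3 then col3
                  when 4 then col4 else col5 end as col_value
      from dept_cols, five_rows
     order by deptno, l
""").fetchall()
```

Each source row appears five times, once per generated `l`, exactly as in the `connect by level <= 5` version.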

SQL QUERY

Shyam, August 10, 2007 - 11:55 am UTC

Hi Tom,

Thanks for your help in giving us most effective solutions.

I have a situation as below,

The flow of a case StateIn : 'LICENSE_APP_REQUESTED' and StateOut : 'LICENSE_APP_FWD_TO_QCC' then it is treated as case Received
and the case StateIn : 'LICENSE_APP_FWD_TO_QCC' and StateOut : LICENSE_APP_REJECTED_BY_QCC' then it is treated as case Rejected

I need a count of Received cases in given time frame (01-Jan-2007 thru 31-Jan-2007)
and out of only those received cases how many of them are Rejected in the time frame (01-Jan-2007 thru 31-Jan-2007 + 7 moredays)

I am using the following query and it is giving me the desired result, but it is taking a long time.

I need your help with a more effective query, and also any different way of writing the above query to get the desired result.

Query :
SELECT COUNT(cnt)   -- Received Cases
     , COUNT(rcnt)  -- Rejected Cases
     , TRUNC((COUNT(rcnt)/COUNT(cnt))* 100 ,2) AS "%_Rejects"
  FROM (
        SELECT 1 cnt,
               (SELECT DISTINCT(b.wfoid)  -- rejected cases
                  FROM WORTEXFLEET.TRXACTIVITYLOG_VIEW b
                 WHERE a.wfoid = b.wfoid
                   AND b.ACTIVITYENDTIME BETWEEN TO_DATE('01/01/2007','mm/dd/yyyy')
                                             AND (TO_DATE('01/31/2007','mm/dd/yyyy')+7)
                   AND b.stateid = a.outcomeid
                   AND b.outcomeid = 'LICENSE_APP_REJECTED_BY_QCC') rcnt
          FROM WORTEXFLEET.TRXACTIVITYLOG_VIEW a  -- total cases
         WHERE ACTIVITYENDTIME BETWEEN TO_DATE('01/01/2007','mm/dd/yyyy')
                                   AND TO_DATE('01/31/2007','mm/dd/yyyy')
           AND a.stateid = 'LICENSE_APP_REQUESTED'
           AND a.outcomeid = 'LICENSE_APP_FWD_TO_QCC'
       )



Thanks for all your timely support
- Shyam



to Shyam from Hyderabad, India

Etbin Bras, August 15, 2007 - 6:15 am UTC

Assuming this is some kind of flow (in a specified time frame each RECEIVED (i.e. REQUESTED & FORWARDED) wfoid is either REJECTED or NOT) then I see it like:

select count(received),
       count(rejected),
       trunc(count(rejected)/count(received) * 100,2)
  from (select wfoid,
               sum(case when stateid = 'LICENSE_APP_REQUESTED' and
                             outcomeid = 'LICENSE_APP_FWD_TO_QCC' and
                             activityendtime between to_date('01/01/2007','mm/dd/yyyy')
                                                 and to_date('01/31/2007','mm/dd/yyyy')
                        then 1
                   end) received,
               sum(case when stateid = 'LICENSE_APP_FWD_TO_QCC' and
                             outcomeid = 'LICENSE_APP_REJECTED_BY_QCC'
                        then 1
                   end) rejected
          from wortexfleet.trxactivitylog_view
         where activityendtime between to_date('01/01/2007','mm/dd/yyyy')
                                   and to_date('01/31/2007','mm/dd/yyyy') + 7
         group by wfoid
       )
 where received is not null

not tested, just a suggestion trying not to access the view more than once.

Regards

Etbin

Changing operator with input value

Maverick439, August 17, 2007 - 11:56 am UTC

Tom, is it possible to change the operator <, between and > depending on the input value?
eg: if input value is 1 then
select * from emp
where salary < 2500
elsif input value is 2 then
select * from emp
where salary between 2501 and 3500
else
select * from emp
where salary >3500
end;

using one query. Is this possible [maybe using CASE] and without a dynamic query?
Dynamic Query is my last option.

Thanks,
Tom Kyte
August 22, 2007 - 9:34 am UTC

You'd want to do it like this if you want a single sql statement:

ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create index t_idx on t(object_id);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable p_what number
ops$tkyte%ORA10GR2> @at
ops$tkyte%ORA10GR2> column PLAN_TABLE_OUTPUT format a72 truncate
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select *
  2    from t
  3   where :p_what = 1 and object_id < 2500
  4  union all
  5  select *
  6    from t
  7   where :p_what = 2 and object_id between 2501 and 3500
  8  union all
  9  select *
 10    from t
 11   where :p_what = 3 and object_id > 3500
 12  /

Execution Plan
----------------------------------------------------------
Plan hash value: 208085751

------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%C
------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       | 50152 |  4554K|   263  (
|   1 |  UNION-ALL                    |       |       |       |
|*  2 |   FILTER                      |       |       |       |
|   3 |    TABLE ACCESS BY INDEX ROWID| T     |   883 | 82119 |    24
|*  4 |     INDEX RANGE SCAN          | T_IDX |   883 |       |     3
|*  5 |   FILTER                      |       |       |       |
|   6 |    TABLE ACCESS BY INDEX ROWID| T     |   356 | 33108 |    11
|*  7 |     INDEX RANGE SCAN          | T_IDX |   356 |       |     2
|*  8 |   FILTER                      |       |       |       |
|*  9 |    TABLE ACCESS FULL          | T     | 48913 |  4442K|   228
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_NUMBER(:P_WHAT)=1)
   4 - access("OBJECT_ID"<2500)
   5 - filter(TO_NUMBER(:P_WHAT)=2)
   7 - access("OBJECT_ID">=2501 AND "OBJECT_ID"<=3500)
   8 - filter(TO_NUMBER(:P_WHAT)=3)
   9 - filter("OBJECT_ID">3500)

ops$tkyte%ORA10GR2> set autotrace off



that way, each of the predicates can be optimized properly (sometimes full scan, sometimes index) and the filter steps will cause only 1 bit of the UNION ALL to execute.
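A small sketch of the same pattern outside Oracle (hypothetical Python/sqlite3, with made-up data); the bind-variable predicate on each branch means only one branch of the UNION ALL can ever return rows:

```python
# Hypothetical sketch: one statement, three mutually exclusive branches,
# selected at run time by the :p_what bind variable.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(object_id int)")
con.executemany("insert into t values(?)", [(1000,), (3000,), (9000,)])

sql = """
select object_id from t where :p_what = 1 and object_id < 2500
union all
select object_id from t where :p_what = 2 and object_id between 2501 and 3500
union all
select object_id from t where :p_what = 3 and object_id > 3500
"""
branch1 = con.execute(sql, {"p_what": 1}).fetchall()
branch2 = con.execute(sql, {"p_what": 2}).fetchall()
branch3 = con.execute(sql, {"p_what": 3}).fetchall()
```

In Oracle the FILTER steps in the plan above short-circuit the dead branches; SQLite evaluates the constant predicate per branch to the same effect.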


short of that, you can also use ref cursors in your sql
is
  l_cursor sys_refcursor;
begin
  if (p_what = 1)
  then
    open l_cursor for select * from emp where salary < 2500;
  elsif (p_what = 2)
  then
    open l_cursor for select * from emp where salary between .....
  elsif (p_what = 3)
  then
    open ....
  end if;
  loop
      fetch l_cursor into ....;

SQL QUERY

A reader, August 20, 2007 - 9:35 am UTC

Etbin,

The query you framed is really quick and very helpful to me. I would also like to have the following condition met:

The count of rejected cases during 01/01/2007 and 02/07/2007 (01/31/2007 +7) should be only of those received cases during 01/01/2007 and 01/31/2007.

i.e.
1) Let us suppose there are Cases B,C and D received during 01/01/2007 and 01/31/2007 (Jan 2007) and a case A received in the previous month (Dec 2006)

2) and B got forwarded successfully and A & C got rejected and D is not touched at all during 01/01/2007 and 02/07/2007 (01/31/2007 +7)

3) Then in the Rejected count I should get only C, because it is the only case which got created during Jan 2007 and got rejected and A should not be considered here, because it got created in Dec 2006.

Could you please help me in reframing the query to get the result in the above manner.

Thanks
- Shyam

to Shyam from Hyderabad, India

Etbin Bras, August 20, 2007 - 8:25 pm UTC

Shyam, I think the query already correctly handles Your example (count is now replaced by sum and aliased plus some indentation was changed)

select sum(received) received,
       sum(rejected) rejected,
       trunc(sum(rejected)/sum(received) * 100,2) percentage
  from (select wfoid,
               sum(case when stateid = 'LICENSE_APP_REQUESTED' and
                             outcomeid = 'LICENSE_APP_FWD_TO_QCC' and
                             activityendtime between to_date('01/01/2007','mm/dd/yyyy') 
                                                 and to_date('01/31/2007','mm/dd/yyyy')
                        then 1
                   end) received,
               sum(case when stateid = 'LICENSE_APP_FWD_TO_QCC' and
                             outcomeid = 'LICENSE_APP_REJECTED_BY_QCC'
                        then 1
                   end) rejected
          from wortexfleet.trxactivitylog_view
         where activityendtime between to_date('01/01/2007','mm/dd/yyyy') 
                                   and to_date('01/31/2007','mm/dd/yyyy') + 7
        group by wfoid
       )
 where received is not null

Being on vacation I can only provide "paper & pencil" evidence

Your data (according to Your example) should include some records like below:
================================================================================
wfoid  | stateid                | outcomeid                   | activityendtime
================================================================================
case A | LICENSE_APP_REQUESTED  | LICENSE_APP_FWD_TO_QCC      | 12/10/2006
case B | LICENSE_APP_REQUESTED  | LICENSE_APP_FWD_TO_QCC      | 01/05/2007
case C | LICENSE_APP_REQUESTED  | LICENSE_APP_FWD_TO_QCC      | 01/06/2007
case D | LICENSE_APP_REQUESTED  | LICENSE_APP_FWD_TO_QCC      | 01/07/2007
case A | LICENSE_APP_FWD_TO_QCC | LICENSE_APP_REJECTED_BY_QCC | 01/20/2007
case C | LICENSE_APP_FWD_TO_QCC | LICENSE_APP_REJECTED_BY_QCC | 02/05/2007
================================================================================


the inner query should produce:
=============================
wfoid  | received | rejected
=============================
case A |     null |        1
case B |        1 |     null
case C |        1 |        1
case D |        1 |     null 
=============================

the result should look like:
=================================
received | rejected | percentage
=================================
       3 |        1 |      33.33 
=================================

anyway, You can replace the wortexfleet.trxactivitylog_view in the inner query with

(
select 'case A' wfoid, 'LICENSE_APP_REQUESTED' stateid, 'LICENSE_APP_FWD_TO_QCC' outcomeid, 
       to_date('12/10/2006','MM/DD/YYYY') activityendtime 
  from dual
union all
select 'case B' wfoid, 'LICENSE_APP_REQUESTED' stateid, 'LICENSE_APP_FWD_TO_QCC' outcomeid, 
       to_date('01/05/2007','MM/DD/YYYY') activityendtime 
  from dual
union all
select 'case C' wfoid, 'LICENSE_APP_REQUESTED' stateid, 'LICENSE_APP_FWD_TO_QCC' outcomeid, 
       to_date('01/06/2007','MM/DD/YYYY') activityendtime 
  from dual
union all
select 'case D' wfoid, 'LICENSE_APP_REQUESTED' stateid, 'LICENSE_APP_FWD_TO_QCC' outcomeid, 
       to_date('01/07/2007','MM/DD/YYYY') activityendtime 
  from dual
union all
select 'case A' wfoid, 'LICENSE_APP_FWD_TO_QCC' stateid, 'LICENSE_APP_REJECTED_BY_QCC' outcomeid, 
       to_date('01/20/2007','MM/DD/YYYY') activityendtime 
  from dual
union all
select 'case C' wfoid, 'LICENSE_APP_FWD_TO_QCC' stateid, 'LICENSE_APP_REJECTED_BY_QCC' outcomeid, 
       to_date('02/05/2007','MM/DD/YYYY') activityendtime 
  from dual
)

and see yourself what comes out

regards

Etbin

p.s. Maybe I shouldn't have interfered, but it was stronger than me. It immediately occurred to me it could be done in a single pass. You provided no create table, no inserts, no create index (by the way: having an index on wfoid should greatly speed your join oriented version, but unfortunately the name suggests it's a view You are getting data from) ... very small chance Tom would have considered it.
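The paper-and-pencil example above can also be checked mechanically; here is a hypothetical Python/sqlite3 rendition of Etbin's inner/outer query over the six sample rows (ISO date strings and a shortened table name stand in for the Oracle DATEs and view):

```python
# Hypothetical sketch: single-pass conditional aggregation per wfoid,
# then filter to cases actually received in January 2007.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table log(wfoid text, stateid text, outcomeid text, endtime text)")
con.executemany("insert into log values(?,?,?,?)",
    [("case A","LICENSE_APP_REQUESTED","LICENSE_APP_FWD_TO_QCC","2006-12-10"),
     ("case B","LICENSE_APP_REQUESTED","LICENSE_APP_FWD_TO_QCC","2007-01-05"),
     ("case C","LICENSE_APP_REQUESTED","LICENSE_APP_FWD_TO_QCC","2007-01-06"),
     ("case D","LICENSE_APP_REQUESTED","LICENSE_APP_FWD_TO_QCC","2007-01-07"),
     ("case A","LICENSE_APP_FWD_TO_QCC","LICENSE_APP_REJECTED_BY_QCC","2007-01-20"),
     ("case C","LICENSE_APP_FWD_TO_QCC","LICENSE_APP_REJECTED_BY_QCC","2007-02-05")])

received, rejected = con.execute("""
    select sum(received), sum(rejected)
      from (select wfoid,
                   sum(case when stateid = 'LICENSE_APP_REQUESTED'
                             and outcomeid = 'LICENSE_APP_FWD_TO_QCC'
                             and endtime between '2007-01-01' and '2007-01-31'
                            then 1 end) received,
                   sum(case when stateid = 'LICENSE_APP_FWD_TO_QCC'
                             and outcomeid = 'LICENSE_APP_REJECTED_BY_QCC'
                            then 1 end) rejected
              from log
             where endtime between '2007-01-01' and '2007-02-07'
             group by wfoid)
     where received is not null
""").fetchone()
```

Case A's rejection drops out because its request row falls outside the January window, matching the hand-worked 3 received / 1 rejected result.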

Shyam, August 22, 2007 - 4:07 am UTC

Etbin,
The query helped me and it is kind of one that I am looking for, thanks a lot.
Even the response time is also very good.

Thanks and Regards
- Shyam

create multiple records based on date range

chandra, August 23, 2007 - 5:40 am UTC

Tom,

I have a table that stores the member information.

create table mbr(mbr_no number, start_date date, end_date date);

insert into mbr values(1, '01/10/2006', null);
insert into mbr values(2, '07/10/2006', '12/31/2006');
insert into mbr values(3, '01/10/2005', null);
insert into mbr values(4, '01/10/2006', '10/15/2006');
insert into mbr values(4, '01/10/2007', null);

mbr  start_date  end_date
---  ----------  ----------
1    01/10/2006
2    07/10/2006  12/31/2006
3    01/10/2005
4    01/10/2005  10/15/2006
4    01/10/2007


In the above data, member 1 has his benefits started in 2006 and as of today they are current [end date is null]. I need to generate two records for member 1. Similarly for member 3, whose benefits have been active since 2005. I have to break the data up based on the years. For member 2 the benefits are only for 2006, so just one record. But for member 4 the benefits are between 2005 and 2006, so 2 records [2005, 2006], and there are also benefits for 2007, so 1 record for 2007.


mbr  start_date  end_date    ben_start_dt  ben_end_dt
---  ----------  ----------  ------------  ----------
1    01/10/2006              01/01/2006    12/31/2006
1    01/10/2006              01/01/2007    12/31/2007

2    07/10/2006  12/31/2006  01/01/2006    12/31/2006

3    01/10/2005              01/01/2005    12/31/2005
3    01/10/2005              01/01/2006    12/31/2006
3    01/10/2005              01/01/2007    12/31/2007

4    01/10/2005  10/15/2006  01/01/2005    12/31/2005
4    01/10/2005  10/15/2006  01/01/2006    12/31/2006
4    01/10/2007              01/01/2007    12/31/2007

Can you please suggest how to achieve this. Is it possible with a SQL query or do we need to use PL/SQL?

Also if possible can we restrict the data to 2 years that is 2007 and 2006. records for 2005 are not needed.

Thanks in advance.
Tom Kyte
August 23, 2007 - 11:46 am UTC

database version?

and ben_start_dt/ben_end_dt seem "constant" - always jan-1 and dec-31; are they just "the year" (eg: one column and 2005, 2006, 2007 would be the values)?

chandra, August 23, 2007 - 5:42 am UTC

Tom,

Sorry. Forgot to provide oracle version for the above question.
It is 817.
Tom Kyte
August 23, 2007 - 12:09 pm UTC

well, we can start with this - that makes some assumptions (about multiple rows in the mbr table for a given year - eg: there is just one row per year)

ops$tkyte%ORA10GR2> select * from mbr order by 1, 2;

    MBR_NO START_DATE END_DATE
---------- ---------- ----------
         1 01/10/2006

         2 07/10/2006 12/31/2006

         3 01/10/2005

         4 01/10/2005 10/15/2006
           01/10/2007


ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select *
  2    from (
  3  select mbr_no,
  4         (select mbr.start_date || ' - ' || mbr.end_date
  5            from mbr
  6           where mbr.mbr_no = mbrs_years.mbr_no
  7             and mbrs_years.dt between trunc( mbr.start_date, 'y' )
  8                                    and nvl(mbr.end_date,sysdate) ) start_end_date,
  9         dt
 10    from ( select *
 11             from ( select distinct mbr_no
 12                      from mbr ) members,
 13                  (select add_months( to_date(:start_year,'mm/dd/yyyy'), (rownum-1)*12 ) dt
 14                     from all_objects
 15                    where rownum <= :nyears)
 16         ) mbrs_years
 17         )
 18   where start_end_date is not null
 19   order by mbr_no, dt
 20  /

    MBR_NO START_END_DATE          DT
---------- ----------------------- ----------
         1 01/10/2006 -            01/01/2006
           01/10/2006 -            01/01/2007

         2 07/10/2006 - 12/31/2006 01/01/2006

         3 01/10/2005 -            01/01/2005
           01/10/2005 -            01/01/2006
           01/10/2005 -            01/01/2007

         4 01/10/2005 - 10/15/2006 01/01/2005
           01/10/2005 - 10/15/2006 01/01/2006
           01/10/2007 -            01/01/2007


9 rows selected.
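The row-generation trick (one row per calendar year, joined back to each member's active range) can be sketched outside Oracle too; this is a hypothetical Python/sqlite3 version using a recursive CTE in place of the rownum/all_objects generator, and a fixed cutoff date standing in for sysdate:

```python
# Hypothetical sketch: generate year-start dates, join each to the members
# whose [trunc(start_date,'y'), nvl(end_date, cutoff)] range covers it.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table mbr(mbr_no int, start_date text, end_date text)")
con.executemany("insert into mbr values(?,?,?)",
    [(1, "2006-01-10", None),
     (2, "2006-07-10", "2006-12-31"),
     (3, "2005-01-10", None),
     (4, "2005-01-10", "2006-10-15"),
     (4, "2007-01-10", None)])

rows = con.execute("""
    with recursive years(y) as (
         select 2005 union all select y+1 from years where y < 2007)
    select m.mbr_no, m.start_date, m.end_date, y || '-01-01' as ben_start
      from mbr m join years
        on y || '-01-01' between strftime('%Y', m.start_date) || '-01-01'
                             and coalesce(m.end_date, '2007-12-31')
     order by m.mbr_no, ben_start
""").fetchall()
```

Nine rows come back, matching the Oracle run: two for member 1, one for member 2, three for member 3, three across member 4's two ranges.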

Unable to get the required output in 817

chandra, August 23, 2007 - 3:37 pm UTC

Tom,

Thanks for the information. I however could not produce the output you generated. The output I am getting is as follows
mbr_no  start_end_date  dt
------  --------------  --------
1       10-JAN-06 -     1/1/2007
3       10-JAN-05 -     1/1/2007
4       10-JAN-07 -     1/1/2007


Here is the query I am using
select *
  from (
select mbr_no,
       (select mbr.start_date || ' - ' || mbr.end_date
          from mbr
         where mbr.mbr_no = mbrs_years.mbr_no
           and mbrs_years.dt between trunc( mbr.start_date, 'y' )
                                 and nvl(mbr.end_date,sysdate) ) start_end_date,
       dt
  from ( select *
           from ( select distinct mbr_no
                    from mbr ) members,
                (select add_months( to_date('01/01/2007','mm/dd/yyyy'), (rownum-1)*12 ) dt
                   from all_objects
                  where rownum <= 2)
       ) mbrs_years
       )
 where start_end_date is not null
 order by mbr_no, dt

Your help is highly appreciated.

Thanks

Tom Kyte
August 24, 2007 - 1:54 pm UTC

I'm not going to verify your query is the same as mine.

so, is your query the same as mine?

bear in mind your sample INSERTS were not what was shown in the example.

POST A FULL EXAMPLE - from start to finish, like I do

Hitesh, August 29, 2007 - 11:43 am UTC

Hi Tom,

I have two tables, Request_data and Impression_data, and I want to find out for a given request_tag and Request_terms what is the MAX no of impressions served. I have written the query, but it is taking too much time on 30 million records for Request and 70 million records for Impression.

I think because of the double pass of both joins (a and b) it is doing the task twice; how can this query be optimized?

Request_data 
scott@ADAM.DOMAIN> select request_tag, request_terms from request_data;

REQUEST_TAG      REQUEST_TERMS
---------------- ------------------------------------------
kpXL43EzvTO/FPPF sony NP-F100
kpXL43EzvTO/FPPG puma black italia
AaQ6rb47ek9+pPp4 Chuck Berry
ejbhIXLXuR1hUa0j john harriss reinventing india stuart
YQO2nkZg8tjFtzvC football tv on+goals+streaming 
t0DuShRWpHXcw7qx mass transfer exam
BHbhwW8JxK0MmGHU Small Stakes Hold 'em: Winning Big With 
05bkLZIiImcsemSm Chuck Berry
Rkk+o7LuIMkCOPvF dell photo 924
bgozMcJdQtf/pR9K "Dating+services+are+a+fun+way+to+meet+new

scott@ADAM.DOMAIN> select request_tag from impression_data;

REQUEST_TAG
----------------
05bkLZIiImcsemSm
05bkLZIiImcsemSm
05bkLZIiImcsemSm
AaQ6rb47ek9+pPp4
AaQ6rb47ek9+pPp4
zz
t0DuShRWpHXcw7qx
t0DuShRWpHXcw7qx
t0DuShRWpHXcw7qx
t0DuShRWpHXcw7qx
t0DuShRWpHXcw7qx
t0DuShRWpHXcw7qx

Select a.Request_tag, Request_terms, a.Cnt
  from (
        select r.request_tag, request_terms, count(*) CNT
          from request_data r, impression_data i
         where r.request_tag = i.request_tag
         group by r.request_tag, request_terms
       ) a
 Where CNT = (Select MAX(CNT)
                from (
                      select r.request_tag, request_terms, count(*) CNT
                        from request_data r, impression_data i
                       where r.request_tag = i.request_tag
                       group by r.request_tag, request_terms
                     ) b
               Where a.Request_terms = b.Request_Terms
             )
/

REQUEST_TAG      REQUEST_TERMS                   CNT
---------------- ------------------------------- ----------
05bkLZIiImcsemSm Chuck Berry                     3
t0DuShRWpHXcw7qx mass transfer exam              6      

Is there any better way to do this, especially when the volumes are quite high.

Appreciate an answer.

Tom Kyte
September 04, 2007 - 4:07 pm UTC

sorry, you'll really need to phrase your question better. Your text description:

I want to find out for a
given request_tag and Request_terms what is the MAX no of impressions served.

does not at all match the query you wrote.

not sure what you are going for here.

query

Sam, September 12, 2007 - 5:02 pm UTC

Tom:

I have a table for books, contracts and vendors.

How do you write the query that gives you all the books with the same contract number but different vendors assigned?

Based on this data i should get two rows for Book 7064.


BKNO  CNTR    VEN
----  ------  ---
7062  J10017  APP
7062  J10017  APP
7063  J40019  ASB
7063  J40019  ASB
7064  J40019  ASB
7064  J40019  BTT
7065  J90019  ASB
Tom Kyte
September 15, 2007 - 7:10 pm UTC

no create
no inserts
NO LOOK

analytics will do this...

scott%ORA9IR2> select deptno, count(distinct ename) over (partition by deptno) from emp;

    DEPTNO COUNT(DISTINCTENAME)OVER(PARTITIONBYDEPTNO)
---------- -------------------------------------------
        10                                           3
        10                                           3
        10                                           3
        20                                           5
        20                                           5
        20                                           5
        20                                           5
        20                                           5
        30                                           6
        30                                           6
        30                                           6
        30                                           6
        30                                           6
        30                                           6

14 rows selected.


you can get a count of distinct vendors assigned to each book, wrap that in an inline view and use a "where cnt > 1"
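For what it's worth, SQLite has no COUNT(DISTINCT ...) OVER (...), so a hedged Python/sqlite3 sketch of the same answer uses the group-by equivalent of the inline-view-plus-"where cnt > 1" idea (sample data taken from the question):

```python
# Hypothetical sketch: find books whose contract has more than one distinct
# vendor, then pull back the matching rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table books(bkno int, cntr text, ven text)")
con.executemany("insert into books values(?,?,?)",
    [(7062, "J10017", "APP"), (7062, "J10017", "APP"),
     (7063, "J40019", "ASB"), (7063, "J40019", "ASB"),
     (7064, "J40019", "ASB"), (7064, "J40019", "BTT"),
     (7065, "J90019", "ASB")])

rows = con.execute("""
    select b.bkno, b.cntr, b.ven
      from books b
      join (select bkno from books
             group by bkno
            having count(distinct ven) > 1) d
        on d.bkno = b.bkno
     order by b.bkno, b.ven
""").fetchall()
```

Only book 7064 comes back, once per vendor, as the question requires.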

Select Query problem....

Maverick, November 14, 2007 - 3:00 pm UTC

Table A has Zip_code

Table B has zip_code

Table C is Zip code Master Table

I want to get all those customers from table a and table B whose Zip_code is not in Master zip code table.

like
Select a.name,b.city
from table a,table b
where a.id=b.id
and a.zip_code not in (select zip_code from table c)
and b.zip_code not in (select zip_code from table c);

Is there any other way to combine those two last criteria into one,so that select zip_code from master table will run only once?


Thanks,
Tom Kyte
November 19, 2007 - 6:32 pm UTC

table c is not a master table, else you would not expect to get anything, the foreign keys would NOT allow a row to exist in A or B.

I don't know why you are joining A and B, your query as written is really:

find all rows in A and B such that no mate exists in C and the ID is in both A and B.

very different from your stated question which would simply be:

select * from a where zip_code not in (select zip_code from c)
union all
select * from b where zip_code not in (select zip_code from c);


or

select *
from (select * from a union all select * from b)
where zip_code not in (select zip_code from c);


Select Query problem..

Maverick, November 21, 2007 - 9:51 am UTC

Tom, thanks for your response on this. Let me explain a little more clearly what I need.

Table A: Company

id integer primary key,
last_name varchar2(20),
first_name varchar2(20),
address1 varchar2(20),
city varchar2(10),
state varchar2(2),
zip_code integer

Table B: CompanyJobs

id integer primary key,
parentId integer references a(id),
job_address varchar2(20),
job_city varcahr2(10),
job_state varchar2(2),
job_zip_code integer

Table C: --just list of all zip codes for each state
state varchar2(2),
zip_code integer

No relation between table A and C or B and C.

Now, I need to find all the jobs for each company whose company zip_code and job zip_code don't exist in Table C.

I cannot use union, as I need to get only the jobs related to each company. So I need to join company and jobs to get the related information, but to test that both zip_codes are not in Table C, how can I go ahead?

I guess your solution doesn't work because I'll be having two zip codes in the columns, not just one zip_code.

Any other solutions?

Hope I'm clear about explaining my problem..Thanks for your help..



Tom Kyte
November 21, 2007 - 2:55 pm UTC

... No relation between table A and C or B and C. ....
of course there is, come on. company jobs has a zip code, a zip code MUST BE valid. And the state is implied by the zip code.

of course there is a relationship and state doesn't really belong where you have it.


read this:

select *
from (select * from a union all select * from b)
where zip_code not in (select zip_code from c);

and the answer should be pretty clear.

JOIN a to b, select from that join and then say "where not in"


Raj, November 21, 2007 - 10:32 am UTC

Tom,

Is there any particular reason you have chose to use a "Not in " instead of Outer join ?

Regards

Raj

Tom Kyte
November 20, 2007 - 1:40 pm UTC

because we wanted stuff that "was not in"

not the results of an outer join????


Same question again

Raj, November 21, 2007 - 11:07 am UTC

Sorry to be a pain (but this time with some more explanation why I am repeating the question again)

SQL> select deptno from dept where deptno not in (select deptno from emp);

    DEPTNO
----------
        40


SQL> select dept.deptno from dept, emp
  2  where dept.deptno = emp.deptno (+)
  3  and emp.deptno is null;

    DEPTNO
----------
        40


If you see, both these queries return the same results, or I don't know whether I am missing something really obvious. The first query, if I understand it correctly, gets me all the deptnos which are not in the emp table.

In the second query I am telling oracle to outer join the emp table with the dept table on deptno and get me the rows where deptno is null in the emp table.

So I am asking the same question again: is there any particular reason you opted for Not In rather than outer join, or "Am I comparing Apples with Oranges?" (Your famous phrase)?

Regards

Raj

Tom Kyte
November 21, 2007 - 3:04 pm UTC

Let me ask you this

so what?

So what if you could write this as an outer join in this case - why would I write it as an outer join.

we wanted "stuff not in"

to me that says "hey, I want to use not in"


not only is:

select deptno from dept where deptno not in (select deptno from emp);

easier to code, but it says something. read it, it speaks to you. It is pretty darn clear what it does.

now, read this:

SQL> select dept.deptno from dept, emp
2 where dept.deptno = emp.deptno (+)
3 and emp.deptno is null;

what does that say - it does not say the same thing.

and in general, they are NOT equivalent.

ops$tkyte%ORA10GR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte%ORA10GR2> create table dept as select * from scott.dept;

Table created.

ops$tkyte%ORA10GR2> insert into emp(empno) values(1);

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select deptno from dept where deptno not in (select deptno from emp);

no rows selected

ops$tkyte%ORA10GR2> select dept.deptno from dept, emp
  2    where dept.deptno = emp.deptno (+)
  3    and emp.deptno is null;

    DEPTNO
----------
        40


I use the construct that reads the best, that most closely says what I mean, that is most intuitive.

And take care to not use things interchangeably that are not technically interchangeable.

NOT IN and the outer join approach are only interchangeable if the NOT IN column is NOT NULL.
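The NULL trap is easy to reproduce anywhere; here is a hypothetical Python/sqlite3 sketch of the same experiment (a made-up emp/dept pair, with a NULL deptno inserted halfway through):

```python
# Hypothetical sketch: NOT IN vs. outer-join anti-join agree until the
# subquery column contains a NULL; then NOT IN returns nothing.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table dept(deptno int)")
con.execute("create table emp(empno int, deptno int)")
con.executemany("insert into dept values(?)", [(10,), (20,), (30,), (40,)])
con.executemany("insert into emp values(?,?)", [(1, 10), (2, 20), (3, 30)])

not_in = con.execute(
    "select deptno from dept where deptno not in (select deptno from emp)").fetchall()
outer = con.execute(
    """select dept.deptno from dept left join emp on dept.deptno = emp.deptno
        where emp.deptno is null""").fetchall()

con.execute("insert into emp(empno) values(99)")   # emp.deptno is NULL here

not_in2 = con.execute(
    "select deptno from dept where deptno not in (select deptno from emp)").fetchall()
outer2 = con.execute(
    """select dept.deptno from dept left join emp on dept.deptno = emp.deptno
        where emp.deptno is null""").fetchall()
```

With the NULL present, `deptno not in (...)` evaluates to UNKNOWN for every row, so the NOT IN query returns no rows while the outer-join version still finds 40.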

Muddling the water ...

S, November 21, 2007 - 3:23 pm UTC

I prefer to do it this way:

(
select zip_code from b
union
select zip_code from a
)
minus
select zip_code from c


This will give me the missing zip codes as well as a null if there are records in a or b where zip_code is null.

S

select query problem..

Maverick, November 21, 2007 - 4:08 pm UTC

Well, when i said no relation I meant no Physical relation like Foreign key. we'll just use table C as a look up table for all zip codes [table C is not used in application anywhere].

Now, coming to your solution:

select * from a union select * from b

How's this possibly correct? I have different columns in both tables and only zip code is common. I need to send all the possible information for each company -> company address and company job details [including address] to the front end [Application]. So, by union'ing like that, what am I achieving?

select a.last_name,a.first_name,a.city,a.zip_code,b.job_address,b.job_city,b.job_state,b.zip_code from a,b
where a.id=b.parentId

Now, this query will give me what I need [joining a and b]. But again, how do I get to the front end application only those companies whose company zip_code and job zip_code are both not valid [not in table c]???

That's the question.

Sorry if I am missing your point and keep repeating the same thing. But I am not able to understand the "Union". Can you expand your query using the column names I've shown above?

Thanks for all your help.

Tom Kyte
November 21, 2007 - 4:36 pm UTC

not knowing your model, I didn't realize these were different zips.

you'll need two "anti-joins" - two not in's. That is all. Each row in your join of A and B needs to be joined, potentially, to two different rows in C

and all of this could be avoided if you used foreign keys - the bad data would never have gotten in there in the first place.

it should be two nice full scans and a hash join of a to b and then two hash anti joins to c.

if any indexes are used, that would be a problem.


so, your original query is perfection. it is correct and efficient - given the question you need to answer.
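A minimal sketch of the two anti-joins (hypothetical Python/sqlite3, with made-up column subsets of Maverick's tables; the names and data are assumptions for illustration):

```python
# Hypothetical sketch: join company to its jobs, then anti-join the joined
# row to the zip lookup twice - once per zip column.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table company(id int primary key, last_name text, zip_code int)")
con.execute("create table companyjobs(id int primary key, parentid int, job_zip_code int)")
con.execute("create table zips(state text, zip_code int)")
con.executemany("insert into zips values(?,?)", [("VA", 22102), ("MD", 20850)])
con.executemany("insert into company values(?,?,?)",
    [(1, "ACME", 22102), (2, "GLOBEX", 99999)])
con.executemany("insert into companyjobs values(?,?,?)",
    [(10, 1, 20850), (11, 2, 88888), (12, 1, 77777)])

rows = con.execute("""
    select a.id, a.zip_code, b.id, b.job_zip_code
      from company a join companyjobs b on b.parentid = a.id
     where a.zip_code     not in (select zip_code from zips)
       and b.job_zip_code not in (select zip_code from zips)
     order by b.id
""").fetchall()
```

Only the GLOBEX job survives: its company zip and job zip are both absent from the lookup, while ACME's valid company zip disqualifies its jobs even where the job zip is bad.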

query

A reader, January 11, 2008 - 9:51 pm UTC

Tom:

I have a query that finds duplicate names

select first_name,last_name,count(*) from EMP
group by first_name,last_name
having count(*) > 1

Some names have many duplicates. I want to extract the duplicates and store them in another table. However, I need to leave one record of the name in the original table.

How do you do that?
Tom Kyte
January 14, 2008 - 3:23 pm UTC

search this site for

delete duplicates
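For reference, one common shape of that technique (keep the lowest rowid per name, move the rest) can be sketched in Python/sqlite3, since SQLite also exposes a rowid; this is a generic illustration, not the specific answer Tom links to:

```python
# Hypothetical sketch: copy the extra copies of each name to a side table,
# then delete them, keeping the row with the minimum rowid per name.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table emp(first_name text, last_name text)")
con.executemany("insert into emp values(?,?)",
    [("JOHN", "SMITH"), ("JOHN", "SMITH"), ("JOHN", "SMITH"), ("JANE", "DOE")])
con.execute("create table emp_dups as select * from emp where 1=0")

con.execute("""insert into emp_dups
               select first_name, last_name from emp
                where rowid not in (select min(rowid) from emp
                                     group by first_name, last_name)""")
con.execute("""delete from emp
                where rowid not in (select min(rowid) from emp
                                     group by first_name, last_name)""")

remaining = con.execute(
    "select first_name, last_name from emp order by first_name").fetchall()
moved = con.execute("select count(*) from emp_dups").fetchone()[0]
```

One JOHN SMITH stays behind; the two surplus copies land in the side table.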


ORA-00923 error on 10g

Smita E, January 18, 2008 - 11:02 am UTC

The below procedure gives ORA-00923 error on 10g on execution whereas it works fine on 9i. Please help.

SELECT * FROM -- PREVIOUS TRANSACTION VALUES
-- If the transaction is Forward processed
(SELECT pv_cvg_id ,
NVL(pa_face_Amt,0) pv_face_Amt1,
NVL(pv_base_face_component,0) pv_base_face_component1,
NVL(pv_addl_face_component,0) pv_addl_face_component1,
pa_orig_date
FROM t_ulpv_policy_value, t_lipa_policy_value_common
WHERE pv_txn_num = vprevtxnnum
AND pv_pol_num = vpolnum
AND pv_pol_num = pa_pol_num
AND pv_value_date = vprevvaldate
AND pv_value_date = pa_value_date
AND pv_txn_num = pa_txn_num
AND pv_cvg_id = pa_cvg_id
AND pv_cvg_id >= vcvgstart
AND pv_cvg_id <= vcvgend
UNION
-- If the transaction is Reversed
SELECT pr_cvg_id pv_cvg_id ,
NVL(pr_face_Amt,0) pv_face_Amt1,
NVL(pr_base_face_component,0) pv_base_face_component1,
NVL(pr_addl_face_component,0) pv_addl_face_component1,
pr_orig_date pa_orig_date
FROM t_ulpr_polchg_rev_txn
WHERE pr_txn_num = vprevtxnnum
AND pr_pol_num = vpolnum
AND pr_value_date = vprevvaldate
AND pr_cvg_id >= vcvgstart
AND pr_cvg_id <= vcvgend ) c full outer join
-- CURRENT TRANSACTION VALUES
-- If the transaction is Forward processed
(SELECT pv_cvg_id ,
NVL(pa_face_Amt,0) pv_face_Amt1,
NVL(pv_base_face_component,0) pv_base_face_component1,
NVL(pv_addl_face_component,0) pv_addl_face_component1,
pa_orig_date
FROM t_ulpv_policy_value, t_lipa_policy_value_common
WHERE pv_txn_num = vcurrtxnnum
AND pv_pol_num = vpolnum
AND pv_pol_num = pa_pol_num
AND pv_value_date = vcurrvaldate
AND pv_value_date = pa_value_date
AND pv_txn_num = pa_txn_num
AND pv_cvg_id = pa_cvg_id
AND pv_cvg_id >= vcvgstart
AND pv_cvg_id <= vcvgend
UNION
-- If the transaction is Reversed
SELECT pr_cvg_id pv_cvg_id ,
NVL(pr_face_Amt,0) pv_face_Amt1,
NVL(pr_base_face_component,0) pv_base_face_component1,
NVL(pr_addl_face_component,0) pv_addl_face_component1,
pr_orig_date pa_orig_date
FROM t_ulpr_polchg_rev_txn
WHERE pr_txn_num = vcurrtxnnum
AND pr_pol_num = vpolnum
AND pr_value_date = vcurrvaldate
AND pr_cvg_id >= vcvgstart
AND pr_cvg_id <= vcvgend ) d on c.pv_cvg_id = d.pv_cvg_id ;

Tom Kyte
January 19, 2008 - 10:36 pm UTC

funny, when I run it I get:

                        FROM  t_ulpv_policy_value, t_lipa_policy_value_common
                                                   *
ERROR at line 44:
ORA-00942: table or view does not exist



you must have some tables I don't I guess....


do you seriously think I could just sit here and compile that in my head??

ORA-00923

Smita E, January 21, 2008 - 5:45 am UTC

Sorry. Here are the create and insert statements:

create table T_ULPR_POLCHG_REV_TXN
(
PR_VALUE_DATE DATE not null,
PR_POL_NUM NUMBER(22) not null,
PR_CVG_ID NUMBER(6) not null,
PR_TXN_NUM NUMBER(22) not null,
PR_BASE_FACE_COMPONENT NUMBER(38,16),
PR_ADDL_FACE_COMPONENT NUMBER(38,16),
PR_FACE_AMT NUMBER(38,16),
PR_ORIG_TXN_NUM NUMBER(22),
PR_ORIG_DATE DATE,
TSTAMP DATE
);

create table T_LIPA_POLICY_VALUE_COMMON
(
PA_VALUE_DATE DATE not null, PA_POL_NUM NUMBER(22) not null,
PA_CVG_ID NUMBER(6) not null, PA_PREM NUMBER(38,16),
PA_VANISH NUMBER(6), PA_TERM NUMBER(6),
PA_DEDUCTIONS NUMBER(38,16), PA_FACE_AMT NUMBER(38,16),
PA_CUM_CVG_FACE NUMBER(38,16), PA_GNTY_PERD NUMBER(38,16),
PA_NSP NUMBER(38,16), PA_USE_EFT VARCHAR2(1),
PA_MIS_CODE1 VARCHAR2(6), PA_MIS_CODE2 VARCHAR2(6),
PA_PAY_YEARS NUMBER(6), PA_MODE_PREM NUMBER(38,16),
PA_PAY_MODE VARCHAR2(1), PA_RATE_CODE VARCHAR2(2),
PA_FLAT_EXTRA NUMBER(38,16), PA_COI_FACTOR NUMBER(38,16),
PA_RATE_YEAR NUMBER(6), PA_CVG_GI_DIFF_AMT NUMBER(38,16),
PA_CVG_AMT NUMBER(38,16), PA_EXS_TERM_PREM_AMT NUMBER(38,16),
PA_SAL_MULT NUMBER(38,16), PA_PAYROLL_DED_TYPE VARCHAR2(1),
TSTAMP DATE, PA_WTHLD_MAR_STATUS NUMBER(6),
PA_WTHLD_ALLOW NUMBER(6), PA_GEN_POL_SWITCH NUMBER(6),
PA_SALARY NUMBER(38,16), PA_TXN_NUM NUMBER(22) not null,
PA_FE_AMT_JOINT NUMBER(38,16), PA_FE_DUR_JOINT NUMBER(6),
PA_SUBSTD_TBL_JOINT VARCHAR2(2), PA_BASIS_POINT_TOTAL NUMBER(38,16),
PA_FIRST_USER_ID NUMBER(22), PA_FIRST_DATE DATE,
PA_LAST_USER_ID NUMBER(22), PA_LAST_DATE DATE,
PA_CVG_ISSUE_AGE NUMBER(6), PA_CVG_ISSUE_AGE_JT NUMBER(6),
PA_CVG_ISSUE_JEA NUMBER(38,16), PA_MIS_CODE3 VARCHAR2(6),
PA_MIS_CODE4 VARCHAR2(6), PA_DEATH_PROC_STATUS NUMBER(6),
PA_PERM_FE_AMT NUMBER(38,16), PA_PERM_FE_DUR NUMBER(6),
PA_7PAY_LOW_FACE NUMBER(38,16), PA_ORIG_DATE DATE,
PA_ORIG_TXN_NUM NUMBER(22), PA_TOT_FACE_AMT_AT_INCREASE NUMBER(38,16) default 0.0
);

create table T_ULPV_POLICY_VALUE
(
PV_VALUE_DATE DATE not null, PV_POL_NUM NUMBER(22) not null,
PV_CVG_ID NUMBER(6) not null, PV_TGT_AMT NUMBER(38,16),
PV_GSP NUMBER(38,16), PV_GLP NUMBER(38,16),
PV_GMP NUMBER(38,16), PV_TAMRA_AMT NUMBER(38,16),
PV_MIN_PREM NUMBER(38,16), PV_TAMRA_DB NUMBER(38,16),
PV_SEC NUMBER(38,16), PV_LIFE_NSP NUMBER(38,16),
PV_ANN_NSP NUMBER(38,16), PV_POLCHG_AV NUMBER(38,16),
PV_QAB_FACTOR NUMBER(38,16), PV_MDBG_PERD NUMBER(38,16),
TSTAMP DATE, PV_NLP_STATUTORY NUMBER(38,16),
PV_NLP_TAX NUMBER(38,16), PV_COMM_TGT_PREM NUMBER(38,16),
PV_INITIAL_FACE NUMBER(38,16), PV_PRIMARY_CLASS NUMBER(6),
PV_JOINT_CLASS NUMBER(6), PV_TXN_NUM NUMBER(22) not null,
PV_BASE_FACE_COMPONENT NUMBER(38,16), PV_ADDL_FACE_COMPONENT NUMBER(38,16),
PV_GCR_MIN_ANNUAL_PREM_AMT NUMBER(38,16), PV_GCR_BP_PREM_AMT NUMBER(38,16),
PV_YEAR_1_7_PAY_PREMIUM NUMBER(38,16), PV_PRE_MAT_CHG_DCV NUMBER(38,16),
PV_7PAY_NSP NUMBER(38,16), PV_7PAY_NSP_FACTOR NUMBER(38,16),
PV_7PAY_PREMIUM_FACTOR NUMBER(38,16));

Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('06/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 76184.08,
17674.01, 0, 24408.35, 2331.4953858, 900000,
0, 0.155422761875678, 6.22524254282205, 0, 0.0030140362860648,
0, TO_DATE('01/14/2008 08:35:45', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 15105, 900000,
0, 0, 0, NULL, 0,
139880.49, 0.155422766666667, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 02:47:28', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, -999, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('06/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 76184.08,
17674.01, 0, 24408.35, 2331.4953858, 900000,
0, 0.155422761875678, 6.22524254282205, 0, 0.0030140362860648,
0, TO_DATE('01/14/2008 08:35:37', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 15101, 900000,
0, 0, 0, NULL, 0,
139880.49, 0.155422766666667, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('06/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 76184.08,
17674.01, 0, 24408.35, 2331.4953858, 900000,
0, 0.155422761875678, 6.22524254282205, 0, 0.0030140362860648,
0, TO_DATE('01/14/2008 08:35:46', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 15102, 900000,
0, 0, 0, NULL, 0,
139880.49, 0.155422766666667, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('07/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 76184.08,
17674.01, 0, 24408.35, 2331.4953858, 900000,
0, 0.155422761875678, 6.22524254282205, 0, 0.0030140362860648,
0, TO_DATE('01/14/2008 08:35:46', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 15104, 900000,
0, 0, 0, NULL, 0,
139880.49, 0.155422766666667, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 04:27:32', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 13702, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 04:27:34', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 13703, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 04:27:35', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 13704, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 04:27:36', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 13705, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 04:39:45', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 13701, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_ulpv_policy_value
(pv_value_date, pv_pol_num, pv_cvg_id, pv_tgt_amt, pv_gsp, pv_glp, pv_gmp, pv_tamra_amt, pv_min_prem, pv_tamra_db, pv_sec, pv_life_nsp, pv_ann_nsp, pv_polchg_av, pv_qab_factor, pv_mdbg_perd, tstamp, pv_nlp_statutory, pv_nlp_tax, pv_comm_tgt_prem, pv_initial_face, pv_primary_class, pv_joint_class, pv_txn_num, pv_base_face_component, pv_addl_face_component, pv_gcr_min_annual_prem_amt, pv_gcr_bp_prem_amt, pv_year_1_7_pay_premium, pv_pre_mat_chg_dcv, pv_7pay_nsp, pv_7pay_nsp_factor, pv_7pay_premium_factor)
Values
(TO_DATE('05/15/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 9283.42, 84720.9,
19673.41, 0, 26905, 2502.8553858, 1000000,
0, 0.155422761875678, 6.22524254282205, 0, 0,
0, TO_DATE('01/14/2008 05:23:57', 'MM/DD/YYYY HH24:MI:SS'), 0, 0, 9283.4209212,
1000000, NULL, NULL, 14232, 1000000,
0, 0, 0, NULL, 0,
155422.76, 0.15542276, 0.0249665391840654);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('06/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 900000, 1000000, 0,
0.155422761875678, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 08:35:45', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 15105, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 08:35:45', 'MM/DD/YYYY HH24:MI:SS'), 2, TO_DATE('01/14/2008 08:35:45', 'MM/DD/YYYY HH24:MI:SS'),
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 900000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0.15542276, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 02:47:28', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, -999, 0, 0, NULL,
0, 6, TO_DATE('01/14/2008 02:47:28', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('06/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 900000, 1000000, 0,
0.155422761875678, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 08:35:37', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 15101, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 08:35:37', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 900000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('06/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 900000, 1000000, 0,
0.155422761875678, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 08:35:46', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 15102, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 08:35:46', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 900000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('07/16/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 900000, 1000000, 0,
0.155422761875678, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 08:35:46', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 15104, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 08:35:46', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 900000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 04:27:32', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 13702, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 04:27:32', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 04:27:34', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 13703, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 04:27:34', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 04:27:35', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 13704, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 04:27:35', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 04:27:36', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 13705, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 04:27:36', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 04:39:45', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 13701, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 04:27:36', 'MM/DD/YYYY HH24:MI:SS'), 2, TO_DATE('01/14/2008 04:39:45', 'MM/DD/YYYY HH24:MI:SS'),
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);
Insert into t_lipa_policy_value_common
(pa_value_date, pa_pol_num, pa_cvg_id, pa_prem, pa_vanish, pa_term, pa_deductions, pa_face_amt, pa_cum_cvg_face, pa_gnty_perd, pa_nsp, pa_use_eft, pa_mis_code1, pa_mis_code2, pa_pay_years, pa_mode_prem, pa_pay_mode, pa_rate_code, pa_flat_extra, pa_coi_factor, pa_rate_year, pa_cvg_gi_diff_amt, pa_cvg_amt, pa_exs_term_prem_amt, pa_sal_mult, pa_payroll_ded_type, tstamp, pa_wthld_mar_status, pa_wthld_allow, pa_gen_pol_switch, pa_salary, pa_txn_num, pa_fe_amt_joint, pa_fe_dur_joint, pa_substd_tbl_joint, pa_basis_point_total, pa_first_user_id, pa_first_date, pa_last_user_id, pa_last_date, pa_cvg_issue_age, pa_cvg_issue_age_jt, pa_cvg_issue_jea, pa_mis_code3, pa_mis_code4, pa_death_proc_status, pa_perm_fe_amt, pa_perm_fe_dur, pa_7pay_low_face, pa_orig_date, pa_orig_txn_num, pa_tot_face_amt_at_increase)
Values
(TO_DATE('05/15/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, 0, 0,
1, 0, 1000000, 1000000, 0,
0.155422761875678, NULL, NULL, NULL, 1,
0, 'A', '00', 0, 0,
0, 0, 0, 0, 0,
NULL, TO_DATE('01/14/2008 05:23:57', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL, -999,
0, 14232, 0, 0, NULL,
0, 2, TO_DATE('01/14/2008 05:23:57', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL,
NULL, NULL, 0, NULL, NULL,
0, 0, 73, 1000000, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
NULL, 1000000);

Insert into t_ulpr_polchg_rev_txn
(pr_value_date, pr_pol_num, pr_cvg_id, pr_txn_num, pr_base_face_component, pr_addl_face_component, pr_face_amt, pr_orig_txn_num, pr_orig_date, tstamp)
Values
(TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 424, 0, -999, 1000000,
0, 1000000, NULL, TO_DATE('11/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('01/14/2008 02:47:27', 'MM/DD/YYYY HH24:MI:SS'));
COMMIT;


And these are the corresponding values for the variables in the select statement in my earlier post:

vpolnum := 424;
vcurrtxnnum := 15101;
vprevtxnnum := -999;
vcurrvaldate := '16-Nov-2008';
vprevvaldate := '16-Jun-2007';
vcvgstart := 0;
vcvgend := 0;
vtxntype := 29;
vsortind := 'D';
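
For reference, the assignments above can be expressed as SQL*Plus bind variables so the query can be run standalone (a sketch only; it assumes the bind names match the :v... placeholders in the query, and it covers just the variables the query itself references):

```sql
-- Sketch: bind-variable setup mirroring the values listed above.
variable vpolnum      number
variable vcurrtxnnum  number
variable vprevtxnnum  number
variable vcurrvaldate varchar2(30)
variable vprevvaldate varchar2(30)
variable vcvgstart    number
variable vcvgend      number

begin
  :vpolnum      := 424;
  :vcurrtxnnum  := 15101;
  :vprevtxnnum  := -999;
  :vcurrvaldate := '16-Nov-2008';
  :vprevvaldate := '16-Jun-2007';
  :vcvgstart    := 0;
  :vcvgend      := 0;
end;
/
```

Note that vtxntype and vsortind are used inside the procedure's cursor, not in the standalone query, so they are not bound here.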
Tom Kyte
January 21, 2008 - 8:24 am UTC

arg, I do not need these large inserts to simply try to EXECUTE YOUR QUERY.

what I need are the table creates, which I now have

AND A SQL STATEMENT I can execute, yours is littered with vxxxxxxxx variables.


I have no problem running your query in 10gr2

ops$tkyte%ORA10GR2> variable vcvgstart varchar2(30)
ops$tkyte%ORA10GR2> variable vcvgend varchar2(30)
ops$tkyte%ORA10GR2> variable vcurrtxnnum number
ops$tkyte%ORA10GR2> variable vcurrvaldate varchar2(30)
ops$tkyte%ORA10GR2> variable vprevtxnnum varchar2(30)
ops$tkyte%ORA10GR2> variable vprevvaldate varchar2(30)
ops$tkyte%ORA10GR2> variable vpolnum varchar2(30)
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>                    SELECT * FROM -- PREVIOUS TRANSACTION VALUES
  2                  -- If the transaction is Forward processed
  3                          (SELECT  pv_cvg_id  ,
  4                                  NVL(pa_face_Amt,0) pv_face_Amt1,
  5                                  NVL(pv_base_face_component,0) pv_base_face_component1,
  6                                  NVL(pv_addl_face_component,0) pv_addl_face_component1,
  7                                  pa_orig_date
  8                          FROM  t_ulpv_policy_value, t_lipa_policy_value_common
  9                          WHERE pv_txn_num = :vprevtxnnum
 10                          AND   pv_pol_num = :vpolnum
 11                          AND   pv_pol_num = pa_pol_num
 12                          AND   pv_value_date = :vprevvaldate
 13                          AND   pv_value_date = pa_value_date
 14                          AND   pv_txn_num = pa_txn_num
 15                          AND   pv_cvg_id = pa_cvg_id
 16                          AND   pv_cvg_id >= :vcvgstart
 17                          AND   pv_cvg_id <= :vcvgend
 18                          UNION
 19                          -- If the transaction is Reversed
 20                          SELECT  pr_cvg_id pv_cvg_id  ,
 21                                  NVL(pr_face_Amt,0) pv_face_Amt1,
 22                                  NVL(pr_base_face_component,0) pv_base_face_component1,
 23                                  NVL(pr_addl_face_component,0) pv_addl_face_component1,
 24                                  pr_orig_date pa_orig_date
 25                          FROM  t_ulpr_polchg_rev_txn
 26                          WHERE pr_txn_num = :vprevtxnnum
 27                          AND   pr_pol_num = :vpolnum
 28                          AND   pr_value_date = :vprevvaldate
 29                          AND   pr_cvg_id >= :vcvgstart
 30                          AND   pr_cvg_id <= :vcvgend ) c full outer join
 31                         -- CURRENT TRANSACTION VALUES
 32                          -- If the transaction is Forward processed
 33                          (SELECT  pv_cvg_id ,
 34                                  NVL(pa_face_Amt,0) pv_face_Amt1,
 35                                  NVL(pv_base_face_component,0) pv_base_face_component1,
 36                                  NVL(pv_addl_face_component,0) pv_addl_face_component1,
 37                                  pa_orig_date
 38                          FROM  t_ulpv_policy_value, t_lipa_policy_value_common
 39                          WHERE pv_txn_num = :vcurrtxnnum
 40                          AND   pv_pol_num = :vpolnum
 41                          AND   pv_pol_num = pa_pol_num
 42                          AND   pv_value_date = :vcurrvaldate
 43                          AND   pv_value_date = pa_value_date
 44                          AND   pv_txn_num = pa_txn_num
 45                          AND   pv_cvg_id = pa_cvg_id
 46                          AND   pv_cvg_id >= :vcvgstart
 47                          AND   pv_cvg_id <= :vcvgend
 48                          UNION
 49                         -- If the transaction is Reversed
 50                          SELECT  pr_cvg_id pv_cvg_id ,
 51                                  NVL(pr_face_Amt,0) pv_face_Amt1,
 52                                  NVL(pr_base_face_component,0) pv_base_face_component1,
 53                                  NVL(pr_addl_face_component,0) pv_addl_face_component1,
 54                                  pr_orig_date pa_orig_date
 55                          FROM  t_ulpr_polchg_rev_txn
 56                          WHERE pr_txn_num = :vcurrtxnnum
 57                          AND   pr_pol_num = :vpolnum
 58                          AND   pr_value_date = :vcurrvaldate
 59                          AND   pr_cvg_id >= :vcvgstart
 60                          AND   pr_cvg_id <= :vcvgend ) d on c.pv_cvg_id  = d.pv_cvg_id ;

no rows selected

ops$tkyte%ORA10GR2> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod
PL/SQL Release 10.2.0.2.0 - Production
CORE    10.2.0.2.0      Production
TNS for Linux: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production



If you want me to look at this, you will provide a 100% standalone, SMALL, yet 100% complete test case to execute and reproduce.

ORA-00923

Smita E, January 22, 2008 - 1:20 am UTC

This is the procedure that I am trying to execute:

create or replace procedure test as
cursor c1 (vprevtxnnum    IN NUMBER,
                                       vprevvaldate   IN DATE,
                                       vcurrtxnnum    IN NUMBER,
                                       vcurrvaldate   IN DATE,
                                       vpolnum        IN NUMBER,
                                       vcvgstart      IN NUMBER,
                                       vcvgend        IN NUMBER,
                                       vtxntype       IN NUMBER,
                                       vsortind       IN VARCHAR2
                                      ) IS
SELECT   NVL(c.pv_cvg_id,d.pv_cvg_id) pa_cvg_id,
                  DECODE(vtxntype, 133, DECODE(NVL(d.pv_base_face_component1,c.pv_base_face_component1),
                                               0,0,
                                               (NVL(d.pv_face_Amt1,0)- NVL(c.pv_face_Amt1,0)))
                         ,0 ) dif_face_feature_BSI ,
                  DECODE(vtxntype, 133, DECODE(NVL(d.pv_addl_face_component1,c.pv_addl_face_component1),
                                               0,0,
                                               (NVL(d.pv_face_Amt1,0)- NVL(c.pv_face_Amt1,0)))
                         ,0 ) dif_face_feature_ASI ,
                  (NVL(d.pv_base_face_component1,0)- NVL(c.pv_base_face_component1,0)) dif_base_face ,
                  (NVL(d.pv_addl_face_component1,0) - NVL(c.pv_addl_face_component1,0)) dif_addl_face,
                  NVL(c.pa_orig_date,d.pa_orig_date ) pa_orig_date
         FROM  (-- PREVIOUS TRANSACTION VALUES
                -- If the transaction is Forward processed
                SELECT  pv_cvg_id  ,
                        NVL(pa_face_Amt,0) pv_face_Amt1,
                        NVL(pv_base_face_component,0) pv_base_face_component1,
                        NVL(pv_addl_face_component,0) pv_addl_face_component1,
                        pa_orig_date
                FROM  t_ulpv_policy_value, t_lipa_policy_value_common
                WHERE pv_txn_num = vprevtxnnum
                AND   pv_pol_num = vpolnum
                AND   pv_pol_num = pa_pol_num
                AND   pv_value_date = vprevvaldate
                AND   pv_value_date = pa_value_date
                AND   pv_txn_num = pa_txn_num
                AND   pv_cvg_id = pa_cvg_id
                AND   pv_cvg_id >= vcvgstart
                AND   pv_cvg_id <= vcvgend
                UNION
                -- If the transaction is Reversed
                SELECT  pr_cvg_id pv_cvg_id  ,
                        NVL(pr_face_Amt,0) pv_face_Amt1,
                        NVL(pr_base_face_component,0) pv_base_face_component1,
                        NVL(pr_addl_face_component,0) pv_addl_face_component1,
                        pr_orig_date pa_orig_date
                FROM  t_ulpr_polchg_rev_txn
                WHERE pr_txn_num = vprevtxnnum
                AND   pr_pol_num = vpolnum
                AND   pr_value_date = vprevvaldate
                AND   pr_cvg_id >= vcvgstart
                AND   pr_cvg_id <= vcvgend   ) c full outer join
               (-- CURRENT TRANSACTION VALUES
                -- If the transaction is Forward processed
                SELECT  pv_cvg_id ,
                        NVL(pa_face_Amt,0) pv_face_Amt1,
                        NVL(pv_base_face_component,0) pv_base_face_component1,
                        NVL(pv_addl_face_component,0) pv_addl_face_component1,
                        pa_orig_date
                FROM  t_ulpv_policy_value, t_lipa_policy_value_common
                WHERE pv_txn_num = vcurrtxnnum
                AND   pv_pol_num = vpolnum
                AND   pv_pol_num = pa_pol_num
                AND   pv_value_date = vcurrvaldate
                AND   pv_value_date = pa_value_date
                AND   pv_txn_num = pa_txn_num
                AND   pv_cvg_id = pa_cvg_id
                AND   pv_cvg_id >= vcvgstart
                AND   pv_cvg_id <= vcvgend
                UNION
               -- If the transaction is Reversed
                SELECT  pr_cvg_id pv_cvg_id ,
                        NVL(pr_face_Amt,0) pv_face_Amt1,
                        NVL(pr_base_face_component,0) pv_base_face_component1,
                        NVL(pr_addl_face_component,0) pv_addl_face_component1,
                        pr_orig_date pa_orig_date
                FROM  t_ulpr_polchg_rev_txn
                WHERE pr_txn_num = vcurrtxnnum
                AND   pr_pol_num = vpolnum
                AND   pr_value_date = vcurrvaldate
                AND   pr_cvg_id >= vcvgstart
                AND   pr_cvg_id <= vcvgend    ) d
         on (c.pv_cvg_id  = d.pv_cvg_id );
   begin
   for i in c1 (-999, '16-Nov-07',15101, '16-Jun-08',424,0,0,29, 'D')

   loop
    dbms_output.put_line('in loop');
   end loop;
   exception when others then
   dbms_output.put_line(SQLERRM||'error');
   end;

And here is the output on SQLPlus:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bi
PL/SQL Release 10.2.0.2.0 - Production
CORE    10.2.0.2.0      Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.2.0 - Productio
NLSRTL Version 10.2.0.2.0 - Production

SQL> begin
  2    -- Call the procedure
  3    test;
  4  end;
  5  /
ORA-00923: FROM keyword not found where expectederror

PL/SQL procedure successfully completed.

SQL> 

Tom Kyte
January 22, 2008 - 7:06 am UTC

arg, when others then NOTHING -

I do not get it
I will never get it
Nothing frustrates me more than seeing that

STOP CODING WHEN OTHERS, unless and until you put either

a) RAISE after it
b) RAISE_APPLICATION_ERROR() after it


I'm not sure where people are being taught error handling, but it sure isn't working.

I cannot reproduce, please utilize support

 87     begin
 88     for i in c1 (-999, '16-Nov-07',15101, '16-Jun-08',424,0,0,29, 'D')
 89     loop
 90      dbms_output.put_line('in loop');
 91     end loop;
 92  end;
 93  /

Procedure created.

ops$tkyte%ORA10GR2> set echo on
ops$tkyte%ORA10GR2> exec test

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod
PL/SQL Release 10.2.0.2.0 - Production
CORE    10.2.0.2.0      Production
TNS for Linux: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production


ORA-00923

Smita E, January 22, 2008 - 8:11 am UTC

Thanks very much for all the help. But I just wanted to inform you that the piece of code I sent is only being used to debug the error I was getting. That is why it only has a dbms_output statement in the exception section.

The actual procedure has an error routine that inserts the exception details into an error log table.

BTW ... I just modified the query to remove the full outer join. Instead I did a union of left outer join and right outer join and its working.
Tom Kyte
January 22, 2008 - 6:21 pm UTC

... sent is just being used to debug the error that I was getting. ...

nope not good enough.

by doing that, do you know what you did? All you did was HIDE THE ACTUAL LINE NUMBER causing the error, making debugging harder, if not impossible. Please don't tell me "it was for debugging", it only makes debugging harder or IMPOSSIBLE (impossible if you ask me)


do not ever do that, you'll never be sorry if you don't, you'll only be sorry if you DO.

The actual routine BETTER HAVE A RAISE OR RAISE_APPLICATION_ERROR() after that when others, if not IT HAS A BIG BUG.
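Tom's rule is not Oracle-specific; here is a hypothetical sketch in Python of the pattern he insists on (record the error if you must, then re-raise so the real failure point is not hidden):

```python
import logging

def risky():
    # stand-in for the cursor open that raises ORA-00923
    raise ValueError("FROM keyword not found where expected")

def caller():
    try:
        risky()
    except Exception:
        # log for the error table if you like, but RE-RAISE so the
        # original traceback (the actual line number) survives --
        # the equivalent of WHEN OTHERS THEN ... RAISE;
        logging.exception("recording error before re-raising")
        raise

try:
    caller()
except ValueError as exc:
    # the caller still sees the real error, with full context
    print("propagated:", exc)
```

Swallowing the exception (the equivalent of a bare dbms_output in WHEN OTHERS) would have left only the message text, with no way back to the failing line.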

ORA-00923

Smita E, January 22, 2008 - 11:37 pm UTC

Got your point. Thanks.

This is the output now with the full outer join:

SQL> set serveroutput on
SQL> begin
  2  test;
  3  end;
  4  /
begin
*
ERROR at line 1:
ORA-00923: FROM keyword not found where expected
ORA-06512: at "DBO.TEST", line 92
ORA-06512: at line 2

Tom Kyte
January 23, 2008 - 7:50 am UTC

it is strange that when I run it in the identical version I do not get that error - it might be optimizer related; as stated, please utilize support for this one.

sql query

vineela, January 22, 2008 - 11:53 pm UTC

I need Oracle pl/sql code to transpose a number of tables into one table. The original table structure has lots of columns and a few rows. I want a few columns and lots of rows. Sample tables are below. Maybe a procedure is best for this, I'm not sure... thanks.

table1 structure and data

year   variable_ID   variable2_ID2   etc
1999   3433          333

New table structure and data

year   variable        value
1999   variable_ID     3433
1999   variable2_ID2   333



Tom Kyte
January 23, 2008 - 7:52 am UTC

no plsql, this is a sql problem.

wish you would have given me a table and inserts to work with, then I could show you how trivial this is.

here is a non-executed "hint", just join to a table that has N rows (where N = number of columns) - that'll turn each row in your table into N rows. Use decode to get the i'th column and output it

with data as (select level id from dual connect by level <= NUM__COLUMNS)
select t.year,
       data.ID,
       decode( data.id, 1, variable1, 2, variable2, ... N, variableN ) variable
  from table t, data
/
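The shape of that hint can be tried end to end outside Oracle as well; below is a small sketch using Python's sqlite3, with two substitutions that are assumptions for portability: CASE stands in for DECODE, and a literal VALUES list stands in for the CONNECT BY row generator.

```python
import sqlite3

# Build a toy version of the wide table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (year INT, variable1 INT, variable2 INT)")
conn.execute("INSERT INTO t VALUES (1999, 3433, 333)")

# Cross join to a 2-row "data" table (N = number of columns to unpivot);
# CASE picks the i'th column, mirroring the DECODE in the hint.
rows = conn.execute("""
    WITH data(id) AS (VALUES (1), (2))
    SELECT t.year,
           data.id,
           CASE data.id WHEN 1 THEN t.variable1
                        WHEN 2 THEN t.variable2 END AS variable
      FROM t, data
     ORDER BY data.id
""").fetchall()

for r in rows:
    print(r)   # each source row becomes N output rows
```

The single (1999, 3433, 333) row comes back as two (year, id, value) rows, which is exactly the transposition asked for.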

 

How to improve this SQL

A reader, January 25, 2008 - 11:01 am UTC

create table metrics
(timestamp DATE NOT NULL,
task_type VARCHAR2(30) NOT NULL,
response_time NUMBER NOT NULL,
hostname VARCHAR2(30) NOT NULL);


insert into metrics values(to_date('20080125_08.11.16','YYYYMMDD_HH24.MI.SS'),'Task1',2.179000,'Host1');
insert into metrics values(to_date('20080125_08.30.16','YYYYMMDD_HH24.MI.SS'),'Task1',2.177000,'Host1');
insert into metrics values(to_date('20080125_09.11.17','YYYYMMDD_HH24.MI.SS'),'Task1',2.217000,'Host1');
insert into metrics values(to_date('20080125_09.40.19','YYYYMMDD_HH24.MI.SS'),'Task1',2.053000,'Host1');
insert into metrics values(to_date('20080125_10.15.19','YYYYMMDD_HH24.MI.SS'),'Task1',2.076000,'Host1');
insert into metrics values(to_date('20080125_10.40.21','YYYYMMDD_HH24.MI.SS'),'Task1',2.177000,'Host1');
insert into metrics values(to_date('20080125_08.05.07','YYYYMMDD_HH24.MI.SS'),'Task2',31.155000,'Host1');
insert into metrics values(to_date('20080125_08.06.03','YYYYMMDD_HH24.MI.SS'),'Task2',26.720000,'Host1');
insert into metrics values(to_date('20080125_09.10.33','YYYYMMDD_HH24.MI.SS'),'Task2',32.475000,'Host1');
insert into metrics values(to_date('20080125_09.11.11','YYYYMMDD_HH24.MI.SS'),'Task2',24.419000,'Host1');
insert into metrics values(to_date('20080125_10.15.51','YYYYMMDD_HH24.MI.SS'),'Task2',46.280000,'Host1');
insert into metrics values(to_date('20080125_10.16.06','YYYYMMDD_HH24.MI.SS'),'Task2',29.662000,'Host1');
insert into metrics values(to_date('20080125_08.15.16','YYYYMMDD_HH24.MI.SS'),'Task1',2.579000,'Host2');
insert into metrics values(to_date('20080125_08.11.16','YYYYMMDD_HH24.MI.SS'),'Task1',2.777000,'Host2');
insert into metrics values(to_date('20080125_09.21.17','YYYYMMDD_HH24.MI.SS'),'Task1',2.317000,'Host2');
insert into metrics values(to_date('20080125_09.35.19','YYYYMMDD_HH24.MI.SS'),'Task1',2.253000,'Host2');
insert into metrics values(to_date('20080125_10.17.19','YYYYMMDD_HH24.MI.SS'),'Task1',2.876000,'Host2');
insert into metrics values(to_date('20080125_10.45.21','YYYYMMDD_HH24.MI.SS'),'Task1',2.977000,'Host2');
insert into metrics values(to_date('20080125_08.05.07','YYYYMMDD_HH24.MI.SS'),'Task2',32.145000,'Host2');
insert into metrics values(to_date('20080125_08.06.03','YYYYMMDD_HH24.MI.SS'),'Task2',23.921000,'Host2');
insert into metrics values(to_date('20080125_09.34.39','YYYYMMDD_HH24.MI.SS'),'Task2',35.475000,'Host2');
insert into metrics values(to_date('20080125_09.51.22','YYYYMMDD_HH24.MI.SS'),'Task2',22.419000,'Host2');
insert into metrics values(to_date('20080125_10.11.51','YYYYMMDD_HH24.MI.SS'),'Task2',45.280000,'Host2');
insert into metrics values(to_date('20080125_10.29.06','YYYYMMDD_HH24.MI.SS'),'Task2',30.662000,'Host2');


I generate this output:


HOUR     HOSTNAME   TASK1AVG   TASK1CNT   TASK2AVG   TASK2CNT
-------- ---------- ---------- ---------- ---------- ----------
9:00 AM  Host1            2.18          2      28.94          2
9:00 AM  Host2            2.68          2      28.03          2
10:00 AM Host1            2.14          2      28.45          2
10:00 AM Host2            2.29          2      28.95          2
11:00 AM Host1            2.13          2      37.97          2
11:00 AM Host2            2.93          2      37.97          2


with this ugly and massive SQL:

col hostname for a10;
set feedback off;
set lines 200;
select '9:00 AM' hour, a.hostname, a.Task1Avg, b.Task1Cnt, c.Task2Avg, d.Task2Cnt
from
(
select hostname, round(avg(response_time),2) as Task1Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task1'
and hostname = 'Host1'
group by hostname
) a,
(
select hostname, count(*) as Task1Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task1'
and hostname = 'Host1'
group by hostname
) b,
(
select hostname, round(avg(response_time),2) as Task2Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task2'
and hostname = 'Host1'
group by hostname
) c,
(
select hostname, count(*) as Task2Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task2'
and hostname = 'Host1'
group by hostname
)d;
set head off;
select '9:00 AM' hour, a.hostname, a.Task1Avg, b.Task1Cnt, c.Task2Avg, d.Task2Cnt
from
(
select hostname, round(avg(response_time),2) as Task1Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task1'
and hostname = 'Host2'
group by hostname
) a,
(
select hostname, count(*) as Task1Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task1'
and hostname = 'Host2'
group by hostname
) b,
(
select hostname, round(avg(response_time),2) as Task2Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task2'
and hostname = 'Host2'
group by hostname
) c,
(
select hostname, count(*) as Task2Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '08:00:00' and '08:59:59'
and task_type = 'Task2'
and hostname = 'Host2'
group by hostname
)d;
select '10:00 AM' hour, a.hostname, a.Task1Avg, b.Task1Cnt, c.Task2Avg, d.Task2Cnt
from
(
select hostname, round(avg(response_time),2) as Task1Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task1'
and hostname = 'Host1'
group by hostname
) a,
(
select hostname, count(*) as Task1Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task1'
and hostname = 'Host1'
group by hostname
) b,
(
select hostname, round(avg(response_time),2) as Task2Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task2'
and hostname = 'Host1'
group by hostname
) c,
(
select hostname, count(*) as Task2Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task2'
and hostname = 'Host1'
group by hostname
)d;
select '10:00 AM' hour, a.hostname, a.Task1Avg, b.Task1Cnt, c.Task2Avg, d.Task2Cnt
from
(
select hostname, round(avg(response_time),2) as Task1Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task1'
and hostname = 'Host2'
group by hostname
) a,
(
select hostname, count(*) as Task1Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task1'
and hostname = 'Host2'
group by hostname
) b,
(
select hostname, round(avg(response_time),2) as Task2Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task2'
and hostname = 'Host2'
group by hostname
) c,
(
select hostname, count(*) as Task2Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '09:00:00' and '09:59:59'
and task_type = 'Task2'
and hostname = 'Host2'
group by hostname
)d;
select '11:00 AM' hour, a.hostname, a.Task1Avg, b.Task1Cnt, c.Task2Avg, d.Task2Cnt
from
(
select hostname, round(avg(response_time),2) as Task1Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task1'
and hostname = 'Host1'
group by hostname
) a,
(
select hostname, count(*) as Task1Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task1'
and hostname = 'Host1'
group by hostname
) b,
(
select hostname, round(avg(response_time),2) as Task2Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task2'
and hostname = 'Host1'
group by hostname
) c,
(
select hostname, count(*) as Task2Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task2'
and hostname = 'Host1'
group by hostname
)d;
select '11:00 AM', a.hostname, a.Task1Avg, b.Task1Cnt, c.Task2Avg, d.Task2Cnt
from
(
select hostname, round(avg(response_time),2) as Task1Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task1'
and hostname = 'Host2'
group by hostname
) a,
(
select hostname, count(*) as Task1Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task1'
and hostname = 'Host2'
group by hostname
) b,
(
select hostname, round(avg(response_time),2) as Task2Avg
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task2'
and hostname = 'Host2'
group by hostname
) c,
(
select hostname, count(*) as Task2Cnt
from metrics
where to_char(timestamp,'HH24:MI:SS') between '10:00:00' and '10:59:59'
and task_type = 'Task2'
and hostname = 'Host2'
group by hostname
)d;



I've been trying to figure out a way to use analytics to improve this SQL, maybe do it all in one SQL.

Any ideas?

Thanks.

Tom Kyte
January 25, 2008 - 11:13 am UTC

no analytics, simple group by is all that is needed here.

ops$tkyte%ORA10GR2> select hostname, trunc(timestamp,'hh') ts, task_type, round(avg(response_time),2) task_avg
  2    from metrics
  3   group by hostname, trunc(timestamp,'hh'), task_type
  4   order by hostname, trunc(timestamp,'hh'), task_type
  5  /

HOSTNAME   TS        TASK_TYPE    TASK_AVG
---------- --------- ---------- ----------
Host1      25-JAN-08 Task1            2.18
Host1      25-JAN-08 Task2           28.94
Host1      25-JAN-08 Task1            2.14
Host1      25-JAN-08 Task2           28.45
Host1      25-JAN-08 Task1            2.13
Host1      25-JAN-08 Task2           37.97
Host2      25-JAN-08 Task1            2.68
Host2      25-JAN-08 Task2           28.03
Host2      25-JAN-08 Task1            2.29
Host2      25-JAN-08 Task2           28.95
Host2      25-JAN-08 Task1            2.93
Host2      25-JAN-08 Task2           37.97

12 rows selected.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select ts, hostname ,
  2         max(decode( task_type, 'Task1', task_avg ) ) task1_avg,
  3         max(decode( task_type, 'Task2', task_avg ) ) task2_avg
  4    from (
  5  select hostname, trunc(timestamp,'hh') ts, task_type, round(avg(response_time),2) task_avg
  6    from metrics
  7   group by trunc(timestamp,'hh'), hostname, task_type
  8         )
  9   group by ts, hostname
 10   order by ts, hostname
 11  /

TS        HOSTNAME    TASK1_AVG  TASK2_AVG
--------- ---------- ---------- ----------
25-JAN-08 Host1            2.18      28.94
25-JAN-08 Host2            2.68      28.03
25-JAN-08 Host1            2.14      28.45
25-JAN-08 Host2            2.29      28.95
25-JAN-08 Host1            2.13      37.97
25-JAN-08 Host2            2.93      37.97

6 rows selected.

Missing 2 columns

A reader, January 25, 2008 - 11:19 am UTC

Tom,

Thanks a lot for the quick reply but your solution is missing two columns: Task1Cnt and Task2Cnt. I also need to show counts along with averages.

Thanks!

Revised query

A reader, January 25, 2008 - 2:19 pm UTC

Here's the revised query with the missing columns:

select ts,
hostname ,
max(decode( task_type, 'Task1', task_avg ) ) task1_avg,
max(decode( task_type, 'Task2', task_avg ) ) task2_avg,
max(decode( task_type, 'Task1', task_cnt) ) task1_cnt,
max(decode( task_type, 'Task2', task_cnt) ) task2_cnt
from (
select hostname,
to_char(timestamp,'hh') ts,
task_type,
round(avg(response_time),2) task_avg,
count(*) task_cnt
from metrics
group by to_char(timestamp,'hh'), hostname, task_type
)
group by ts, hostname
order by ts, hostname



Thanks for pointing out analytics was not necessary. It pays to start off keeping it simple!
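The max(decode(...)) pivot over a pre-aggregated inner query is easy to verify at small scale; here is a sketch using Python's sqlite3 with a subset of the inserts above (CASE replaces Oracle's DECODE and strftime('%H', ...) approximates TRUNC(timestamp,'hh') - both substitutions are assumptions for portability):

```python
import sqlite3

# Toy subset of the metrics table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics
  (timestamp TEXT, task_type TEXT, response_time REAL, hostname TEXT)""")
conn.executemany("INSERT INTO metrics VALUES (?,?,?,?)", [
    ("2008-01-25 08:11:16", "Task1",  2.179, "Host1"),
    ("2008-01-25 08:30:16", "Task1",  2.177, "Host1"),
    ("2008-01-25 08:05:07", "Task2", 31.155, "Host1"),
    ("2008-01-25 08:06:03", "Task2", 26.720, "Host1"),
])

# Inner query: one row per (hour, host, task) with avg and count.
# Outer query: conditional aggregation pivots the task rows to columns.
rows = conn.execute("""
    SELECT ts, hostname,
           MAX(CASE task_type WHEN 'Task1' THEN task_avg END) task1_avg,
           MAX(CASE task_type WHEN 'Task1' THEN task_cnt END) task1_cnt,
           MAX(CASE task_type WHEN 'Task2' THEN task_avg END) task2_avg,
           MAX(CASE task_type WHEN 'Task2' THEN task_cnt END) task2_cnt
      FROM (SELECT hostname, strftime('%H', timestamp) ts, task_type,
                   ROUND(AVG(response_time), 2) task_avg,
                   COUNT(*) task_cnt
              FROM metrics
             GROUP BY hostname, ts, task_type)
     GROUP BY ts, hostname
""").fetchall()
print(rows)
```

With just the four 8:00 hour rows this yields one output row per (hour, host), with Task1/Task2 averages and counts side by side, matching the shape of the revised query.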




A reader, February 09, 2008 - 6:39 pm UTC

Tom,
Which SQL character function should be used to get the datafile name from v$datafile without the path? For example:

/db/db01/test/system01.dbf

should return only system01.dbf

Thanks

Tom Kyte
February 11, 2008 - 10:11 pm UTC

ops$tkyte%ORA10GR2> select name, case when instr(name,'/',-1)>0 then substr(name,instr(name,'/',-1)+1) else name end from v$datafile;

NAME
------------------------------
CASEWHENINSTR(NAME,'/',-1)>0THENSUBSTR(NAME,INSTR(NAME,'/',-1)+1)ELSENAMEEND
-------------------------------------------------------------------------------
/home/ora10gr2/oracle/product/
10.2.0/oradata/ora10gr2/system
01.dbf
system01.dbf
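The INSTR(name,'/',-1) / SUBSTR combination is just a "split on the last separator" operation; for comparison, a quick sketch of the same logic in plain Python (os.path.basename is the library shortcut, assuming '/' separators as in the v$datafile example):

```python
import os

def basename(name: str) -> str:
    # rfind mirrors INSTR(name,'/',-1): position of the last '/';
    # the slice mirrors SUBSTR(name, instr+1); the fallback mirrors
    # the ELSE name branch of the CASE for paths with no separator.
    pos = name.rfind("/")
    return name[pos + 1:] if pos >= 0 else name

print(basename("/db/db01/test/system01.dbf"))      # system01.dbf
print(os.path.basename("/db/db01/test/system01.dbf"))
```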

get list of tables used by select statement

LUAY ZUBAIDY, February 17, 2008 - 4:25 am UTC

Dear,

How can I get the list of tables used by a select statement?
For example, for the query:
select col1,col2
from tab1,tab2
-> how can I get tab1 and tab2?
I need this for a complex query.

Thanks a lot
Tom Kyte
February 17, 2008 - 8:11 am UTC



you can SORT OF get this from v$sql_plan. But a plan might not always access the table (as I'll demonstrate) in which case you'll need to take the index named and turn it into a table...

ops$tkyte%ORA10GR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte%ORA10GR2> create table dept as select * from scott.dept;

Table created.

ops$tkyte%ORA10GR2> create or replace view v as select dname, empno from emp, dept where 1=0;

View created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table emp add constraint emp_pk primary key(empno);

Table altered.

ops$tkyte%ORA10GR2> alter table dept add constraint dept_pk primary key(deptno);

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from emp, dept where ename like '%X%';

no rows selected

ops$tkyte%ORA10GR2> column sql_id format a15 new_val sql_id
ops$tkyte%ORA10GR2> column child_number format a15 new_val child_number
ops$tkyte%ORA10GR2> select ltrim(substr( plan_table_output, 8, instr( plan_table_output, ',')-8)) sql_id,
  2         substr( plan_table_output, instr( plan_table_output, 'child number ' )+13) child_number
  3    from table(dbms_xplan.display_cursor)
  4   where plan_table_output like 'SQL_ID %, child number %'
  5  /

SQL_ID          CHILD_NUMBER
--------------- ---------------
02dyrngfn1v77   0

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select object_owner, object_name, object_type
  2    from v$sql_plan
  3   where sql_id = '&SQL_ID'
  4     and child_number = &child_number
  5     and object_owner is not null
  6  /
old   3:  where sql_id = '&SQL_ID'
new   3:  where sql_id = '02dyrngfn1v77'
old   4:    and child_number = &child_number
new   4:    and child_number = 0

OBJECT_OWNER                   OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------ --------------------
OPS$TKYTE                      EMP                            TABLE
OPS$TKYTE                      DEPT                           TABLE

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(*) from emp;

  COUNT(*)
----------
        14

ops$tkyte%ORA10GR2> column sql_id new_val sql_id
ops$tkyte%ORA10GR2> column child_number new_val child_number
ops$tkyte%ORA10GR2> select ltrim(substr( plan_table_output, 8, instr( plan_table_output, ',')-8)) sql_id,
  2         substr( plan_table_output, instr( plan_table_output, 'child number ' )+13) child_number
  3    from table(dbms_xplan.display_cursor)
  4   where plan_table_output like 'SQL_ID %, child number %'
  5  /

SQL_ID          CHILD_NUMBER
--------------- ---------------
g59vz2u4cu404   0

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select object_owner, object_name, object_type
  2    from v$sql_plan
  3   where sql_id = '&SQL_ID'
  4     and child_number = &child_number
  5     and object_owner is not null
  6  /
old   3:  where sql_id = '&SQL_ID'
new   3:  where sql_id = 'g59vz2u4cu404'
old   4:    and child_number = &child_number
new   4:    and child_number = 0

OBJECT_OWNER                   OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------ --------------------
OPS$TKYTE                      EMP_PK                         INDEX (UNIQUE)

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from v;

no rows selected

ops$tkyte%ORA10GR2> column sql_id new_val sql_id
ops$tkyte%ORA10GR2> column child_number new_val child_number
ops$tkyte%ORA10GR2> select ltrim(substr( plan_table_output, 8, instr( plan_table_output, ',')-8)) sql_id,
  2         substr( plan_table_output, instr( plan_table_output, 'child number ' )+13) child_number
  3    from table(dbms_xplan.display_cursor)
  4   where plan_table_output like 'SQL_ID %, child number %'
  5  /

SQL_ID          CHILD_NUMBER
--------------- ---------------
a8y3fszn78wn5   0

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select object_owner, object_name, object_type
  2    from v$sql_plan
  3   where sql_id = '&SQL_ID'
  4     and child_number = &child_number
  5     and object_owner is not null
  6  /
old   3:  where sql_id = '&SQL_ID'
new   3:  where sql_id = 'a8y3fszn78wn5'
old   4:    and child_number = &child_number
new   4:    and child_number = 0

OBJECT_OWNER                   OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------ --------------------
OPS$TKYTE                      DEPT                           TABLE
OPS$TKYTE                      EMP_PK                         INDEX (UNIQUE)

Grouping Issue

Girish, March 14, 2008 - 7:10 am UTC

Hello Sir,
create table srm08(roll_no varchar2(7),region varchar2(2),stream varchar2(1),reg_pvt varchar2(1),gender varchar2(1));
insert into srm08 values ('22','03','4','1','1');
insert into srm08 values ('26','03','4','2','1');
insert into srm08 values ('23','03','4','1','2');
insert into srm08 values ('29','03','4','2','2');
insert into srm08 values ('22','03','3','1','1');
insert into srm08 values ('26','03','3','2','1');
insert into srm08 values ('23','03','3','1','2');
insert into srm08 values ('29','03','3','2','2');
insert into srm08 values ('22','03','1','1','1');
insert into srm08 values ('26','03','1','2','1');
insert into srm08 values ('23','03','1','1','2');
insert into srm08 values ('29','03','1','2','2');
insert into srm08 values ('22','03','2','1','1');
insert into srm08 values ('26','03','2','2','1');
insert into srm08 values ('23','03','2','1','2');
insert into srm08 values ('29','03','2','2','2');
insert into srm08 values ('22','02','4','1','1');
insert into srm08 values ('26','02','4','2','1');
insert into srm08 values ('23','02','4','1','2');
insert into srm08 values ('29','02','4','2','2');
insert into srm08 values ('22','02','3','1','1');
insert into srm08 values ('26','02','3','2','1');
insert into srm08 values ('23','02','3','1','2');
insert into srm08 values ('29','02','3','2','2');
insert into srm08 values ('22','02','1','1','1');
insert into srm08 values ('26','02','1','2','1');
insert into srm08 values ('23','02','1','1','2');
insert into srm08 values ('29','02','1','2','2');
insert into srm08 values ('22','02','2','1','1');
insert into srm08 values ('26','02','2','2','1');
insert into srm08 values ('23','02','2','1','2');
insert into srm08 values ('29','02','2','2','2');
create table secm08(roll_no varchar2(7),region varchar2(2),stream varchar2(1),reg_pvt varchar2(1),gender varchar2(1));
insert into secm08 values ('22','03','1','1','1');
insert into secm08 values ('26','03','1','2','1');
insert into secm08 values ('23','03','1','1','2');
insert into secm08 values ('29','03','1','2','2');
insert into secm08 values ('22','03','1','1','1');
insert into secm08 values ('26','03','1','2','1');
insert into secm08 values ('29','03','1','2','2');
insert into secm08 values ('22','03','1','1','1');
insert into secm08 values ('26','03','1','2','1');
insert into secm08 values ('23','03','1','1','2');
insert into secm08 values ('29','03','1','2','2');
insert into secm08 values ('22','02','1','1','1');
insert into secm08 values ('26','02','1','2','1');
insert into secm08 values ('23','02','1','1','2');
insert into secm08 values ('29','02','1','2','2');
insert into secm08 values ('22','02','1','1','1');
create table pravm08(roll_no varchar2(7),region varchar2(2),stream varchar2(1),reg_pvt varchar2(1),gender varchar2(1));
insert into pravm08 values ('22','03','2','1','1');
insert into pravm08 values ('26','03','2','2','1');
insert into pravm08 values ('23','03','2','1','2');
insert into pravm08 values ('29','03','2','2','2');
insert into pravm08 values ('22','03','2','1','1');
insert into pravm08 values ('26','03','2','2','1');
insert into pravm08 values ('23','03','2','1','2');
insert into pravm08 values ('29','03','2','2','2');
insert into pravm08 values ('22','02','2','1','1');
insert into pravm08 values ('26','02','2','2','1');
insert into pravm08 values ('23','02','2','1','2');
insert into pravm08 values ('29','02','2','2','2');
insert into pravm08 values ('22','02','2','1','1');
create table updhm08(roll_no varchar2(7),region varchar2(2),stream varchar2(1),reg_pvt varchar2(1),gender varchar2(1));
insert into updhm08 values ('22','03','3','1','1');
insert into updhm08 values ('26','03','3','2','1');
insert into updhm08 values ('22','03','3','1','1');
insert into updhm08 values ('26','03','3','2','1');
insert into updhm08 values ('29','03','3','2','2');
insert into updhm08 values ('22','03','3','1','1');
insert into updhm08 values ('26','03','3','2','1');
insert into updhm08 values ('23','03','3','1','2');
insert into updhm08 values ('29','03','3','2','2');
insert into updhm08 values ('22','02','3','1','1');
insert into updhm08 values ('26','02','3','2','1');
insert into updhm08 values ('23','02','3','1','2');
insert into updhm08 values ('29','02','3','2','2');
insert into updhm08 values ('22','02','3','1','1');
create table region(regcode varchar2(2),regname varchar2(5));
insert into region values ('01','EAST');
insert into region values ('02','WEST');
insert into region values ('03','NORTH');
insert into region values ('04','SOUTH');
COMMIT;

REGBOYS=Reg_Pvt='1' and Gender='1'
REGGIRLS=Reg_Pvt='1' and Gender='2'
TOTREG=Reg_Pvt='1'
PVTBOYS=Reg_Pvt='2' and Gender='1'
PVTGIRLS=Reg_Pvt='2' and Gender='2'
TOTPVT=Reg_Pvt='2'

STREAM:
AGRICULTURE=STREAM 1 IN TABLE SRM08
ARTS=STREAM 2 IN TABLE SRM08
COMMERCE=STREAM 3 IN TABLE SRM08
FINE ARTS=STREAM 4 IN TABLE SRM08
HOME SCI.=STREAM 5 IN TABLE SRM08
SCIENCE=STREAM 6 IN TABLE SRM08
PRAVESHIKA=ONLY 2 WILL BE IN ALL ROWS OF STREAM COLUMN OF PRAVM08 TABLE
SECONDARY=ONLY 1 WILL BE IN ALL ROWS OF STREAM COLUMN OF SECM08 TABLE
V.UPDH=ONLY 3 WILL BE IN ALL ROWS OF STREAM COLUMN OF UPDHM08 TABLE

Required Output:

Candidate Figures for Main Exam - 2008
-------- -- ------------- --------- --------- --------- --------- --------- --------- ---------
REGION RC STREAM REGBOYS REGGIRLS TOTREG PVTBOYS PVTGIRLS TOTPVT TOTAL
-------- -- ------------- --------- --------- --------- --------- --------- --------- ---------
EAST 01 AGRICULTURE 1935 397 2332 81 7 88 2420
ARTS 18004 11710 29714 3645 2275 5920 35634
COMMERCE 8470 2992 11462 562 122 684 12146
FINE ARTS 52 43 95 0 0 0 95
HOME SCI. 1 50 51 0 1 1 52
SCIENCE 9715 2565 12280 476 88 564 12844
--------- --------- --------- --------- --------- --------- ---------
Total 12th Class 38177 17757 55934 4764 2493 7257 63191
PRAVESHIKA 1265 717 1982 66 24 90 2072
SECONDARY 68124 38723 106847 4543 3178 7721 114568
V.UPDH 759 247 1006 92 50 142 1148
--------- --------- --------- --------- --------- --------- ---------
Region Total 108325 57444 165769 9465 5745 15210 180979
--------- --------- --------- --------- --------- --------- ---------
WEST 02 AGRICULTURE 188 23 211 15 0 15 226
ARTS 2311 1227 3538 400 216 616 4154
COMMERCE 214 38 252 10 3 13 265
SCIENCE 481 156 637 30 10 40 677
--------- --------- --------- --------- --------- --------- ---------
Total 12th Class 3194 1444 4638 455 229 684 5322
PRAVESHIKA 59 27 86 3 7 10 96
SECONDARY 7947 3600 11547 1296 641 1937 13484
V.UPDH 28 15 43 2 1 3 46
--------- --------- --------- --------- --------- --------- ---------
Region Total 11228 5086 16314 1756 878 2634 18948
--------- --------- --------- --------- --------- --------- ---------
North .... .. ... ... . . ....
South .... ... .... .. ..... ... ....
G.Totals AGRICULTURE 188 23 211 15 0 15 226
ARTS 2311 1227 3538 400 216 616 4154
COMMERCE 214 38 252 10 3 13 265
SCIENCE 481 156 637 30 10 40 677
--------- --------- --------- --------- --------- --------- ---------
Total 12th Class 3194 1444 4638 455 229 684 5322
PRAVESHIKA 59 27 86 3 7 10 96
SECONDARY 7947 3600 11547 1296 641 1937 13484
V.UPDH 28 15 43 2 1 3 46
--------- --------- --------- --------- --------- --------- ---------
Total 11228 5086 16314 1756 878 2634 18948
--------- --------- --------- --------- --------- --------- ---------
Kind Regards
Tom Kyte
March 15, 2008 - 9:41 am UTC

go for it.

come on, this is not "write my report site".

there is NO question here.
there is a request to do your job.


and a poorly phrased request at that - no detail, nothing (don't add it, I'm not going to write your report - that is your job)

and if you ask someone else to do it, please - for their sake - tell them what you actually want. what is this???


REGBOYS=Reg_Pvt='1' and Gender='1'
REGGIRLS=Reg_Pvt='1' and Gender='2'
TOTREG=Reg_Pvt='1'
PVTBOYS=Reg_Pvt='2' and Gender='1'
PVTGIRLS=Reg_Pvt='2' and Gender='2'
TOTPVT=Reg_Pvt='2'

STREAM:
AGRICULTURE=STREAM 1 IN TABLE SRM08
ARTS=STREAM 2 IN TABLE SRM08
COMMERCE=STREAM 3 IN TABLE SRM08
FINE ARTS=STREAM 4 IN TABLE SRM08
HOME SCI.=STREAM 5 IN TABLE SRM08
SCIENCE=STREAM 6 IN TABLE SRM08
PRAVESHIKA=ONLY 2 WILL BE IN ALL ROWS OF STREAM COLUMN OF PRAVM08 TABLE
SECONDARY=ONLY 1 WILL BE IN ALL ROWS OF STREAM COLUMN OF SECM08 TABLE
V.UPDH=ONLY 3 WILL BE IN ALL ROWS OF STREAM COLUMN OF UPDHM08 TABLE

Required Output:

Candidate Figures for Main Exam - 2008
-------- -- ------------- --------- --------- --------- --------- --------- 
--------- ---------
REGION   RC STREAM          REGBOYS  REGGIRLS    TOTREG   PVTBOYS  PVTGIRLS    
TOTPVT     TOTAL


that is meaningless stuff - maybe it means something to you because you have been looking at the data for years, but it means nothing to anyone else.


Please - look at what you've posted here - no one can even join your tables together, you described NOTHING, not a thing.


Grouping Issue

Girish, March 15, 2008 - 11:04 am UTC

Hello Sir,
I am sorry; I couldn't elaborate more, as I don't have much knowledge of English and little experience posting questions.
I am using Oracle 9i and Windows XP.
There are four tables as above and I require the output as mentioned. I mentioned the report/output heading names, i.e. REGBOYS means regular boys candidates. In the tables there are 2 fields, REG_PVT (having value 1/2) and Gender (having value 1/2), and I want the count of those records whose REG_PVT=1 and Gender=1 in REGBOYS, and so on for the others.
For it I wrote the following query:

break on region on rc on report on acp
SET LINESIZE 1000
column acp noprint
set numformat 99999999
compute sum of regboys on acp , regcode , report
compute sum of reggirls on acp , regcode , report
compute sum of totreg on acp , regcode , report
compute sum of pvtboys on acp , regcode , report
compute sum of pvtgirls on acp , regcode , report
compute sum of totpvt on acp , regcode , report
compute sum of Total on acp , regcode , report

Select *
from (
select rpad(b.regname,15) Region,a.* from
(select t.region as xx,t.stream
,sum(case when (t.Gender='1' and t.reg_pvt='1') then 1 else 0 end) Regboys
,sum(case when (t.Gender='2' and t.reg_pvt='1') then 1 else 0 end) RegGirls
,sum(case when (t.reg_pvt='1') then 1 else 0 end) TotReg
,sum(case when (t.Gender='1' and t.reg_pvt='2') then 1 else 0 end) Pvtboys
,sum(case when (t.Gender='2' and t.reg_pvt='2') then 1 else 0 end) PvtGirls
,sum(case when (t.reg_pvt='2') then 1 else 0 end) TotPvt
,sum(1) Total
,1 as acp
from srm08 t
group by region,stream) a,region b where a.xx=b.regcode
union
select rpad(b.regname,15),a.* from
(select t.region as xx,'SECONDARY' STREAM
,sum(case when (t.Gender='1' and t.reg_pvt='1') then 1 else 0 end) Regboys
,sum(case when (t.Gender='2' and t.reg_pvt='1') then 1 else 0 end) RegGirls
,sum(case when (t.reg_pvt='1') then 1 else 0 end) TotReg
,sum(case when (t.Gender='1' and t.reg_pvt='2') then 1 else 0 end) Pvtboys
,sum(case when (t.Gender='2' and t.reg_pvt='2') then 1 else 0 end) PvtGirls
,sum(case when (t.reg_pvt='2') then 1 else 0 end) TotPvt
,sum(1) Total
,2 as acp
from secm08 t
group by region,stream) a,region b where a.xx=b.regcode
union
select rpad(b.regname,15),a.* from
(select t.region as xx,'PRAVESHIKA' STREAM
,sum(case when (t.Gender='1' and t.reg_pvt='1') then 1 else 0 end) Regboys
,sum(case when (t.Gender='2' and t.reg_pvt='1') then 1 else 0 end) RegGirls
,sum(case when (t.reg_pvt='1') then 1 else 0 end) TotReg
,sum(case when (t.Gender='1' and t.reg_pvt='2') then 1 else 0 end) Pvtboys
,sum(case when (t.Gender='2' and t.reg_pvt='2') then 1 else 0 end) PvtGirls
,sum(case when (t.reg_pvt='2') then 1 else 0 end) TotPvt
,sum(1) Total
,2 as acp
from pravm08 t
group by region) a,region b where a.xx=b.regcode
union
select rpad(b.regname,15),a.* from
(select t.region as xx,'V.UPDH' STREAM
,sum(case when (t.Gender='1' and t.reg_pvt='1') then 1 else 0 end) Regboys
,sum(case when (t.Gender='2' and t.reg_pvt='1') then 1 else 0 end) RegGirls
,sum(case when (t.reg_pvt='1') then 1 else 0 end) TotReg
,sum(case when (t.Gender='1' and t.reg_pvt='2') then 1 else 0 end) Pvtboys
,sum(case when (t.Gender='2' and t.reg_pvt='2') then 1 else 0 end) PvtGirls
,sum(case when (t.reg_pvt='2') then 1 else 0 end) TotPvt
,sum(1) Total
,2 as acp
from updhm08 t
group by region) a,region b where a.xx=b.regcode)
order by region,acp,stream
/
It is giving following output:
REGION XX STREAM REGBOYS REGGIRLS TOTREG PVTBOYS PVTGIRLS TOTPVT TOTAL
--------------- -- ---------- --------- --------- --------- --------- --------- --------- ---------
NORTH 03 1 1 1 2 1 1 2 4
03 2 1 1 2 1 1 2 4
03 3 1 1 2 1 1 2 4
03 4 1 1 2 1 1 2 4
--------- --------- --------- --------- --------- --------- ---------
4 4 8 4 4 8 16
03 PRAVESHIKA 2 2 4 2 2 4 8
03 SECONDARY 3 2 5 3 3 6 11
03 V.UPDH 3 1 4 3 2 5 9
--------- --------- --------- --------- --------- --------- ---------
8 5 13 8 7 15 28
WEST 02 1 1 1 2 1 1 2 4
02 2 1 1 2 1 1 2 4
02 3 1 1 2 1 1 2 4
02 4 1 1 2 1 1 2 4
--------- --------- --------- --------- --------- --------- ---------
4 4 8 4 4 8 16
02 PRAVESHIKA 2 1 3 1 1 2 5
02 SECONDARY 2 1 3 1 1 2 5
02 V.UPDH 2 1 3 1 1 2 5
--------- --------- --------- --------- --------- --------- ---------
6 3 9 3 3 6 15
*************** --------- --------- --------- --------- --------- --------- ---------
sum 22 16 38 19 18 37 75

But there are a few differences from the required output.

1.Stream names should be like this:
AGRICULTURE=STREAM 1 IN TABLE SRM08
ARTS=STREAM 2 IN TABLE SRM08
COMMERCE=STREAM 3 IN TABLE SRM08
FINE ARTS=STREAM 4 IN TABLE SRM08
HOME SCI.=STREAM 5 IN TABLE SRM08
SCIENCE=STREAM 6 IN TABLE SRM08

2. Summing up the counts of records as mentioned in my earlier post, i.e. a subtotal of SRM08 by itself, and a region total at the end of each region, i.e.
((SRM08)+(SECM08+PRAVM08+UPDHM08))

Now I have tried my best to describe my requirement; please tell me where I am wrong or what I have missed.

Kind Regards
Girish
Tom Kyte
March 15, 2008 - 2:17 pm UTC

1.Stream names should be like this:
AGRICULTURE=STREAM 1 IN TABLE SRM08
ARTS=STREAM 2 IN TABLE SRM08
COMMERCE=STREAM 3 IN TABLE SRM08
FINE ARTS=STREAM 4 IN TABLE SRM08
HOME SCI.=STREAM 5 IN TABLE SRM08


that makes no sense. I have no idea what you mean by that.


but, you should read up on GROUP BY ROLLUP - it'll get you your aggregates at various levels (see the SQL Reference and/or the Data Warehousing Guide, both available on otn.oracle.com if you don't have a copy, for details about it)

or
http://asktom.oracle.com/pls/ask/search?p_string=%22group+by+rollup%22
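
The conditional-count ("pivot") pattern the queries in this thread rely on - SUM over CASE expressions, grouped by region and stream - can be sketched portably. Below is a minimal, hypothetical Python/sqlite3 version (the sample rows are made up, and SQLite has no GROUP BY ROLLUP, so only the base aggregation is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table srm08(roll_no text, region text, stream text, reg_pvt text, gender text)")
conn.executemany("insert into srm08 values (?,?,?,?,?)",
                 [('22', '03', '1', '1', '1'),   # regular boy
                  ('23', '03', '1', '1', '2'),   # regular girl
                  ('26', '03', '1', '2', '1')])  # private boy

# one row per (region, stream); each CASE expression counts one category
rows = conn.execute("""
    select region, stream,
           sum(case when reg_pvt = '1' and gender = '1' then 1 else 0 end) as regboys,
           sum(case when reg_pvt = '1' and gender = '2' then 1 else 0 end) as reggirls,
           sum(case when reg_pvt = '2' then 1 else 0 end)                  as totpvt,
           count(*)                                                        as total
      from srm08
     group by region, stream
""").fetchall()
print(rows)  # [('03', '1', 1, 1, 1, 3)]
```

In Oracle you would add ROLLUP to the GROUP BY to get the subtotals and region totals in the same pass.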


Grouping Issue

Girish, March 17, 2008 - 2:10 am UTC

Hello Sir,
Thanks for your kind support and guidance. I was looking at the link you provided, but being a newbie, I am not able to adapt the solutions in those threads to get my output. Please help me.

STREAM means faculty/area of studies. Arts stream, Commerce Stream etc. The Stream column in SRM08 table having the values as as mentioned above.


REGION XX STREAM A B C D E F TOTAL
------- -- ---------- ----- ----- ----- ----- ----- ----- -------
NORTH 03 1 <<AGRI.>> 1 1 2 1 1 2 4<<counts of SRM08
2 <<ARTS>> 1 1 2 1 1 2 4<<counts of SRM08
3 <<COMMERCE>> 1 1 2 1 1 2 4<<counts of SRM08
4 <<FINE.>> 1 1 2 1 1 2 4<<counts of SRM08
----- ----- ----- ----- ----- ----- -------
<<SUB TOTAL>> 4 4 8 4 4 8 16
PRAVESHIKA 2 2 4 2 2 4 8<<counts of PRAVM08
SECONDARY 3 2 5 3 3 6 11<<counts of SECM08
V.UPDH 3 1 4 3 2 5 9<<counts of UPDHM08
----- ----- ----- ----- ----- ----- -------
<<INSTEAD OF THIS>> 8 5 13 8 7 15 28 <<NOT REQUIRED
<<TOTAL OF REGION>> 12 9 21 12 11 23 44 <<REQUIRED

WEST
NORTH
SOUTH
<<AT THE END OF REGIONS; CUMMULATIVE TOTALS>>

G.TOTALS AGRICULTURE
ARTS
COMMERCE
...
----- ----- ----- ----- ----- ----- -------
TOTAL OF 12TH CLASS
PRAVESHIKA
SECONDARY
V.UPDH
----- ----- ----- ----- ----- ----- -------
TOTAL

<<My Requirements / Comments Please...!>> I have shortened the column names so that it is easier to read.

I apologize for keeping you busy with my requirement, but I am sure your support and solution will help me learn to write similar SQL, which I need from time to time.

Warm Regards
Girish

To : Girish

A reader, March 17, 2008 - 9:53 am UTC

Girish,

As Tom already suggested, you need to know how the ROLLUP clause works. Try the following SQL. You can play with GROUPING to get the required result.

with q_students as
(select stream stream_code
       , region
       ,case when (Gender='1' and reg_pvt='1') then 1 else 0 end Regboys
    ,case when (Gender='2' and reg_pvt='1') then 1 else 0 end RegGirls
    ,case when (reg_pvt='1') then 1 else 0 end TotReg
    ,case when (Gender='1' and reg_pvt='2') then 1 else 0 end Pvtboys
    ,case when (Gender='2' and reg_pvt='2') then 1 else 0 end PvtGirls
    ,case when (reg_pvt='2') then 1 else 0 end TotPvt
from
(select stream, region, reg_pvt, gender from srm08
union all
select '10002' stream, region, reg_pvt, gender from secm08
union all
select '10001' stream, region, reg_pvt, gender from pravm08
union all
select '10003' stream, region, reg_pvt, gender from updhm08
))
, q_streams as
(select '1'  stream_code, 1 stream_group, 'AGRICULTURE' stream from dual union all 
 select '2', 1 , 'ARTS' from dual union all
 select '3', 1 , 'COMMERCE' from dual union all
 select '4', 1, 'FINE ARTS' from dual union all
 select '5', 1, 'HOME SCI' from dual union all
 select '10001', 2, 'PRAVESHIKA' from dual union all
 select '10002', 2, 'SECONDARY' from dual union all
 select '10003', 2, 'UPDH' from dual)
select reg.regname
     , stu.region 
     , str.stream
  , sum(stu.regboys) regboys
  , sum(stu.reggirls) reggirls
  , sum(stu.totreg) totreg
  , sum(stu.pvtboys) pvtboys
  , sum(stu.pvtgirls) pvtgirls
  , sum(stu.totpvt) totpvt
  , sum(stu.totreg+totpvt) total
from q_students stu
   , q_streams  str
   , region reg
where stu.stream_code = str.stream_code
and   stu.region = reg.regcode
group by rollup(reg.regname, stu.region, stream_group, str.stream)
having (max(stream_group)+grouping(regname)+grouping(region)+grouping(stream)+grouping(stream_group) <> 3)
and    grouping(regname)+grouping(region) <> 1;


HTH

Grouping Issue

Girish, March 17, 2008 - 11:08 pm UTC

Thank you very much.
It would be more useful to me if you could also write the code for getting cumulative sums of all the streams at the end of all the regions.

Kind Regards

To : Girish

A reader, March 18, 2008 - 10:57 am UTC

Girish,

Thanks for *reading* the above solution. I wrote the SQL based on your original requirement. However, if the requirement has changed now, and if you are not able to tweak the query to change the order of the SQL output, I suggest you don't implement it, as it is unlikely that you will be able to maintain it.
Tom Kyte
March 24, 2008 - 9:11 am UTC

welcome to my world

SQL Tuning

Shubhasree, March 19, 2008 - 1:58 am UTC

Thanks a lot, Tom, for your earlier suggestion. I was able to understand the requirement clearly and refactor the query from scratch, which worked much better than trying to tune smaller snippets without understanding them.

For the following Original query I'm currently working on:

SELECT                                     /* BS.CY.findEntlPosForDivReport */
         pos.sch_db_id, pos.sfk_acc_db_id, sfkacc.acc_nbr AS sfkaccid,
         sfkacc.NAME AS sfkaccname,
         sfkacc.real_acc_ownership AS sfkaccownership,
         s.record_date AS recorddate, s.ex_date AS exdate,
         NVL
            ((SELECT SUM (NVL (sf.firm_qty, 0)) AS firmqty
                FROM sec_pos_key sk, sec_pos_firm sf
               WHERE sf.sec_pos_firm_db_id IN (
                        SELECT   MAX (spf.sec_pos_firm_db_id)KEEP (DENSE_RANK LAST ORDER BY spf.pos_date)
                            FROM sec_pos_firm spf
                           WHERE spf.pos_date < s.entl_date
                             AND spf.sec_pos_key_db_id = sk.sec_pos_key_db_id
                        GROUP BY spf.sec_pos_key_db_id,
                                 spf.registration_type,
                                 spf.central_bank_collateral_ind,
                                 spf.revocability_ind,
                                 spf.blocking_type,
                                 blocked_sec_expiry_date)
                 AND sk.sec_pos_key_db_id = sf.sec_pos_key_db_id
                 AND sk.sec_db_id = s.sec_db_id
                 AND sk.sfk_acc_db_id = pos.sfk_acc_db_id),
             0
            ) AS settledentlpos
    FROM sfk_acc sfkacc,
         sch s,
         (SELECT sch_db_id, sfk_acc_db_id
            FROM sch_entl_pos
          UNION
          SELECT sch_db_id, sfk_acc_db_id
            FROM sch_entl_inx) pos
   WHERE pos.sch_db_id = s.sch_db_id
     AND pos.sfk_acc_db_id = sfkacc.sfk_acc_db_id
     AND s.status = 'VALI'
     AND EXISTS (
            SELECT 1
              FROM sch_option op, sch_type_option sto
             WHERE op.sch_db_id = s.sch_db_id
               AND sto.sch_type_option_db_id = op.sch_type_option_db_id
               AND op.db_status = 'ACTI'
               AND (sto.income_price_ap = 1 OR sto.income_sec_ap = 1)
               AND op.calculation_basis IS NOT NULL
               AND op.type_sch_option = 'OPTMAND')
     AND s.branch_db_id = 2
ORDER BY pos.sch_db_id, pos.sfk_acc_db_id;



I have tried modifying using ANALYTIC functions as follows:

SELECT                                     /* BS.CY.findEntlPosForDivReport */
         pos.sch_db_id, pos.sfk_acc_db_id, sfkacc.acc_nbr AS sfkaccid,
         sfkacc.NAME AS sfkaccname,
         sfkacc.real_acc_ownership AS sfkaccownership,
         s.record_date AS recorddate, s.ex_date AS exdate,
         NVL
            ((SELECT SUM (sf.firm_qty)
                FROM sec_pos_key sk,
                     (SELECT spf.firm_qty, spf.sec_pos_key_db_id,
                             spf.sec_pos_firm_db_id, blocked_sec_expiry_date,
                             revocability_ind, blocking_type,
                             registration_type, central_bank_collateral_ind,
                             MAX (spf.sec_pos_firm_db_id)KEEP (DENSE_RANK LAST ORDER BY spf.pos_date) OVER (PARTITION BY spf.sec_pos_key_db_id, spf.registration_type, spf.central_bank_collateral_ind, spf.revocability_ind, spf.blocking_type, blocked_sec_expiry_date)
                                                                       AS MAX
                        FROM sec_pos_firm spf) sf
               WHERE sf.sec_pos_key_db_id = sk.sec_pos_key_db_id
                 AND sf.MAX = sf.sec_pos_firm_db_id
                 AND sk.sec_db_id = s.sec_db_id
                 AND sk.sfk_acc_db_id = pos.sfk_acc_db_id),
             0
            ) AS settledentlpos
    FROM sfk_acc sfkacc,
         sch s,
         (SELECT sch_db_id, sfk_acc_db_id
            FROM sch_entl_pos
          UNION
          SELECT sch_db_id, sfk_acc_db_id
            FROM sch_entl_inx) pos
   WHERE pos.sch_db_id = s.sch_db_id
     AND pos.sfk_acc_db_id = sfkacc.sfk_acc_db_id
     AND s.status = 'VALI'
     AND EXISTS (
            SELECT 1
              FROM sch_option op, sch_type_option sto
             WHERE op.sch_db_id = s.sch_db_id
               AND sto.sch_type_option_db_id = op.sch_type_option_db_id
               AND op.db_status = 'ACTI'
               AND (sto.income_price_ap = 1 OR sto.income_sec_ap = 1)
               AND op.calculation_basis IS NOT NULL
               AND op.type_sch_option = 'OPTMAND')
     AND s.branch_db_id = 2
ORDER BY pos.sch_db_id, pos.sfk_acc_db_id;



But this time the query is not performing better than the original even after using analytic functions.

Could you please explain the reason for this?
I suppose that since we are using the analytic function within a scalar subquery here, it performs worse than the original.

Could you also suggest some clue or way to tune the original?

Thanks in advance !

condescending

A reader, March 21, 2008 - 6:03 pm UTC

hi tom... i just realized... you're really smart.. but you're a little bit condescending at times...
Tom Kyte
March 24, 2008 - 11:04 am UTC

sometimes pointing out the obvious sounds that way, I agree.

I don't think I'm really smart actually, I do think that I can express to others what I need to do when I'm looking for help myself - that is perhaps the "feature" that sets apart people that appear smart from others - the ability to clearly express a problem to another person - so they can actually help.

And I'm willing to read, and explore the documentation. Most of what I know about SQL, I learned by reading and understanding - not by having someone else write my SQL for me.

And I clearly get frustrated when people post things like:

I have this data
<output of a useless select * from table>
I need this report:
<data, sans-explanation of what it means>
Please write my query - it is urgent


I think that is only fair.

Retrieve the alphanumeric values in sorted order.

Sateesh, March 25, 2008 - 9:09 am UTC

Hi Tom,

I am trying to build a query that retrieves data in sorted order from a VARCHAR-type column. 90% of the rows in the column contain only numbers, but the others have characters in them. I need to retrieve the numeric values in ascending order, and any character rows should come after the numeric values. If I use "order by to_number(<column>)", I get an invalid-number exception, since the result includes the character rows. If the result doesn't contain character data, the query runs perfectly. But I need to retrieve the character data as well. Kindly help me.

Thanks in advance.
Tom Kyte
March 26, 2008 - 8:11 am UTC

need more info, are the numbers "simple" in form... just integers, could be as easy as this:

ops$tkyte%ORA10GR2> select x, replace( translate(x,'0123456789','0'), '0', '' )
  2    from t
  3   order by x;

X          REPLACE(TR
---------- ----------
1030
1234
123a       a
4567
abc1       abc

ops$tkyte%ORA10GR2> select x
  2    from t
  3   order by to_number( case when replace( translate(x,'0123456789','0'), '0', '' ) is null then x end ), x
  4  /

X
----------
1030
1234
4567
123a
abc1
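
The ordering trick above - numeric strings first, by value, then everything else - can be mimicked outside the database too. A small, hypothetical Python sketch of the same sort key:

```python
def mixed_key(s):
    # purely numeric strings sort first, by numeric value;
    # anything containing a non-digit sorts afterwards, alphabetically
    return (0, int(s), "") if s.isdigit() else (1, 0, s)

data = ["1030", "1234", "123a", "4567", "abc1"]
print(sorted(data, key=mixed_key))  # ['1030', '1234', '4567', '123a', 'abc1']
```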

so if i have this...

A reader, March 25, 2008 - 4:16 pm UTC

So if I have this data....

buid_num buidtyp_id buid_identifier status_num
54979 DEA AA4567802 ACTIVE
54980 DEA AA4567890 SUPERCEDED
54981 DEA AA0552439 SUPERCEDED
54982 DEA CC5642219 ACTIVE
54983 DEA AA4567890 DELETED
54984 DEA AA0552439 DELETED


and wanted to get the most recent updated and last active data

MOST_RECENT_UPDATED LAST_ACTIVE
AA0552439 CC5642219




Another example,

buid_num buidtyp_id buid_identifier status_num
54979 DEA AA4567802 ACTIVE
54980 DEA AA4567890 ACTIVE
54981 DEA AA0552439 ACTIVE
54982 DEA CC5642219 ACTIVE



the result is..

MOST_RECENT_UPDATED LAST_ACTIVE
CC5642219 AA0552439


How would I write this query?
Tom Kyte
March 26, 2008 - 8:46 am UTC

you would write the query using sql

if you want me to show you what the query might look like... you'd have to give me a create table and insert into.

and you'd need to describe in detail how to find the "last" - I see no dates or anything to sort by.

and you'd have to describe in detail what "updated" means - looks like "superceded", but it is definitely not clear.

i forgot to add something...

A reader, March 25, 2008 - 4:20 pm UTC

i forgot to add something...

"Please write my query - it is urgent.Thanks! "

Tom Kyte
March 26, 2008 - 8:47 am UTC

laughing out loud


funny thing is (this is for everyone else), the person that asked the question is actually the same person that posted this..... It isn't "a joke"

Retrieve the alphanumeric values in sorted order.

Sateesh, March 27, 2008 - 8:28 am UTC

Thanks a lot, Tom, for your prompt reply. The query helped me a lot. For your information, the numbers are "simple" in form... just integers.

Please Please Help

G.RAMKRISHNA, April 04, 2008 - 9:09 am UTC

I am using a simple query, "SELECT EMPCODE, EMPNAME FROM PPEMPMAST ORDER BY CAST(EMPCODE AS NUMBER)", and it is showing me the error "INVALID NUMBER". The EMPCODE data type is varchar(10).
Tom Kyte
April 04, 2008 - 10:38 am UTC

you know what happens when you store numbers in strings?

you end up with strings that do not contain numbers.


ops$tkyte%ORA10GR2> create table t ( x varchar2(10) );

Table created.

ops$tkyte%ORA10GR2> insert into t select rownum from all_users where rownum <= 5;

5 rows created.

ops$tkyte%ORA10GR2> select * from t order by cast( x as number );

X
----------
1
2
3
4
5

ops$tkyte%ORA10GR2> insert into t values ( 'x');

1 row created.

ops$tkyte%ORA10GR2> select * from t order by cast( x as number );
select * from t order by cast( x as number )
       *
ERROR at line 1:
ORA-01722: invalid number


you can find the bad ones:

ops$tkyte%ORA10GR2> create or replace function is_number( p_str in varchar2 ) return number
  2  is
  3          invalid_num exception;
  4          pragma exception_init( invalid_num, -6502 );
  5          l_num number;
  6  begin
  7          l_num := to_number( p_str );
  8          return 1;
  9  exception
 10          when invalid_num then return 0;
 11  end;
 12  /

Function created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from t where is_number(x) = 0;

X
----------
x
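
The same "try the conversion, trap the error" idea can be registered as a user-defined function from a client. A hypothetical Python/sqlite3 sketch (table and data made up for illustration):

```python
import sqlite3

def is_number(s):
    # mirrors the PL/SQL function above: 1 if the string converts, else 0
    try:
        float(s)
        return 1
    except (TypeError, ValueError):
        return 0

conn = sqlite3.connect(":memory:")
conn.create_function("is_number", 1, is_number)
conn.execute("create table t(x text)")
conn.executemany("insert into t values (?)", [("1",), ("2",), ("x",)])

# find the rows that would blow up an ORDER BY CAST(... AS NUMBER)
bad = conn.execute("select x from t where is_number(x) = 0").fetchall()
print(bad)  # [('x',)]
```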


Very good solution

Anish, April 04, 2008 - 1:28 pm UTC

This is the issue I am also having at my client site, where folks store numbers in a varchar2 column and expect a query such as the above to work!

Thanks tom
Tom Kyte
April 04, 2008 - 10:11 pm UTC

they likely are disappointed when the easter bunny doesn't actually leave them a basket of candy too. (that is, delusion has set in firmly)

when you store anything in a varchar2 - the only thing you can say is "I have some pretty strings, would you like to look at them"

the strings are not numbers, they are not dates, they are just pretty strings.

query

A reader, April 10, 2008 - 4:13 pm UTC

tom:

which is better way to do.

if i have a table

A
-----
msg_no
server_id
dir
created_date

Now, I want the directory that came in with the last message on a specific server.

I can do

SELECT MAX(msg_no) into v_msg_no from A where server_id = 'XXX';
IF (v_msg_no is not null) THEN
SELECT dir into v_dir from A where msg_no = v_msg_no;
END IF;


OR

SELECt count(*) into l_cnt from A where server_id = 'XXX';

If (l_Cnt <> 0) THEN

SELECT dir into v_dir from A where create_date =
   (SELECT max(create_date) from A where server_id='XXX');
End if ;


thanks,
Tom Kyte
April 10, 2008 - 8:29 pm UTC

or

select dir into v_dir
  from (select dir from a where server_id = 'xxx' order by create_date desc)
 where rownum = 1;


you need NO procedural code here, none. You might have an exception block for "when no_data_found" to set a default value for dir, but that would be it.
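
The single-statement "top-1" pattern (Oracle's inline view plus rownum = 1) translates to an ORDER BY ... LIMIT 1 in most other engines. A hypothetical Python/sqlite3 sketch of the same shape, with the NO_DATA_FOUND handling done in client code (schema and rows made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table a(msg_no integer, server_id text, dir text, created_date text)")
conn.executemany("insert into a values (?,?,?,?)",
                 [(1, 'XXX', '/old',   '2008-04-01'),
                  (2, 'XXX', '/new',   '2008-04-02'),
                  (3, 'YYY', '/other', '2008-04-03')])

# one query: newest row for the server, no count-then-select round trip
row = conn.execute("""
    select dir from a
     where server_id = 'XXX'
     order by created_date desc
     limit 1
""").fetchone()
v_dir = row[0] if row else None   # the "when no_data_found" default
print(v_dir)  # /new
```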

sql

A reader, April 11, 2008 - 4:05 pm UTC

Tom:

The reason I added procedural code is that this is a subquery. In case no records were found, I did not want to hit an exception; I still want to report the main query results. So I have:

query A,
then the above query: if data is found, report it; if not, report data from query A.

Would you still do it your way, with one SQL statement? I thought querying on the primary key is faster than querying on "Created date".

thank you
Tom Kyte
April 11, 2008 - 4:22 pm UTC

why don't you want to hit an exception?

do you understand that if you:

a) count records
b) if cnt > 0 then do something

that by the time you do something, records MAY NO LONGER EXIST

you do want an exception block
you do do do do SO much want an exception block.


You asked, should I do

SELECT MAX(msg_no) into v_msg_no from A where server_id = 'XXX';
IF (v_msg_no is not null) THEN
SELECT dir into v_dir from A where msg_no = v_msg_no;
END IF;

or

SELECt count(*) into l_cnt from A where server_id = 'XXX';

If (l_Cnt <> 0) THEN

SELECT dir into v_dir from A where create_date =
   (SELECT max(create_date) from A where server_id='XXX');
End if ;


I said NEITHER do this

select dir into v_dir
from (select dir from a where server_id = 'xxx' order by create_date desc)
where rownum = 1;


and if you are expecting NO DATA then add an exception block to deal with it

begin
select dir into v_dir
from (select dir from a where server_id = 'xxx' order by create_date desc)
where rownum = 1;
exception
when no_data_found then v_dir := null; -- or whatever
end;

MY CODE DOES THE SAME AS YOUR CODE - only more efficiently.



this:
I thought querying on
the primary key is faster than "Created date".

I do not understand at all.

sql

A reader, April 11, 2008 - 7:48 pm UTC

Tom:

I do have an exception for the main program. What i basically have is

for x in (some query)
Loop

then I have this subquery that looks up the value for dir based on a book number (resulting in the above query)

end loop;

I do not want to hit a "no data found" exception if there was no directory defined for that book; I will still report the book number. I only want to hit "no data found" when the main query does not hit any records, in which case I raise a user-defined exception telling the client "no records were found".

Do you see my point?

2. What I meant is that it would be much faster to query the table on the primary key:

select max(pk_col) from table where server_id='xxx'
select * from table where pk = v_pk

instead of querying on the highest create_date column:

select * from table where create_date = (select max(create_date) from table where ... )


is that correct?


Tom Kyte
April 13, 2008 - 7:57 am UTC

... I do not want to hit an expcetion "no data found" if there was no directory
defined for that book. ...

YES YOU DO. IT WOULD BE EXACTLY THE SAME LOGIC AS YOU EMPLOY NOW WITH A "LET'S RUN A QUERY TO SEE IF WE SHOULD RUN A QUERY" - only more efficient, more correct, better.


You have logic now:

for x in ( select .. )
loop
   select
   if (something)
   then
      select 
   end if
end loop



forget for a moment that it probably should just be:

for x in (select THAT OUTER JOINS TO GET NEEDED INFORMATION THAT MIGHT EXIST)
loop
   ... no exra queries here....
end loop

it at the very least should be:

for x in (select )
loop
  begin
      select into 
  exception
      when no_data_found then v_dir := default
  end
end loop




do you see *MY POINT* stated over and over again - that the logic is similar - only mine is more efficient and in fact more correct (for the fact that since you run more than one query - by the time you run the second query, you are risking having the answer from the first query change on you)

I DESPISE CODE of the form:
select count(*) ....
if (cnt was > 0)
then
   do something


good gosh, just DO SOMETHING - and if no data is found, well - deal with it right there and then.


2) sigh, I'm giving up, please explain how:

select max(pk_col) from table where server_id='xxx'
select * from table where pk = v_pk

two queries TWO QUERIES - one that looks at ALL POSSIBLE primary keys and then one that queries by that primary key could be more efficient than

A SINGLE QUERY THAT JUST GETS IMMEDIATELY THE PRIMARY KEY OF INTEREST??????????

Please - lay out your thinking here - cause I don't see how anyone could "think that"

Max on whatever you want - you have two examples above (one on a date and one on a primary key), but come on - don't you see the logic being presented.

You were in general:


select max(X) max_x from t where y = 'abc';
select * from t where x = max_x;

great, that is better as:

select *
from (select * from t where y = 'abc' ORDER BY X DESC)
where rownum = 1;


do it all in a single statement. you are already querying by Y='ABC', to find your "max key" to requery that table by that max key - swell, just do it in a single step and forget about doing the second query - what could be more efficient than a primary key lookup?

[this space intentionally left blank]

the absence of any work at all definitely is more efficient.

query

A reader, April 15, 2008 - 6:21 pm UTC

Tom:

Excellent advice. I wil go with the outer join choice.

So, do you know why this query is not returning the record from table BOOK when there is no record in table BOOK_SERVER? I did add the (or null) for the filters.

Basically, I want to get the record from table BOOK, and then look at the last record received in table BOOK_SERVER
for that server "SSS" and status code "IN".


SELECT a.*,b.directory
FROM book a, book_server b
WHERE a.bkno = b.bkno(+) and
a.program_id = 'PPP' and
a.flag_yn = 'Y' and
(b.server_id = 'SSS' or b.server_id is null) and
(b.status_code = 'IN' or b.status_code is null) and
(b.created_date = (SELECT max(created_date) FROM book_server
WHERE bkno = a.bkno)
or b.created_date is null)
Tom Kyte
April 16, 2008 - 3:12 pm UTC

you shall really have to explain the logic here.

You see, I know how an outer join works - and hence, this is really just confusing me - for I don't know what you would expect from this "where clause"

I think - MAYBE -
ops$tkyte%ORA10GR2> select * from book;

      BKNO PRO F
---------- --- -
         1 PPP Y
         2 PPP Y

ops$tkyte%ORA10GR2> select * from book_server;

      BKNO SER ST CREATED_D DIREC
---------- --- -- --------- -----
         1 SSS IN 16-APR-08 aaa
         1 SSS IN 15-APR-08 bbb
         1 xSS IN 17-APR-08 ccc

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select *
  2    from book a,
  3        (select bkno, max(directory) keep (dense_rank first order by created_date DESC) directory
  4               from book_server
  5                  where server_id = 'SSS'
  6                    and status_code = 'IN'
  7                  group by bkno) b
  8   where a.bkno = b.bkno(+)
  9     and a.program_id = 'PPP'
 10     and a.flag_yn = 'Y'
 11  /

      BKNO PRO F       BKNO DIREC
---------- --- - ---------- -----
         1 PPP Y          1 aaa
         2 PPP Y



but I'd only be using keep/dense_rank IF i was going after all the records, which is quite different from everything else you had above where you go after ONE record.

quite confusing when you change the question like that.
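Tom's KEEP (DENSE_RANK FIRST ...) aggregate is Oracle-specific; as an illustrative sketch (not Oracle - the table shape is copied from the example above), the same "newest qualifying directory per bkno" answer can be had with ROW_NUMBER() in Python/SQLite (requires SQLite >= 3.25 for window functions):

```python
import sqlite3

# book_server data mirrors Tom's example above.
conn = sqlite3.connect(":memory:")
conn.execute("""create table book_server
                (bkno int, server_id text, status_code text,
                 created_date text, directory text)""")
conn.executemany("insert into book_server values (?,?,?,?,?)", [
    (1, 'SSS', 'IN', '2008-04-16', 'aaa'),
    (1, 'SSS', 'IN', '2008-04-15', 'bbb'),
    (1, 'xSS', 'IN', '2008-04-17', 'ccc'),
])

# rn = 1 marks the newest qualifying row per bkno - the KEEP/FIRST analog.
rows = conn.execute("""
    select bkno, directory
      from (select bkno, directory,
                   row_number() over (partition by bkno
                                      order by created_date desc) rn
              from book_server
             where server_id = 'SSS' and status_code = 'IN')
     where rn = 1""").fetchall()
```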

query

A reader, April 16, 2008 - 12:34 pm UTC

Tom:

An additional small question to the above, is there a way to query BOOK_SERVER table for the last
"IN" message for that book/server, with the condition there was no "OUT" message for that book from that server.

So if there was a later message saying that book was removed, the last "IN" message would not count and I will not get a record.

If there was no message "OUT", then I should get the last "IN" message.
Tom Kyte
April 16, 2008 - 4:24 pm UTC

do you mean there was "NEVER" an out or what - it is not very clear.

...
for the last
"IN" message for that book/server, with the condition there was no "OUT"
message for that book from that server.
....

seems to make it sound like "if there ever was an out"


..
if there was a later message saying that book was removed, the last "IN"
message would not count and I will not get a record.
...

seems to make me think you want me to conclude that "status = OUT means the same as 'removed'" and that you really only care if a in was followed by an out.




if #2 is true, then you just want "where status in ('IN','OUT')" - get the last in/out record - if it is OUT - discount it, disregard it. if it is in - obey it, use it, like it.

query

A reader, April 16, 2008 - 3:35 pm UTC

Tom:

I am doing an outer join on the two tables so I get all records from table "BOOK" regardless of whether I have a matching record in table "BOOK_SERVER" or not.

Then, if records exist in table "BOOK_SERVER", I want the last one saved with "IN" status on server "SSS".

So what I did is add the "or column is null" to the filter for the "BOOK_SERVER" table.


SELECT a.*,b.directory
FROM book a, book_server b
WHERE a.bkno = b.bkno(+) and
a.program_id = 'PPP' and
a.flag_yn = 'Y' and
(b.server_id = 'SSS' or b.server_id is null) and
(b.status_code = 'IN' or b.status_code is null) and
(b.created_date = (SELECT max(created_date) FROM book_server
WHERE bkno = a.bkno and server_id='SSS' and status_code='IN')
or b.created_date is null)

What do you see wrong?


2. Yes, your example is correct. This is what I want. But I am only going after one record, not all records.

3. How would I be able to add another filter to the above query if I want to do the following?

Check if there is another record in "BOOK_SERVER" that tells you the book is in "OUT" status from server "SSS" after the last "IN" record.

If there is, then report no record found, as we are looking for the latest "IN" record and there is another record that tells us the book was removed from the server afterwards.

If there is not, then get the last "IN" record for that book/server.

Can this be done in a query?

thank you,

query

A reader, April 16, 2008 - 5:32 pm UTC

Yes, that is correct. But how do you implement this in the outer join query above?

If the last record had a status of "IN" then print the directory.
If the last record had a status of "OUT" then do not print the directory.
If there are no records at all, just print the record from BOOK table.


<
if #2 is true, then you just want "where status in ('IN','OUT')" - get the last in/out record - if it is OUT - discount it, disregard it. if it is in - obey it, use it, like it.
>
Tom Kyte
April 16, 2008 - 9:37 pm UTC

yes, WHAT is correct?!?!?

please - stop moving cheese around - be straightforward - say what you need and say it all in one place. I cannot keep paging up and down - besides the fact you keep moving the cheese here (changing the QUESTION)


give me a book table (as small as possible)
give me a book server or whatever table (as small...)
give me inserts
make it easy for me to write YOUR query.

relationship of hint /*+ index_asc( table index) */ and order by?

jun hyek kwon, April 16, 2008 - 8:05 pm UTC

I have the following SQL statement with the hint /*+ index_asc( table index) */.

SQL>select /*+ index_asc( test test_idx) */ *
from test
where col1 < 10;

col1 is an indexed column and col1's values are 1,2,3,4,...,1000000.

In SQL*Plus, when I run this statement, the result is
sorted as 1,2,3,4,5,6,7,8,9.

When I want the sort order 1,2,3,4,5,6,7,8,9, is this statement correct without an order by clause?
Tom Kyte
April 16, 2008 - 9:47 pm UTC

unless and until you have an order by

YOU CANNOT EXPECT THE DATA TO BE ORDERED

no matter what

no kidding

no matter what


really - no kidding


select /*+ first_rows */ *
from test
where col1 < 10
ORDER BY COL1;

that is what you need, want, desire.

if and when oracle can skip a sort and use an index - AND IT IS DEEMED EFFICIENT TO DO SO - it will, all by itself.

You - you are never allowed to skip the order by

Never, not ever, no kidding. Really


no order by = no ordered data, if and when we feel like it.


hint, don't care
x, y or z - don't care


the only thing - THE ONLY THING - that returns sorted data is...........


order by

period.

query

A reader, April 17, 2008 - 12:30 pm UTC

Tom:

sorry, my fault. Here are the tables and data.

What the query should give me is a list of books in the BOOK table with prog_id='AB' and rd_flag='Y',
plus the dir from the "book_server" table.

For book 1, the dir should come from the last record received, which is record 102.
For book 2, no record should be retrieved, since there is an "OUT" as the last record for that book.
For book 3, the dir is null since there is no record at all in book_server.


create table book(
bkno number(10),
prog_id varchar2(10),
rd_flag varchar2(1),
constraint PK_book primary key (bkno)
)

create table book_server(
msg_no number(10),
bkno number(10),
server_id varchar2(10),
dir varchar2(20),
status varchar2(3),
create_date date,
constraint PK_book_server primary key (msg_no)
)


insert into book values (1,'AB','Y')
/
insert into book values (2,'AB','Y')
/
insert into book values (3,'AB','Y')
/
commit
/


insert into book_server values (100,1,'SSS','/XYZ/1/A','IN',sysdate)
/
insert into book_server values (101,1,'SSS','/XYZ/1/B','IN',sysdate)
/
insert into book_server values (102,1,'SSS','/XYZ/1/C','IN',sysdate)
/
insert into book_server values (103,2,'SSS','/XYZ/1/D','IN',sysdate)
/
insert into book_server values (104,2,'SSS','/XYZ/1/D','OUT',sysdate)
/


Tom Kyte
April 17, 2008 - 4:22 pm UTC

fascinating, you've once again MOVED THE CHEESE

... WHat the query should give me is a list of books in Book table with
prog_id='AB' and rd_flag='Y'
and the dir from the "book_server" table.
...

... But I am only going after one record, not all records.
...

please - which is it - do you need ALL RECORDS RETURNED or "I'll input something and get a single book"

also, seems that MSG_NO is what you use to figure out "what comes first", since - well - all of your created dates are THE SAME.


ops$tkyte%ORA9IR2> select b.*, bs.dir
  2    from (select * from book where prog_id = 'AB' and rd_flag = 'Y') b,
  3         (select bkno, max(decode(status,'IN',dir,'OUT',null)) keep(dense_rank first order by msg_no DESC)  dir
  4            from book_server
  5           where status in ( 'IN', 'OUT')
  6           group by bkno) bs
  7   where b.bkno = bs.bkno(+)
  8   order by b.bkno
  9  /

      BKNO PROG_ID    R DIR
---------- ---------- - --------------------
         1 AB         Y /XYZ/1/C
         2 AB         Y
         3 AB         Y


if you just need "a book", please add "and bkno = :x" to both inline views.
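For readers outside Oracle, the decode/keep-dense_rank trick above can be approximated with ROW_NUMBER() and CASE. This Python/SQLite sketch reuses the thread's book/book_server data (created dates left blank, so msg_no does the ordering, exactly as Tom notes):

```python
import sqlite3

# Book/book_server data from the thread; msg_no orders the messages.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table book (bkno int primary key, prog_id text, rd_flag text);
    create table book_server (msg_no int primary key, bkno int,
        server_id text, dir text, status text, create_date text);
    insert into book values (1,'AB','Y'),(2,'AB','Y'),(3,'AB','Y');
    insert into book_server values
        (100,1,'SSS','/XYZ/1/A','IN',''),(101,1,'SSS','/XYZ/1/B','IN',''),
        (102,1,'SSS','/XYZ/1/C','IN',''),(103,2,'SSS','/XYZ/1/D','IN',''),
        (104,2,'SSS','/XYZ/1/D','OUT','');
""")

# Per book: newest IN/OUT row; expose dir only when that row is 'IN'.
# The left join keeps books that have no messages at all.
rows = conn.execute("""
    select b.bkno, bs.dir
      from book b
      left join (select bkno,
                        case status when 'IN' then dir end as dir,
                        row_number() over (partition by bkno
                                           order by msg_no desc) rn
                   from book_server
                  where status in ('IN','OUT')) bs
        on b.bkno = bs.bkno and bs.rn = 1
     where b.prog_id = 'AB' and b.rd_flag = 'Y'
     order by b.bkno""").fetchall()
```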

query

A reader, April 17, 2008 - 6:05 pm UTC

Tom:

You are a true genius. That is exactly what I want.

I did take the query and stuck it in PL/SQL and it did work.

For x in (query)
Loop
...
end loop;

I did not know about the dense_rank function. I will read about it.

You brought up a good point about "created_date". Are you saying that sometimes timestamps can be the same for records inserted one after the other? Should I always use the PK instead (i.e. msg_no here) in this situation?

2.
Another question on writing efficient code. Do you see this as efficient or you would do it different.
Whenever I get a record in the book_server table, I derive the values and either add or delete the record in the book_status table. If the record already exists, I delete it and then insert a new one. I save the message number that created this record as a reference to the "book_server" table.



IF (upper(server_status) = 'Y') THEN

SELECT count(*) INTO l_cnt FROM Book_Status
WHERE bkno = p_bkno and server_id = p_server_id;

IF (l_cnt = 1) THEN

DELETE FROM Book_Status
WHERE bkno = p_bkno and server_id = p_server_id;
END IF;

INSERT INTO Book_Status (Bkno,Server_id,Msg_no,Created_date)
VALUES (p_bkno,p_server_id,v_msg_no,sysdate);


END IF;


3. Since I use the same logic as in #2 above in several other procedures to update the book status table, shall I just create an API for the above and call it like

save_book_status(p_1,p_2,...)
Tom Kyte
April 17, 2008 - 10:03 pm UTC

... Are you saying that sometimes
timestamps can be same for records inserted after each other? ...

of course - isn't that obvious? (run your own code!!! it should take less than a second for all of the inserts to happen, they probably all have the same date - maybe two dates)

I cannot answer your question "should I always use PK" - you do know that sequences are just unique numbers - a larger number does not imply "it was inserted later". In fact, I could show you an insert + commit followed by an insert + commit where the second insert + commit creates a row with a SMALLER sequence number than the first (think clustered environment)

Look - you know your data - you should know what therefore to order by.



2)

  IF (upper(server_status) = 'Y') THEN

    SELECT count(*) INTO l_cnt FROM Book_Status
        WHERE bkno = p_bkno and server_id = p_server_id;

    IF (l_cnt = 1) THEN

           DELETE FROM Book_Status
             WHERE bkno = p_bkno and server_id = p_server_id;


gosh, do I despise code that looks like that.

Are you writing a single user database application? If not, please - think about this - think about what happens in a multi-user situation here....


and your logic is entirely confusing. If the count is greater than one - you do what?? I don't see anything.

query

A reader, April 17, 2008 - 10:27 pm UTC

1. How can a new record have a sequence number less than a previous one? I never knew that could happen.

If this is true, then your previous query may not yield correct results, because you
"order by msg_no desc" - you are selecting the record with the highest msg_no, which is a sequence-generated PK.

2. Can you explain why you despise the code, and show me a better, cleaner way of writing it for a multi-user application?

The count will be either 1 or 0, because the PK on book_status is (book_no,server_id).

Are you implying that after I do the count, someone may have inserted a record and I will get an error?

Logic is simple. Check if the table already has a status record.
if yes, delete old record and insert a new one.
if no, insert a new one.

Tom Kyte
April 17, 2008 - 10:46 pm UTC

1) because they told me what they wanted the answer to be. because they did not use a sequence. because it was the only way to get the answer.

I told you when sequences would do that - in a clustered environment - each instance will get its own set of numbers to dole out - unless you use the ORDER option on the sequence - in which case performance will become so poor you'll just want to walk away...


2) do you see the problem in the logic? If not, re-read until you do. And then, after you do - you'll see the problem in the logic (in between the time you count records and the time you delete the singleton, the ANSWER to the first query could change - or there could be other transactions that you cannot see yet that will commit later).

You would need to be really utterly specific and detailed and clear as to what the logic is.

...
The count will be wither 1 or 0 because the PK on book_status is
(book_no,server_id).
...

how was I supposed to know that?

if true, why not just

a) delete it (if there, it'll succeed, if not there, it'll succeed - it is OK to delete "nothing")
b) then insert it

or

a) try to update
b) if sql%rowcount = 0 then insert


or

a) try to insert
b) if primary key violated - then update


why waste cycles looking for a record - that you MIGHT NOT BE ABLE TO SEE YET - because someone else has inserted it but not committed it yet......
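The "update first, insert if nothing was updated" variant can be sketched like this in Python/SQLite (the book_status shape and its msg_no column are assumptions based on the thread, not the poster's actual schema):

```python
import sqlite3

# Assumed book_status shape: PK (bkno, server_id), plus the msg_no reference.
conn = sqlite3.connect(":memory:")
conn.execute("""create table book_status
                (bkno int, server_id text, msg_no int,
                 primary key (bkno, server_id))""")

def save_book_status(bkno, server_id, msg_no):
    # Try the update first; rowcount tells us whether the row existed.
    cur = conn.execute(
        "update book_status set msg_no = ? where bkno = ? and server_id = ?",
        (msg_no, bkno, server_id))
    if cur.rowcount == 0:          # nothing updated: the row did not exist
        conn.execute("insert into book_status values (?, ?, ?)",
                     (bkno, server_id, msg_no))

save_book_status(1, 'SSS', 100)    # row absent -> insert path
save_book_status(1, 'SSS', 200)    # row present -> update path
rows = conn.execute("select * from book_status").fetchall()
```

Note there is no count-then-act window here: the update either finds the row or it doesn't, in one statement.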

...
Logic is simple. Check if the table already has a status record.
if yes, delete old record and insert a new one.
if no, insert a new one.
...

In order to understand that - one needed lots of other bits of information that you did not provide. And you didn't write it that way last time.

You wrote:

... I derive the values and either add or delete the
record in book_status table. ...

well, umm, no - you don't.

You wrote:

I either

a) add
b) delete

do you see the breakdown in communication here.


query

A reader, April 17, 2008 - 11:08 pm UTC

Tom:

1. I am not sure what you mean by clustered environment. If you mean that I have the sequence in several user schemas, then no. I only have it in one user schema and it should be sequential (I hope).

But based on what you say - that a higher sequence number may not mean a more "recent" record, and create_date can be the same for several records - how can I identify the last record received for a book?

The table is simple. I can either use create_Date or msg_no.

2. I like your option of
a) delete it
b) insert it

I think you are saying that my count can be 0, then user B inserts a record, and I will do an insert and get an error.

You are right. It can happen, even though the chance is pretty slim.

But at the same time, I can delete it, then user B inserts a record, and then I try to insert again.

Isn't it the same issue?

3. Can you refer me to a book or manuals where you discuss these neat design and coding techniques and how coding differs for a multi-user environment? Is this in your book?

thank you,


Tom Kyte
April 18, 2008 - 8:14 am UTC

1) real application clusters - RAC - whereby more than one instance of Oracle mounts and opens a single database.

only you sir can answer the question:

...
But based on what you say - that a higher sequence number may not mean a more
"recent" record, and create_date can be the same for several records - how can I
identify the last record received for a book?
.....

only you - it is after all YOUR DATA - you should be able to answer this.

2) I like my option of

update it
if no rows updated then insert it

if the row is likely to exist.

I like my option of

insert it
update it if insert failed

if the row is likely to not exist


I don't like my option of delete+insert. That is by far the most expensive of the three (but less expensive than your current approach)

...
But at the same time, i can delete it, then user B insert a record, and then i
try to insert again.

is not it same issue?
..


at that point you have what is known as a LOST UPDATE - and it would be up to you to determine if your design needs to accommodate for it. That is what you do: you program transactions that are multi-user safe and get to the right answer - you have to *think* about what you need to have happen in that case; that is what we do as database developers. Wait, let me correct that - that is what we SHOULD do (but many, sigh, do not).

3) yes, I've written about them extensively.

query

A reader, April 19, 2008 - 11:26 pm UTC

Tom:

Is this how you implement this without stopping execution of the main program?

<insert it
update it if insert failed
if the row is likely to not exist >


p1

begin

---code
begin
insert it
exception
update it here
end;
--main program code
exception

end;



Tom Kyte
April 23, 2008 - 4:16 pm UTC

yes
begin
   insert into t values ...
exception
   when dup_val_on_index then update
end


or

   update ...
   if (sql%rowcount = 0)
   then
      insert...
   end if;
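The insert-first variant can likewise be sketched in Python/SQLite, where sqlite3.IntegrityError plays the role of Oracle's DUP_VAL_ON_INDEX exception (table t here is just an illustration):

```python
import sqlite3

# Illustrative table with a primary key to violate on duplicate insert.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (k integer primary key, v text)")

def upsert(k, v):
    # Try the insert; fall back to update when the key already exists.
    try:
        conn.execute("insert into t values (?, ?)", (k, v))
    except sqlite3.IntegrityError:
        conn.execute("update t set v = ? where k = ?", (v, k))

upsert(1, 'first')             # key absent -> insert succeeds
upsert(1, 'second')            # duplicate key -> update path
rows = conn.execute("select * from t").fetchall()
```

Which variant to pick follows Tom's rule of thumb: insert-first when the row is likely new, update-first when it probably exists.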


query

A reader, April 21, 2008 - 11:10 pm UTC

Tom:

If I want to reset a flag in a table based on the condition that a book
exists on 3 servers - "AAA", "BBB", and "CCC" -

how do you formulate the query for this check?

For example, after receiving msg "100" below, the query should not return anything.
Only after receiving the 5th record is the condition met.



create table book_serv(
msg_no number(10),
bkno number(10),
serv_id varchar2(10),
dir varchar2(20),
status varchar2(3),
create_date date,
constraint PK_book_serv primary key (msg_no)
)

insert into book_serv values (100,1,'AAA','/XYZ/1/A','IN',sysdate)
/
insert into book_serv values (101,1,'AAA','/XYZ/1/A','OUT',sysdate)
/
insert into book_serv values (102,1,'BBB','/XYZ/1/B','IN',sysdate)
/
insert into book_serv values (103,1,'CCC','/XYZ/1/C','IN',sysdate)
/
insert into book_serv values (104,1,'AAA','/XYZ/1/A','IN',sysdate)
/
Tom Kyte
April 23, 2008 - 5:41 pm UTC

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select bkno, count(distinct serv_id) from book_serv where serv_id in ('AAA', 'BBB', 'CCC' )
  2  group by bkno;

      BKNO COUNT(DISTINCTSERV_ID)
---------- ----------------------
         1                      3

assess sql run time

junhua, April 23, 2008 - 1:45 am UTC

Hi, Tom,
When tuning SQL, I don't want to execute the SQL to find out how much time it will take, because the SQL will run for several hours. Is there a method we can use to assess the time without executing the SQL?
Thanks.
Tom Kyte
April 28, 2008 - 9:10 am UTC

use autotrace traceonly explain, the "guesstimate" of the runtime will be there.

ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select count(*) from all_objects;

Execution Plan
----------------------------------------------------------
Plan hash value: 2415720637

----------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |            |     1 |    42 |   397  (10)| <b>00:00:02</b> |
|   1 |  SORT AGGREGATE                             |            |     1 |    42 |            |          |
|*  2 |   FILTER                                    |            |       |       |            |          |
|*  3 |    HASH JOIN                                |            | 25802 |  1058K|   397  (10)| 00:00:02 |
|   4 |     TABLE ACCESS FULL                       | USER$      |    81 |   324 |     3   (0)| 00:00:01 |
....

query

A reader, April 23, 2008 - 6:58 pm UTC

Tom:

select bkno, count(distinct serv_id) from book_serv where serv_id in ('AAA',
'BBB', 'CCC' )
2 group by bkno;

This query does not check the "IN" and "OUT". I need to make sure the last message received was not "OUT" for that book/server. E.g., I can have 3 "IN" messages and then one "OUT", so my condition fails.
Tom Kyte
April 28, 2008 - 10:50 am UTC

(come on, with all that we've done above, you are not starting to see how to do these sorts of things - rather than just taking the answers - please start TAKING APART the answers and seeing what is being done!)

... I need to make sure the last
message received was not "OUT" for that book/server. ...

hmmm, keep dense rank pops into mind?


ops$tkyte%ORA10GR2> select * from book_serv;

    MSG_NO       BKNO SERV_ID    DIR                  STA CREATE_DA
---------- ---------- ---------- -------------------- --- ---------
       100          1 AAA        /XYZ/1/A             IN  28-APR-08
       101          1 AAA        /XYZ/1/A             OUT 28-APR-08
       102          1 BBB        /XYZ/1/B             IN  28-APR-08
       103          1 CCC        /XYZ/1/C             IN  28-APR-08
       104          1 AAA        /XYZ/1/A             IN  28-APR-08
       200          2 AAA        /XYZ/1/A             IN  28-APR-08
       201          2 AAA        /XYZ/1/A             OUT 28-APR-08
       202          2 BBB        /XYZ/1/B             IN  28-APR-08
       203          2 CCC        /XYZ/1/C             IN  28-APR-08
       204          2 AAA        /XYZ/1/A             OUT 28-APR-08

10 rows selected.

ops$tkyte%ORA10GR2> select bkno,
  2         count(distinct serv_id) ,
  3         max(status) KEEP (dense_rank first order by msg_no desc) last_status
  4    from book_serv where serv_id in ('AAA', 'BBB', 'CCC' )
  5   group by bkno;

      BKNO COUNT(DISTINCTSERV_ID) LAS
---------- ---------------------- ---
         1                      3 IN
         2                      3 OUT
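The count-plus-last-status aggregate can be approximated without Oracle's KEEP. A Python/SQLite sketch over the same book_serv data, using ROW_NUMBER() to pick out each book's newest message:

```python
import sqlite3

# book_serv data mirrors Tom's example above (dir column omitted for brevity).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table book_serv (msg_no int primary key, bkno int,
        serv_id text, status text);
    insert into book_serv values
        (100,1,'AAA','IN'),(101,1,'AAA','OUT'),(102,1,'BBB','IN'),
        (103,1,'CCC','IN'),(104,1,'AAA','IN'),
        (200,2,'AAA','IN'),(201,2,'AAA','OUT'),(202,2,'BBB','IN'),
        (203,2,'CCC','IN'),(204,2,'AAA','OUT');
""")

# rn = 1 is the newest message per book; its status is the KEEP/FIRST analog.
rows = conn.execute("""
    select bkno, count(distinct serv_id),
           max(case when rn = 1 then status end) last_status
      from (select bkno, serv_id, status,
                   row_number() over (partition by bkno
                                      order by msg_no desc) rn
              from book_serv
             where serv_id in ('AAA','BBB','CCC'))
     group by bkno
     order by bkno""").fetchall()
```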


problem

A reader, April 27, 2008 - 4:17 pm UTC

Tom:

is there a problem with your site?

I can't get all my previous questions under my email. I've sort of lost track of the history.

I hope this is temporary, as I sometimes can't find previously asked questions by doing a word search.
Tom Kyte
April 28, 2008 - 1:11 pm UTC

I plugged your email in and found 112 of them.

history

A reader, April 29, 2008 - 4:28 pm UTC

Tom:

yes, they show up fine now. I am not sure why they disappeared before.

thanks,

query

A reader, April 29, 2008 - 4:38 pm UTC

Tom:

does this query look fine to you, or would you rewrite it in the subquery format below?

for x in (SELECT a.*,b.name,b.dir
FROM table_A a, table_B b
WHERE a.bk = b.bk(+)
AND a.prog_id = 'ACC'
AND a.ready = 'Y'
AND (b.created_date = (SELECT max(created_date) from TABLE_B
WHERE bk=a.bk)
or b.created_date is null)
)
LOOP


for x in (
select *
from (select * from table_A a, table_B b where a.bk=b.bk(+) and a.y = 'abc' ORDER BY b.CREATED_DATE DESC)
where rownum = 1
)
LOOP
Tom Kyte
April 30, 2008 - 9:21 am UTC

I don't get that logic at all.


is this a loop in a loop? (bad, don't do that)

is this two queries to be compared? Your outer join logic is *whacky*. They are two entirely different queries in this case.

query

A reader, April 29, 2008 - 5:11 pm UTC

Tom:

actually, the above queries are not equivalent. The first can result in many records from the join, but each record in table A will match the last effective date record in table B.

The 2nd query will ALWAYS result in one record. I was trying to use the 2nd format, but I don't think it would work for a record set.
Tom Kyte
April 30, 2008 - 9:29 am UTC

....The 2nd query will ALWAYS result in one record....

not true. zero or one record.

given that I have no clue what your question is, I only have a view of queries that make no sense to me, I cannot help you

access table of other database

Sateesh, April 30, 2008 - 12:55 am UTC

Hi Tom,

Can we access a table residing in one database from another database on a separate system (similar to accessing a table in one schema from another schema in a single database)? Both database systems are connected over a LAN. I have to insert data from the table in one database into a similar table in the other database. Kindly help me.

Thanks in advance.
Tom Kyte
April 30, 2008 - 10:08 am UTC

read about database links - we've been doing that since version 5.0 of Oracle...

query

A reader, April 30, 2008 - 11:18 am UTC

Tom:

They are 2 different queries. Query 1 simply joins table A with table B and pulls out the last record from table B (if it exists).
So if one parent record in table A has 10 children in table B, I want the information for the last child in table B.

This is a cursor. I thought there might be a more efficient way of writing it, like you usually recommend, in query 2.
However, query 2 will not get a list of records.

Would you stick with the query 1 format, or change anything?



------------------
query1

for x in (SELECT a.*,b.name,b.dir
FROM table_A a, table_B b
WHERE a.bk = b.bk(+)
AND a.prog_id = 'ACC'
AND a.ready = 'Y'
AND (b.created_date = (SELECT max(created_date) from TABLE_B
WHERE bk=a.bk)
or b.created_date is null)
)
LOOP
---code
END LOOP;
----------------------------------

-----------------------
query 2

for x in (
select *
from (select * from table_A a, table_B b where a.bk=b.bk(+) and a.y = 'abc' ORDER BY b.CREATED_DATE
DESC)
where rownum = 1
)
LOOP
---code
END LOOP
-----------------------
Tom Kyte
April 30, 2008 - 12:52 pm UTC

select *
  from (
SELECT a.*,b.name,b.dir, row_number() over (partition by a.bk order by b.created_date DESC) rn
  FROM table_A a, table_B b
 WHERE a.bk = b.bk(+)
   AND a.prog_id = 'ACC'
   AND a.ready = 'Y'
       )
 where rn = 1
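A sketch of the row_number() rewrite in Python/SQLite (the table_a/table_b shapes are invented to match the thread). Note that a parent with no children survives without any "or is null" gymnastics: its single NULL-extended row still gets rn = 1.

```python
import sqlite3

# Invented shapes: table_a is the parent, table_b holds dated children.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table table_a (bk int, prog_id text, ready text);
    create table table_b (bk int, name text, dir text, created_date text);
    insert into table_a values (1,'ACC','Y'),(2,'ACC','Y');
    insert into table_b values
        (1,'old','/d/old','2008-01-01'),(1,'new','/d/new','2008-02-01');
""")

# Keep only the newest child per parent; parent 2 has none and still appears.
rows = conn.execute("""
    select bk, name, dir from (
        select a.bk, b.name, b.dir,
               row_number() over (partition by a.bk
                                  order by b.created_date desc) rn
          from table_a a
          left join table_b b on a.bk = b.bk
         where a.prog_id = 'ACC' and a.ready = 'Y')
     where rn = 1 order by bk""").fetchall()
```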

query

A reader, April 30, 2008 - 5:23 pm UTC

Tom:

would this yield a "significant" or a "little" performance improvement?

Should this be the standard for queries of that sort (i.e. using row_number())?
Tom Kyte
April 30, 2008 - 6:11 pm UTC

it will either

a) go faster
b) go slower
c) go the same

as some other query.

If you need to get the "top N rows by something", this is the standard approach.

You wanted the top book by that date, by bkno.


query

A reader, May 01, 2008 - 8:35 pm UTC

Tom:


1. Is it OK to do 3 queries like this instead of a table join?

SELECT DISTINCT contract_no INTO v_contract_no from CONTRACTS@xxx_link
WHERE book_no = 123

SELECT vendor into v_vendor from CONTRACTS@xxx_link
WHERE contract_no = v_contract_no;

SELECT email_addr into v_email_addr FROM VENDORS
WHERE vendor_code = v_vendor and email_addr is not null;



2. If I want to run the send_email procedure only if there is an email address for the company in the table,

is this how you write it?

SELECT email_address into v_email_addr FROM company WHERE company_id=123;

IF (email_address is not null) THEN

mail_pkg.send(....


ELSE

null;

END IF;
Tom Kyte
May 01, 2008 - 9:55 pm UTC

1) how could those three queries be done in a join in general?!?!?!?!

2) well, that would fail.

begin
  select email_address into l_email from t where ...;
  mail_pkg.send( .. );
exception
  when no_data_found 
  then
      null; -- apparently this is OK, we just ignore that we didn't mail
end;
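The PL/SQL NO_DATA_FOUND pattern has a looser analog in client code: in Python's sqlite3, fetchone() returns None for zero rows, so the "send mail only if a row exists" logic reads naturally. This is just a sketch; send_mail is a stand-in, not a real mail API:

```python
import sqlite3

# Illustrative company table from the discussion.
conn = sqlite3.connect(":memory:")
conn.execute("create table company (company_id int, email_address text)")
conn.execute("insert into company values (123, 'a@b.com')")

sent = []
def send_mail(addr):
    # Stand-in for mail_pkg.send - just records the address.
    sent.append(addr)

row = conn.execute(
    "select email_address from company where company_id = ?",
    (123,)).fetchone()
if row is not None:            # PL/SQL would raise NO_DATA_FOUND instead
    send_mail(row[0])
```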




SQL QUERY

REVIEWER, May 02, 2008 - 8:30 am UTC

How can I display the date in a query without the timestamp, other than with the TRUNC and TO_CHAR functions?
EX:
SELECT CMONTH FROM TABLE ORDER BY TO_CHAR(CMONTH,'YYYY'),
TO_CHAR(CMONTH,'MM');
In the above example I have a similar query of the same format, but the output I got is CMONTH displayed with a timestamp. I want the output to display CMONTH in the format "01-APR-2005", not in the format "01.04.2005 00:00:00". Is there any function other than TO_CHAR and TRUNC?

Tom Kyte
May 02, 2008 - 10:28 am UTC

WHAT IS WRONG WITH TO_CHAR WHICH WOULD BE THE CORRECT AND PROPER APPROACH (TRUNC WOULD NOT)

you want to format a date into a pretty string.

the to_char( dt, 'dd-MON-yyyy' ) function is precisely the function to do that.


also, for the love of whatever, do not do your order by that way - good gosh.

order by trunc(cmonth,'mm')


that is the only correct way to order your data by year and month, do NOT go to the extreme expense of TWICE invoking the NLS date code to turn a date into a string to sort strings


when you wanted to sort a DATE.

query

A reader, May 02, 2008 - 10:35 am UTC

1) well, you can do this - sorry, I listed the wrong table for the 1st query. The table should be book_prod.

BOOK_PROD
----------
bk_no <PK>
contract_no <PK>
.........

CONTRACTS
------------
CONTRACT_NO <PK>
VENDOR_ID

VENDORS
-----------
VENDOR_ID <PK>
Email_ADDR
.......



SELECT c.email_addr from book_prod@xxx_link A, contracts@xxx_link B, vendor C
WHERE a.contract_no = b.contract_no AND b.vendor_code = c.vendor_code
AND a.book_no = 123 and c.email_addr is not null;



2) Why would that fail?

SELECT email_address into v_email_addr FROM company WHERE company_id=123;
IF (email_address is not null) THEN
send it;

else
do not do anything

END IF;

I only want to send an email if there is a valid email address in the field.
It seems you always like to use a "NO_DATA_FOUND" exception instead.

What is the benefit of doing it your way?

So usually you can have many, many exceptions for SELECTs in your code instead of IF statements. Doesn't that make it hard to maintain?
Tom Kyte
May 02, 2008 - 10:44 am UTC

1) garbage in, garbage out

if you post something non-sensible, that is all I can respond with.

but - you seem to have already answered your question - you would always join in SQL, not in code, so the single query you have is the only correct way to approach this.


2) sigh, run your code.

SELECT email_address into v_email_addr FROM company WHERE company_id=123;
IF (email_address is not null) THEN
  send it;
  
else 
do not do anything

END IF;



If there is no record for company_id = 123 - what happens.

Oh, a NO_DATA_FOUND exception is raised

and we skip right over your if / else

and fail.


So, the benefit of doing it:
begin
    select into...
    send_mail
exception
    when no_data_found then null;  -- this is OK, expecting no data sometimes!
end;


is that it actually works


... so usually you can have many, many exceptions for SELECTs in your code
instead of IF statements.
...

that statement is absolutely making NO SENSE WHATSOEVER.



There is one specific case you have to deal with here - a select INTO

I would expect NO EXCEPTIONS from any other SELECT - if I got one, it would be a serious runtime error, one I cannot really fix or deal with - so I would never catch it (except maybe to log it and re-raise it, but I do that ONCE at the top level, not at the query level)

And for a select into - there are precisely TWO exceptions you *might* expect


a) no_data_found
b) too_many_rows <<<=== this probably isn't something you want to catch


So, there is exactly ONE exception - for a select INTO style query only - and your if/else stuff just DOES NOT WORK with it at all.

query

A reader, May 02, 2008 - 2:52 pm UTC

Tom:

<but - you seem to have already answered your question - you would always join in SQL, not in code, so the single query you have is the only correct way to approach this. >

What do you mean by only doing joins in SQL, not in code?
SQL is also in PL/SQL, as a cursor.

I can do the same thing in PL/SQL as

SELECT column INTO v_column FROM table_a, table_b
WHERE table_a.pk = table_b.pk

Can you explain?

2. Sorry, I meant you seem to always use an exception instead of

SELECT count(*) into l_Cnt from TABLE
IF (l_cnt = 1)
send email
else
do nothing;
end if;

Is an exception always better?

Tom Kyte
May 02, 2008 - 3:12 pm UTC

1) doing three queries, and assembling the results, would be doing the join in code - your first three queries, you would get their results and "join them together" yourself.

you just JOIN, use sql, use the power of sql, just join in SQL.

your goal: write as little code as possible
your approach: use sql to its fullest



2) your example is nonsense. Look above at your other stuff - there an if/else would be meaningless.

in your current example, what is the point.


My gripe is with people that

a) count rows to see if something needs to be processed
THEN
b) process stuff that needs to be processed


I say

a) process stuff that needs to be processed (and if there ISN'T ANY, there WON'T BE ANY EXCEPTIONS).

there are no exceptions to be dealt with here.

Need Help in Sql Query

Nikunj, May 02, 2008 - 11:16 pm UTC

Hello Tom,

I have a table with the following type of data:

A/C Number Balance
------- -------
000100 5000.00
000101 6000.00
000102 1000.00
001001 9000.00
001002 2000.10
002001 6520.00
002002 75600.00
. .
. .
. .
009001 2222.22
009009 5555.22

Now I need the output of my select query in the following format

A/C Number Balance Total
------- ------- -----
000100 5000.00
000101 6000.00
000102 1000.00
12000.00
001001 9000.00
001002 2000.10
11000.10
002001 6520.00
002002 75600.00
xxxx.xx
. .
. .
. .
009001 2222.22
009009 5555.22
xxxxx.xx
------------
Grand Total xxxxx.xx

I have account numbers of 6 digits, so I need account totals as shown below.

Suppose the account numbers are 000789, 000790 and 000756: I need the total of all account numbers whose first four digits are 0007.

Likewise, for 002011, 002012 and 002013 I need the total of all account numbers starting with 0020.

Thanks in Advance

Regards,
Nikunj Patel


Tom Kyte
May 03, 2008 - 8:46 am UTC

no create, no insert, no answer

this will be a rather simple "group by rollup" on a substr apparently. try it out.
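A sketch of that approach, assuming the data lives in a table with columns acct_no and balance (names are illustrative):

```sql
-- Detail rows, a subtotal per 4-digit prefix, and a grand total, all in
-- one pass. GROUPING() tells the total rows apart from the detail rows.
select decode( grouping(acct_no) + grouping(substr(acct_no,1,4)),
               0, acct_no,
               1, substr(acct_no,1,4) || ' total',
               2, 'Grand Total' ) "A/C Number",
       sum(balance) "Balance"
  from accounts
 group by rollup( substr(acct_no,1,4), acct_no );
```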

query

A reader, May 05, 2008 - 5:30 pm UTC

Tom:

I am a little confused on how you use KEEP and dense_rank. The query does not give the correct answer (I want to reset a flag if a book exists on 3 servers, meaning there are 3 "IN" messages for that book/servers. If there was an "OUT" on any server then it does not exist on all three).

Based on this data, the book exists on servers BBB and CCC only. It was removed from AAA with msg_no 101.
However, the query tells me that the book exists on 3 servers (count = 3) and status is "IN".

create table test(
msg_no  number(10),
bkno   number(10),
serv_id  varchar2(10),
dir        varchar2(20),
status     varchar2(3),
create_date date,
constraint PK_book_serv primary key (msg_no)
);

insert into test values (100,1,'AAA','/XYZ/1/A','IN',sysdate)
/
insert into test values (101,1,'AAA','/XYZ/1/A','OUT',sysdate)
/
insert into test values (102,1,'BBB','/XYZ/1/B','IN',sysdate)
/
insert into test values (103,1,'CCC','/XYZ/1/C','IN',sysdate)
/



SQL> select * from test;

    MSG_NO       BKNO SERV_ID    DIR                  STA CREATE_DATE
---------- ---------- ---------- -------------------- --- --------------------
       100          1 AAA        /XYZ/1/A             IN  05-may-2008 17:24:04
       101          1 AAA        /XYZ/1/A             OUT 05-may-2008 17:24:04
       102          1 BBB        /XYZ/1/B             IN  05-may-2008 17:24:04
       103          1 CCC        /XYZ/1/C             IN  05-may-2008 17:24:05

4 rows selected.

SQL> select bkno,count(distinct serv_id),
  2         max(status) KEEP (dense_rank first order by msg_no desc) last_status
  3         from test where serv_id in ('AAA', 'BBB', 'CCC' )
  4         group by bkno;

      BKNO COUNT(DISTINCTSERV_ID) LAS
---------- ---------------------- ---
         1                      3 IN

1 row selected.

query

A reader, May 05, 2008 - 5:56 pm UTC

Tom:

This seems to work. Is it correct?
If I get a count of 3, that means the book is on all three servers. I am ranking the messages per server in descending order and then filtering the set with rank = 1.


select count(*) from (
select msg_no,bkno,serv_id,status,dense_rank() over (partition by serv_id order by msg_no desc) last_status
from test where serv_id in ('AAA', 'BBB', 'CCC' )
) where last_status = 1 and status = 'IN'

COUNT(*)
----------
3

query

A reader, May 05, 2008 - 7:55 pm UTC

Tom:

I have several procedures that use this query format instead of the 2nd one you listed.

Do you advise to keep it as is or rewrite/retest everything to your other format?

I believe both should yield the same output.

CURSOR ONE

for x in (SELECT a.*,b.name,b.dir
FROM table_a a, table_b b
WHERE a.bk = b.bk(+)
AND a.prog_id = 'ACC'
AND a.ready = 'Y'
AND (b.created_date = (SELECT max(created_date) from TABLE_B
WHERE bk=a.bk)
or b.created_date is null)
)
LOOP
---code
END LOOP;
----------------------------------


CURSOR TWO

for x in (

select *
from (
SELECT a.*,b.name,b.dir, row_number() over (partition by a.bk order by b.created_date DESC) rn
FROM table_a a, table_b b
WHERE a.bk = b.bk(+)
AND a.prog_id = 'ACC'
AND a.ready = 'Y'
)
where rn = 1

)

LOOP

END LOOP;

Sachin, May 16, 2008 - 12:29 am UTC

Hi Tom,

Look into this query. The query hardcodes the object types as columns, so if any owner adds a new object type like XYZ we will have to mention it in the query; it is not a dynamic query.

I want to make this query dynamic.

Can you suggest how to do this?

select DECODE(GROUPING(a.owner), 1, 'All Owners',
a.owner) AS "Owner",
count(decode( a.object_type,'TABLE',1,null)) "Tables",
count(decode( a.object_type,'INDEX' ,1,null)) "Indexes",
count(decode( a.object_type,'PACKAGE',1,null)) "Packages",
count(decode( a.object_type,'SEQUENCE',1,null)) "Sequences",
count(decode( a.object_type,'TRIGGER',1,null)) "Triggers",
count(decode( a.object_type,'PACKAGE',null,'TABLE',null,'INDEX',null,'SEQUENCE',null,'TRIGGER',null, 1)) "Other",
count(1) "Total"
from dba_objects a
group by rollup(a.owner)
Tom Kyte
May 19, 2008 - 2:34 pm UTC

well, the first thing that pops into head would be:

count(
decode( a.object_type, 'TABLE', null, 'INDEX', null, .... 'TRIGGER', null, 1 )
)

return null for the things YOU KNOW are counted - 1 for all else, we'll just count the 1's


or

count( case when a.object_type NOT IN ('TABLE', 'INDEX', ..., 'TRIGGER') then 1 end )
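Spelled out against the original query (same object types as listed there), the CASE version would be:

```sql
select decode(grouping(a.owner), 1, 'All Owners', a.owner) as "Owner",
       count(case when a.object_type = 'TABLE'    then 1 end) "Tables",
       count(case when a.object_type = 'INDEX'    then 1 end) "Indexes",
       count(case when a.object_type = 'PACKAGE'  then 1 end) "Packages",
       count(case when a.object_type = 'SEQUENCE' then 1 end) "Sequences",
       count(case when a.object_type = 'TRIGGER'  then 1 end) "Triggers",
       -- everything not explicitly counted above falls into "Other"
       count(case when a.object_type not in
             ('TABLE','INDEX','PACKAGE','SEQUENCE','TRIGGER') then 1 end) "Other",
       count(*) "Total"
  from dba_objects a
 group by rollup(a.owner);
```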



A reader, June 15, 2008 - 1:08 pm UTC

hi Tom;
It may be off-topic but I couldn't find any topic about LogMiner.
In order to mine redo logs and archive logs using the LogMiner feature (oracle 10.0.3), is it necessary to perform
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

because without this statement, I cannot see DML statements.

Tom Kyte
June 16, 2008 - 11:38 am UTC

you ask "is it necessary"

The answer to that is "no, it is not necessary".

You can see DML with or without it. However, there will be things you cannot see without it - you and only you can say if that information is necessary to you to make log miner useful to you in your cause.

is it necessary: no, absolutely not
does it add information if you have it on: sure.

GROUP BY and nulls

Mike, June 16, 2008 - 7:38 am UTC

Why do the following queries produce different results:

07:20:45 > select max(dummy) from dual where 1=2;

M
-


1 row selected.

Elapsed: 00:00:00.02
07:20:47 > select max(dummy) from dual where 1=2 group by '?';

no rows selected

Elapsed: 00:00:00.01
07:20:47 > select max(dummy) from dual where 1=2 group by dummy;

no rows selected

Elapsed: 00:00:00.01


Tom Kyte
June 16, 2008 - 12:14 pm UTC

because when an aggregate has "no group", there is by definition "the single group, the group"

when you added a group by, there has to be something there to produce a group, there is nothing to 'group by "?" or group by dummy' - no values, hence no groups.


An aggregate without a group by - by definition returns at least and at most ONE ROW.

An aggregate with a group by - returns a row for each group produced by the group by, zero groups in your case.
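COUNT makes the same point even more visibly:

```sql
-- No GROUP BY: the whole input is "the single group", so the aggregate
-- always returns exactly one row - here, a count of zero.
select count(*) from dual where 1=2;     -- one row: 0

-- With GROUP BY: zero input rows produce zero groups, hence zero rows.
select count(*) from dual where 1=2 group by dummy;   -- no rows selected
```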

a query to create a procedure

A reader, June 16, 2008 - 1:27 pm UTC

Hi Tom,

I'd like to write a query to create a procedure as the following:

loop....
1) show table name;
2) dbms_stats.gather_table_stats(user,'ODS_CONTRIBUTION',ESTIMATE_PERCENT=>10);
3) show time stamp;

I can finish the task of 2 and 3 easily as following:

select 'dbms_stats.gather_table_stats(user,'''||table_name||''', ESTIMATE_PERCENT=>20);'
       ||chr(10)||'select to_char(sysdate, ''DD-Mon-YYYY HH24:MI:SS'') from dual;'
  from user_tables;

How can I add the first function in?

Thank you very much! Yuna

Tom Kyte
June 16, 2008 - 1:55 pm UTC

I can do this easily

create procedure p( p_tname in varchar2)
as
begin
dbms_output.put_line( p_tname );
dbms_stats....
dbms_output.....
end;


you would not create a procedure per table, you just want ONE procedure to do any table.
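A completed version of that sketch might look like this (procedure name and the 10% estimate are illustrative):

```sql
create or replace procedure gather_and_show( p_tname in varchar2 )
as
begin
  -- 1) show the table name
  dbms_output.put_line( p_tname );
  -- 2) gather the statistics
  dbms_stats.gather_table_stats( user, p_tname, estimate_percent => 10 );
  -- 3) show the timestamp
  dbms_output.put_line( to_char(sysdate, 'DD-Mon-YYYY HH24:MI:SS') );
end;
/

-- one anonymous block drives it for every table in the schema
begin
  for t in ( select table_name from user_tables ) loop
    gather_and_show( t.table_name );
  end loop;
end;
/
```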

set autotrace traceonly explain

junhua, June 26, 2008 - 8:56 am UTC

hi, Tom
you said using set autotrace traceonly explain can get a "guesstimate" about the query, but I still can't get the information about time (I am using Oracle 9.2).
SQL> conn epsuser/eps_user@atlord38

SQL> set autotrace traceonly explain
SQL> SELECT /*+ no_merge(in_c) no_merge(ctru)*/
  2  distinct in_c.serial_number serial_number,
  3           to_char(in_c.first_fire_dt, 'mm-dd-yyyy') first_fire_date,
  4           DECODE(in_c.lst_nm,
  5                  NULL,
  6                  in_c.frst_nm,
  7                  in_c.lst_nm || ' ' || in_c.frst_nm) installation_engineer,
  8           in_c.val_cd as technology,
  9           in_c.CUSTOMER_NM customer_name,
 10           in_c.email_id,
 11           rwt.Task_Project_Nm as site_nm
 12    FROM ctsv_tbl_rfr_units ctru,
 13         rfrt_wk_task rwt,
 14         (select distinct ceis.serial_number,
 15                          ceis.first_fire_dt,
 16                          ceis.CUSTOMER_NM,
 17                          ceis.euip_inst_schedule_id,
 18                          ceis.parent_tech_type_id,
 19                          pr.prty_rec_id,
 20                          o.val_cd,
 21                          p.lst_nm,
 22                          p.frst_nm,
 23                          p.email_id
 24            from ctst_equip_inst_schedule ceis,
 25                 ctst_equipment           ce,
 26                 prty_role_bsns           pr,
 27                 lov_prpty                o,
 28                 prty                     p,
 29                 ctsv_rfr_section_status  crss,
 30                 CTSV_RFR_RED_FLAG_REVIEW crrfr
 31           where ceis.first_fire_dt between sysdate and sysdate + 7
 32             and pr.role_id = 'CTIE'
 33             AND pr.ctgy_id = 'RFRCTR'
 34             and pr.bsns_rec_id = ce.unit_number_id
 35             and ce.equip_serial_num = ceis.SERIAL_NUMBER
 36             and o.lov_prpty_id = ceis.parent_tech_type_id
 37             and o.prpty_id = 'EQPPRNTECH'
 38             and p.prty_rec_id = pr.prty_rec_id
 39             and ceis.euip_inst_schedule_id = crss.euip_inst_schedule_id
 40             AND CRRFR.FIRST_FIRE_STATUS <> 'RELEASE'
 41             and CRRFR.UNIT_NUMBER = ceis.SERIAL_NUMBER) in_c
 42   where ctru.TASK_ID = rwt.TASK_ID
 43     and ctru.UNIT_NO = in_c.serial_number
 44   order by serial_number;

explain plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=411 Card=1 Bytes=117
          )

   1    0   SORT (UNIQUE) (Cost=409 Card=1 Bytes=117)
   2    1     NESTED LOOPS (Cost=408 Card=1 Bytes=117)
   3    2       HASH JOIN (Cost=407 Card=1 Bytes=89)
   4    3         VIEW (Cost=403 Card=1 Bytes=67)
   5    4           SORT (UNIQUE) (Cost=400 Card=1 Bytes=143)
   6    5             FILTER
   7    6               FILTER
   8    7                 NESTED LOOPS (Cost=398 Card=1 Bytes=143)
   9    8                   NESTED LOOPS (Cost=314 Card=28 Bytes=3668)
  10    9                     NESTED LOOPS (Cost=249 Card=1 Bytes=125)
  11   10                       NESTED LOOPS (Cost=248 Card=1 Bytes=10
          4)

  12   11                         HASH JOIN (Cost=246 Card=1 Bytes=77)
  13   12                           NESTED LOOPS (Cost=96 Card=82 Byte
          s=4264)

  14   13                             TABLE ACCESS (FULL) OF 'CTST_EQU
          IP_INST_SCHEDULE' (Cost=9 Card=87 Bytes=3393)

  15   13                             TABLE ACCESS (BY INDEX ROWID) OF
           'CTST_EQUIPMENT' (Cost=1 Card=1 Bytes=13)

  16   15                               INDEX (UNIQUE SCAN) OF 'EQUIP_
          SERIAL_NUM_INDX' (UNIQUE)

  17   12                           INDEX (FAST FULL SCAN) OF 'PRTY_RO
          LE_BSNS_PK' (UNIQUE) (Cost=149 Card=507 Bytes=12675)

  18   11                         TABLE ACCESS (BY INDEX ROWID) OF 'PR
          TY' (Cost=2 Card=1 Bytes=27)

  19   18                           INDEX (UNIQUE SCAN) OF 'PRTY_PK' (
          UNIQUE) (Cost=1 Card=1)

  20   10                       TABLE ACCESS (BY INDEX ROWID) OF 'LOV_
          PRPTY' (Cost=1 Card=1 Bytes=21)

  21   20                         INDEX (UNIQUE SCAN) OF 'LOV_PRPTY_PK
          ' (UNIQUE)

  22    9                     TABLE ACCESS (BY INDEX ROWID) OF 'RFRT_S
          ECTION_STATUS' (Cost=65 Card=28 Bytes=168)

  23   22                       INDEX (RANGE SCAN) OF 'INDX_EQUIP_ID'
          (NON-UNIQUE) (Cost=2 Card=139)

  24    8                   TABLE ACCESS (FULL) OF 'RFRT_TASK_UNIT' (C
          ost=3 Card=1 Bytes=12)

  25    6               TABLE ACCESS (BY INDEX ROWID) OF 'STATUS_CODE'
           (Cost=1 Card=1 Bytes=18)

  26   25                 INDEX (UNIQUE SCAN) OF 'STATUS_CODE_PK' (UNI
          QUE)

  27    6               TABLE ACCESS (BY INDEX ROWID) OF 'CTST_EQUIP_I
          NST_SCHEDULE' (Cost=2 Card=1 Bytes=13)

  28   27                 INDEX (UNIQUE SCAN) OF 'XPKCTST_EQUIP_INST_S
          CHEDULE' (UNIQUE) (Cost=1 Card=1)

  29    3         VIEW OF 'CTSV_TBL_RFR_UNITS' (Cost=3 Card=3275 Bytes
          =72050)

  30   29           TABLE ACCESS (FULL) OF 'RFRT_TASK_UNIT' (Cost=3 Ca
          rd=3275 Bytes=55675)

  31    2       TABLE ACCESS (BY INDEX ROWID) OF 'RFRT_WK_TASK' (Cost=
          1 Card=1 Bytes=28)

  32   31         INDEX (UNIQUE SCAN) OF 'PK_TBL_WKF_TASKS' (UNIQUE)

Tom Kyte
June 26, 2008 - 4:15 pm UTC

time was added in 10g

junhua, June 27, 2008 - 8:35 am UTC

Hi Tom,
so for Oracle9i, any solution?
Tom Kyte
June 27, 2008 - 9:28 am UTC

two

a) upgrade to 11g

b) tell us what you really need and how often you need it, there is a 'trick' you might be able to use in 9i, but if you want to do this 1000 times a minute, I'm not going to show you. You wouldn't want to do it.

SQL Query

Venkat, June 27, 2008 - 6:26 pm UTC

Hi Tom,
My question: the first query below will not return any results. The second returns results. I realized, when the first query was not returning any rows, that I made a mistake with the alias. The difference between the first query and the second query is D.EFFDT<=SYSDATE changed to D1.EFFDT<=SYSDATE inside the subquery.
I am curious to know why the first query did not return any rows. I know the issue is with the alias inside the subquery. I would like to know how Oracle is parsing the first query.

CREATE TABLE PS_X_JOB_SBL (EMPLID VARCHAR2(11) NOT NULL,
EMPL_RCD SMALLINT NOT NULL,
EFFDT DATE,
EFFSEQ SMALLINT NOT NULL);


INSERT INTO PS_X_JOB_SBL VALUES ('100028987', 0, TO_DATE('2008-08-28','YYYY-MM-DD'), 0);
INSERT INTO PS_X_JOB_SBL VALUES ('100028987', 0, TO_DATE('2008-06-25','YYYY-MM-DD'), 0);
INSERT INTO PS_X_JOB_SBL VALUES ('100028987', 0, TO_DATE('2008-05-18','YYYY-MM-DD'), 0);
INSERT INTO PS_X_JOB_SBL VALUES ('100028987', 0, TO_DATE('2008-04-01','YYYY-MM-DD'), 0);
INSERT INTO PS_X_JOB_SBL VALUES ('100028987', 0, TO_DATE('1994-01-17','YYYY-MM-DD'), 0);


CREATE TABLE PS_PER_ORG_ASGN (EMPLID VARCHAR2(11) NOT NULL,
EMPL_RCD SMALLINT NOT NULL);

INSERT INTO PS_PER_ORG_ASGN VALUES ('100028987',0);


SELECT A.EMPLID
,A.EMPL_RCD
FROM PS_PER_ORG_ASGN A
,SYSADM.PS_X_JOB_SBL D
WHERE A.EMPLID=D.EMPLID
AND A.EMPL_RCD=D.EMPL_RCD
AND D.EFFDT=(
SELECT MAX(D1.EFFDT)
FROM SYSADM.PS_X_JOB_SBL D1
WHERE D.EMPLID=D1.EMPLID
AND D.EMPL_RCD=D1.EMPL_RCD
AND D.EFFDT<=SYSDATE)
AND D.EFFSEQ=(
SELECT MAX(D1.EFFSEQ)
FROM SYSADM.PS_X_JOB_SBL D1
WHERE D.EMPLID=D1.EMPLID
AND D.EMPL_RCD=D1.EMPL_RCD
AND D.EFFDT=D1.EFFDT)
AND A.EMPLID = '100028987';


SELECT A.EMPLID
,A.EMPL_RCD
FROM PS_PER_ORG_ASGN A
,SYSADM.PS_X_JOB_SBL D
WHERE A.EMPLID=D.EMPLID
AND A.EMPL_RCD=D.EMPL_RCD
AND D.EFFDT=(
SELECT MAX(D1.EFFDT)
FROM SYSADM.PS_X_JOB_SBL D1
WHERE D.EMPLID=D1.EMPLID
AND D.EMPL_RCD=D1.EMPL_RCD
AND D1.EFFDT<=SYSDATE)
AND D.EFFSEQ=(
SELECT MAX(D1.EFFSEQ)
FROM SYSADM.PS_X_JOB_SBL D1
WHERE D.EMPLID=D1.EMPLID
AND D.EMPL_RCD=D1.EMPL_RCD
AND D.EFFDT=D1.EFFDT)
AND A.EMPLID = '100028987';
Tom Kyte
June 28, 2008 - 1:27 pm UTC

subqueries have access to the outer query

select dummy from dual
where exists (select dummy from all_users)

works great because it is really

select dual.dummy from dual
where exists ( select dual.dummy from all_users)

it is called a correlated subquery - a subquery has total access to the values supplied by the surrounding query
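Applied to the first query above, with the data posted: inside the subquery, D.EFFDT refers to the outer row, so D.EFFDT<=SYSDATE is not a filter on the rows being aggregated at all.

```sql
-- For the outer row with EFFDT = 2008-08-28 (in the future when this was
-- asked), the correlated predicate D.EFFDT <= SYSDATE is false for EVERY
-- row of D1, so MAX() aggregates no rows and returns NULL:
SELECT MAX(D1.EFFDT)
  FROM SYSADM.PS_X_JOB_SBL D1
 WHERE D.EMPLID   = D1.EMPLID    -- correlated to the outer row
   AND D.EMPL_RCD = D1.EMPL_RCD
   AND D.EFFDT   <= SYSDATE;     -- also correlated: true or false per OUTER row
-- For every other outer row the predicate is true for all of D1, so the
-- subquery returns the overall maximum, 2008-08-28, which never equals
-- that outer row's EFFDT. Hence: no rows at all.
```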

SQL Query

Venkat, June 29, 2008 - 9:29 pm UTC

Hi Tom,
I still did not get why the first query was not fetching rows and the second one does. Could you explain?

Thanks,

Venkat

junhua, July 02, 2008 - 10:45 pm UTC

hi tom,
we need to run it one time every day.

Tom, another question, about PL/SQL tuning: how do I locate the bottleneck, dbms_profiler? Any other suggestions?

Correlated Subquery

Anwar, July 03, 2008 - 1:26 am UTC

Venkat,
It is a correlated subquery. The inner query is evaluated for each record of the outer query. The following may help you understand why one query fetches data and the other does not:

-----------
--First query
-----------

D.EFFDT D1.EFFDT
---------- ----------
2008-08-28 = NULL (Do not fetch) (since D.EFFDT<=SYSDATE is not true for this outer row, MAX aggregates no rows)
2008-06-25 = 2008-08-28 (Do not fetch)
2008-05-18 = 2008-08-28 (Do not fetch)
2008-04-01 = 2008-08-28 (Do not fetch)
1994-01-17 = 2008-08-28 (Do not fetch)

----------
--Second Query
----------

2008-08-28 = 2008-06-25 (Do not Fetch)
2008-06-25 = 2008-06-25 (Fetch)
2008-05-18 = 2008-06-25 (Do not fetch)
2008-04-01 = 2008-06-25 (Do not fetch)
1994-01-17 = 2008-06-25 (Do not fetch)


query

A reader, July 29, 2008 - 9:35 pm UTC

Tom:

I have a query that serves to list books ready for review. It serves like a queue.

select * from booK_rev where status = 'A' and rownum < (select parm_value from parm where parm_name='no_rows');


create table book_rev
(bkno number(10),
status varchar2(1),
created_date date,
assign_date date );

create table parm
(parm_name varchar2(10),
parm_value number(30) );


insert into parm values ('no_rows',15)
/
insert into parm values ('no_days',10)
/


insert into book_rev values (1000,'A','19-JUL-2008','21-JUL-2008')
/
insert into book_rev values (1001,'A','20-JUL-2008',null)
/
insert into book_rev values (1002,'A','21-JUL-2008',null)
/
insert into book_rev values (1003,'A','24-JUL-2008','25-JUL-2008')
/
insert into book_rev values (1004,'A','25-JUL-2008','26-JUL-2008')
/
insert into book_rev values (1005,'A','2-AUG-2008',null)
/
insert into book_rev values (1006,'A','3-AUG-2008',null)
/


However, I want to add a condition: if a book was not assigned (assign_date is null) within the number of days specified by no_days, then freeze the list and do not add any books to it.

For example, if I run the query against the above data on:

7/29 I should get 1000, 1001, 1002, 1003, 1004, since no unassigned book has yet gone more than 10 days past its created date.

8/5 the list should freeze, meaning it won't add 1005 and 1006, because 1001 is still unassigned.

I think to implement this I have to find the created date of the first unassigned book in the list
and then filter my main query by < (that_created_date + No_assign_days).

Is this logic correct and is this how you would write the sql?


select * from book_rev where status = 'A' and rownum < (select parm_value from parm where parm_name='no_rows')
and created_Date < (select created_date+(select parm_value from parm where parm_name='no_days') from book_rev where status='A' and assign_date is null and rownum=1) order by created_date
/
Tom Kyte
August 02, 2008 - 4:59 pm UTC

I don't know what you mean by "freeze" in this context, you write:

... 8/5 the list should freeze meaning that it wont add 1005 and 1006 because 1001
was not assigned.
...

which SEEMS to indicate that the set returned is not empty, it has what was returned on 7/29 apparently - but it is not at all clear.

pseudo code would be very helpful.

sql

A reader, August 03, 2008 - 9:01 pm UTC

Tom:

Freeze means "keep result set the same". Do not add any new records/books to it.

Yes the result set on 7/29 is not empty.

I am not sure if this will explain it better.

If you have a library and people walk in. you issue a ticket (number/date-time) for every person that walks in and you need to service. They have a rule to shut the main door and block people if one person is inside and has not been serviced within 4 hours to force the people who work inside (clerks) to assist him.

So if 10 customers are inside and one person's ticket went over 4 hours, you shut the main door (freeze the list) until this person is serviced by a clerk. After this person gets serviced, you check if any other person inside exceeded the 4 hour limit; if not, you open the door and let more people in.

The same logic applies later. Is this something doable in SQL?


Tom Kyte
August 04, 2008 - 1:04 pm UTC

the result set is always changing, your terminology must change. There is not any such concept as "freeze a result set", we do not add or subtract from them. We take your query and run it. That is all

You however have to precisely define what your desired result set is.

Do you want to prevent inserts? (That is what your description seems to say - we'd need to look at the transaction that creates data - you want to shut the door)


So, rather than use an analogy, give us some pseudo code - if the data is allowed to be inserted and you just don't want to select it - tell us the logic, the filter you would apply (in pseudo code, need not be in sql, but it does need to be "technical" in nature - talk about your data)


Jermin Dawoud, August 08, 2008 - 11:29 pm UTC

Tom,

There is a transaction created by each machine for out of service "O/S" and back in service "I/S" event.

However it is possible that they are not paired for whatever reason.

I am trying to match "the first out of service txn" with "the first back in service txn"
having the same ticket serial number (as no ticket will be generated while the device is out of service)
to get how long the device was out of service.

Please note: The ticket serial number can be reset to 1 if the machine is reset or when a certain limit is reached
(i.e. it cannot be used in partition by ...). In this special case I have to match the last O/S txn with the first I/S having ticket # = 1



I need to get the output listed below for the following test data.

Many thanks.



CREATE TABLE TEST_OS
(
MACHINE_ID NUMBER(4) NOT NULL,
TXN_DATE_TIME DATE NOT NULL,
TXN_NO NUMBER(5) NOT NULL,
TICKET NUMBER(6) NOT NULL,
CASH_TYPE NUMBER(3) NOT NULL,
ERROR_NO NUMBER(3) NOT NULL,
START_END_INDICATOR VARCHAR2(5));


INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 05:51','DD/MM/YYYY HH24:MI'),3,103,15,190,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:24','DD/MM/YYYY HH24:MI'),65,162,14,57,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:25','DD/MM/YYYY HH24:MI'),69,163,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:48','DD/MM/YYYY HH24:MI'),100,193,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:48','DD/MM/YYYY HH24:MI'),101,193,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),516,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),517,596,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),518,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),519,596,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:11','DD/MM/YYYY HH24:MI'),523,597,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:12','DD/MM/YYYY HH24:MI'),525,597,14,403,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:13','DD/MM/YYYY HH24:MI'),527,597,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:15','DD/MM/YYYY HH24:MI'),528,597,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:17','DD/MM/YYYY HH24:MI'),530,597,14,119,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:25','DD/MM/YYYY HH24:MI'),531,597,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:25','DD/MM/YYYY HH24:MI'),533,597,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:26','DD/MM/YYYY HH24:MI'),534,597,14,403,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:27','DD/MM/YYYY HH24:MI'),536,597,14,403,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('31/07/2008 13:27','DD/MM/YYYY HH24:MI'),1,1,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:27','DD/MM/YYYY HH24:MI'),2,1,14,119,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:27','DD/MM/YYYY HH24:MI'),3,1,14,177,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:28','DD/MM/YYYY HH24:MI'),4,1,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:10','DD/MM/YYYY HH24:MI'),18,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:12','DD/MM/YYYY HH24:MI'),20,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:15','DD/MM/YYYY HH24:MI'),22,596,15,403,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 01:31','DD/MM/YYYY HH24:MI'),1,13631,89,177,'Start');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 01:33','DD/MM/YYYY HH24:MI'),4,13631,15,4,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 05:34','DD/MM/YYYY HH24:MI'),39,13665,15,4,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 18:04','DD/MM/YYYY HH24:MI'),993,14486,14,227,'Start');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 19:35','DD/MM/YYYY HH24:MI'),1003,14486,15,185,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 21:00','DD/MM/YYYY HH24:MI'),1083,14547,14,50,'Start');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 21:03','DD/MM/YYYY HH24:MI'),1096,14547,15,4,'End');


MACHINE_ID TXN_DATE_TIME TXN_NO TICKET CASH_TYPE ERROR_NO END_TXN_NO END_TXN_DATE
---------- -------------------- ---------- ---------- ---------- ---------- ---------- -------------
1601 29-jul-2008:07:24:00 65 162 14 57
1601 29-jul-2008:07:48:00 100 193 14 48 101 29-jul-2008:07:48:00
1601 29-jul-2008:13:10:00 516 596 14 48 517 29-jul-2008:13:10:00
1601 29-jul-2008:13:10:00 518 596 14 48 519 29-jul-2008:13:10:00
1601 29-jul-2008:13:11:00 523 597 14 48 528 30-jul-2008:13:15:00
1601 30-jul-2008:13:17:00 530 597 14 119 531 30-jul-2008:13:25:00
1601 30-jul-2008:13:26:00 534 597 14 403 1 31-jul-2008:13:27:00
1601 01-aug-2008:13:27:00 2 1 14 119 4 01-aug-2008:13:28:00
1601 02-aug-2008:13:10:00 18 596 14 48 22 02-aug-2008:13:15:00
2016 29-jul-2008:01:31:00 1 13631 89 177 4 29-jul-2008:01:33:00
2016 29-jul-2008:18:04:00 993 14486 14 227 1003 29-jul-2008:19:35:00
2016 29-jul-2008:21:00:00 1083 14547 14 50 1096 29-jul-2008:21:03:00

Tom Kyte
August 12, 2008 - 4:33 am UTC

you'll need to explain this a tad bit better - you refer to a serial number, but I don't see one (for example)

You refer to O/S and I/S records - but you don't use O/S and I/S in your example?

Make it painfully obvious what everything is and means, use consistent names and terms.

Is it sufficient to say that for a given machine, you want to pair each out of service (o/s) record with the next i/s record (if available) - so all we need to do is take each o/s read, use lead to get the next record - nulling out the values if the next record is another o/s record... and then just keeping the o/s records which now have been matched to the next i/s record by machine id.
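Under the assumptions in the paragraph above (pair each 'Start' row with the very next row only when it is an 'End' row; the ticket-reset special case is deliberately not handled), a sketch of that LEAD approach would be:

```sql
select machine_id, txn_date_time, txn_no, ticket, cash_type, error_no,
       -- null out the match when the next row is another 'Start'
       case when next_ind = 'End' then next_txn_no   end end_txn_no,
       case when next_ind = 'End' then next_txn_date end end_txn_date
  from ( select t.*,
                lead(start_end_indicator) over (partition by machine_id
                     order by txn_date_time, txn_no) next_ind,
                lead(txn_no)        over (partition by machine_id
                     order by txn_date_time, txn_no) next_txn_no,
                lead(txn_date_time) over (partition by machine_id
                     order by txn_date_time, txn_no) next_txn_date
           from test_os t )
 where start_end_indicator = 'Start';
```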

Jermin Dawoud, August 09, 2008 - 6:16 am UTC

Hi Tom,

Following is my SQL for the question mentioned above. I got the required output however I don't think it is the best. Would you please have a look?

Many thanks.

select machine_id,
txn_date_time,
txn_no,
ticket,
cash_type,
error_no,
case
when lead_new_tkt <> 1
then
lead_txn_no
else
case when lead_tkt <> 1
then null
else
CASE WHEN (LEAD_START_END_INDICATOR = START_END_INDICATOR)
THEN NULL ELSE lead_txn_no
END
end
end end_txn_no,
case
when lead_new_tkt <> 1
then
lead_txn_date
else
case when lead_tkt <> 1
then null
else CASE WHEN (LEAD_START_END_INDICATOR = START_END_INDICATOR)
then
null else lead_txn_date
end
end
end end_txn_date
from
(
select c.*,
lead(new_tkt) over (partition by machine_id order by txn_date_time, txn_no ) lead_new_tkt,
LEAD(TXN_NO) OVER (partition by machine_id order by txn_date_time, txn_no ) lead_txn_no,
lead(txn_date_time) over (partition by machine_id order by txn_date_time, txn_no ) lead_txn_date,
lead(ticket) over (partition by machine_id order by txn_date_time, txn_no ) lead_tkt,
lead(START_END_INDICATOR) over (partition by machine_id order by txn_date_time, txn_no ) lead_START_END_INDICATOR
from
( Select c.*,
LAG(START_END_INDICATOR) OVER (PARTITION BY MACHINE_ID ORDER BY txn_date_time,txn_no) LAG_START_END ,
case
when
LAG(TICKET) OVER (PARTITION BY MACHINE_ID ORDER BY txn_date_time,txn_no) <> ticket
then 1
else 0
end new_tkt
from
TEST_OS c
) c
where ( (LAG_START_END <> START_END_INDICATOR)
OR (LAG_START_END IS NULL AND START_END_INDICATOR = 'Start')
OR (new_tkt = 1)
)
order by machine_id,txn_date_time
)c
where START_END_INDICATOR = 'Start'
Tom Kyte
August 12, 2008 - 4:34 am UTC

you'll need to address my comments above - thanks

Reader

Pat, August 12, 2008 - 12:57 pm UTC

Excellent, I gained a lot of knowledge just browsing this site.

I have a timestamp math question. I want the average time (xe - xs) grouped by id, in the format
x days xx hours xx minutes xx.xxx seconds.


create table te ( id number,xs timestamp, xe timestamp);

insert into te values ( 1,SYSTIMESTAMP + interval '01' minute + INTERVAL '01.111' SECOND
, SYSTIMESTAMP + interval '01' minute + INTERVAL '02.222' SECOND);
insert into te values ( 1,SYSTIMESTAMP + interval '02' minute + INTERVAL '00.111' SECOND
, SYSTIMESTAMP + interval '02' minute + INTERVAL '00.222' SECOND);
insert into te values ( 2,SYSTIMESTAMP + interval '06' hour + interval '44' minute + INTERVAL '01.111' SECOND
, SYSTIMESTAMP + interval '07' hour + interval '00' minute + INTERVAL '05.234' SECOND);
insert into te values ( 2,SYSTIMESTAMP + interval '06' hour + interval '43' minute + INTERVAL '02.222' SECOND
, SYSTIMESTAMP + interval '07' hour + interval '00' minute + INTERVAL '08.568' SECOND );

select * from te;
ID XS XE
1 12-AUG-08 08.58.44.939628 AM 12-AUG-08 08.58.46.050628 AM
1 12-AUG-08 08.59.43.950883 AM 12-AUG-08 08.59.44.061883 AM
2 12-AUG-08 03.41.44.970998 PM 12-AUG-08 03.57.49.093998 PM
2 12-AUG-08 03.40.46.106369 PM 12-AUG-08 03.57.52.452369 PM


I tried converting everything into milliseconds to aggregate, but I was not able to convert back to the format I want.

select
id
, avg((extract(day from xe)-extract(day from xs))* (24 * 60 * 60)
+(extract(hour from xe)-extract(hour from xs))* (60 * 60)
+ (extract(minute from xe)-extract(minute from xs))*60
+ extract(second from xe)-extract(second from xs)*1000
)c
from te
group by id

I appreciate your help.
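One possible approach (a sketch, not from the original thread): subtracting two timestamps already yields an INTERVAL DAY TO SECOND, so the components can be extracted from the difference directly. Note the attempt above multiplies only the final seconds term by 1000 because of operator precedence. Averaging the total elapsed seconds and converting back with NUMTODSINTERVAL gives a displayable days/hours/minutes/seconds value:

```sql
-- Sketch: xe - xs is an INTERVAL DAY TO SECOND; sum its components
-- into seconds, average per id, then convert back for display.
select id,
       numtodsinterval(
         avg( extract(day    from (xe - xs)) * 86400
            + extract(hour   from (xe - xs)) * 3600
            + extract(minute from (xe - xs)) * 60
            + extract(second from (xe - xs)) ),
         'second') avg_elapsed
  from te
 group by id;
```

NUMTODSINTERVAL renders as +DD HH:MI:SS.FFF; EXTRACT can be applied to that result again if a literal "x days xx hours ..." string is needed.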

Jermin Dawoud, August 13, 2008 - 8:01 am UTC

Sorry, I'll try to be clearer...

There is a transaction created by the devices for each out-of-service "O/S" event (START_END_INDICATOR = 'Start') and each back-in-service
"I/S" event (START_END_INDICATOR = 'End').

However, it is possible that they are not paired, for whatever reason (for example, a missing transaction...)


I am trying to match "the first out of service txn" with "the first back in service txn following it"
having the same ticket (as no ticket will be generated while the device is out of service)
to get how long the device was out of service. (Many O/S "START_END_INDICATOR = 'Start'" rows can be followed by one or more I/S "START_END_INDICATOR = 'End'" rows, or vice versa.)

Please note: the ticket can be reset to 1 if the machine is reset or when a certain limit is reached
(i.e. it cannot be used in a PARTITION BY ...). In this special case I have to match the last O/S txn with the first I/S txn having ticket = 1.


example:


MACHINE_ID TXN_DATE_TIME TXN_NO TICKET CASH_TYPE ERROR_NO START
---------- -------------------- ---------- ---------- ---------- ---------- -----
1601 29-jul-2008:05:51:00 3 103 15 190 End <---no start so it will not be considered
1601 29-jul-2008:07:24:00 65 162 14 57 Start<---start but no end as the following end has different ticket
1601 29-jul-2008:07:25:00 69 163 15 1 End

1601 29-jul-2008:07:48:00 100 193 14 48 Start ---|will be paired with the following end
1601 29-jul-2008:07:48:00 101 193 15 1 End <---|


1601 29-jul-2008:13:11:00 523 597 14 48 Start --|
1601 29-jul-2008:13:12:00 525 597 14 403 Start |Many starts and one end for the same ticket
1601 30-jul-2008:13:13:00 527 597 14 48 Start |first start will be paired with the first
1601 30-jul-2008:13:15:00 528 597 15 403 End <--|end


1601 30-jul-2008:13:17:00 530 597 14 119 Start ---|
1601 30-jul-2008:13:25:00 531 597 15 403 End <---|many ends and one start

1601 30-jul-2008:13:25:00 533 597 15 403 End


1601 30-jul-2008:13:27:00 536 597 14 403 Start ---|
1601 31-jul-2008:13:27:00 1 1 15 1 End <---|ticket reset to 1 (last start will be paired with the first end)

CREATE TABLE TEST_OS
(
MACHINE_ID NUMBER(4) NOT NULL,
TXN_DATE_TIME DATE NOT NULL,
TXN_NO NUMBER(5) NOT NULL,
TICKET NUMBER(6) NOT NULL,
CASH_TYPE NUMBER(3) NOT NULL,
ERROR_NO NUMBER(3) NOT NULL,
START_END_INDICATOR VARCHAR2(5));




INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 05:51','DD/MM/YYYY HH24:MI'),3,103,15,190,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:24','DD/MM/YYYY HH24:MI'),65,162,14,57,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:25','DD/MM/YYYY HH24:MI'),69,163,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:48','DD/MM/YYYY HH24:MI'),100,193,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:48','DD/MM/YYYY HH24:MI'),101,193,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),516,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),517,596,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),518,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:10','DD/MM/YYYY HH24:MI'),519,596,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:11','DD/MM/YYYY HH24:MI'),523,597,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:12','DD/MM/YYYY HH24:MI'),525,597,14,403,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:13','DD/MM/YYYY HH24:MI'),527,597,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:15','DD/MM/YYYY HH24:MI'),528,597,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:17','DD/MM/YYYY HH24:MI'),530,597,14,119,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:25','DD/MM/YYYY HH24:MI'),531,597,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:25','DD/MM/YYYY HH24:MI'),533,597,15,403,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:26','DD/MM/YYYY HH24:MI'),534,597,14,403,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:27','DD/MM/YYYY HH24:MI'),536,597,14,403,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('31/07/2008 13:27','DD/MM/YYYY HH24:MI'),1,1,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:27','DD/MM/YYYY HH24:MI'),2,1,14,119,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:27','DD/MM/YYYY HH24:MI'),3,1,14,177,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:28','DD/MM/YYYY HH24:MI'),4,1,15,1,'End');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:10','DD/MM/YYYY HH24:MI'),18,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:12','DD/MM/YYYY HH24:MI'),20,596,14,48,'Start');
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:15','DD/MM/YYYY HH24:MI'),22,596,15,403,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 01:31','DD/MM/YYYY HH24:MI'),1,13631,89,177,'Start');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 01:33','DD/MM/YYYY HH24:MI'),4,13631,15,4,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 05:34','DD/MM/YYYY HH24:MI'),39,13665,15,4,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 18:04','DD/MM/YYYY HH24:MI'),993,14486,14,227,'Start');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 19:35','DD/MM/YYYY HH24:MI'),1003,14486,15,185,'End');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 21:00','DD/MM/YYYY HH24:MI'),1083,14547,14,50,'Start');
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 21:03','DD/MM/YYYY HH24:MI'),1096,14547,15,4,'End');


The required output



MACHINE_ID TXN_DATE_TIME TXN_NO TICKET CASH_TYPE ERROR_NO END_TXN_NO END_TXN_DATE
---------- -------------------- ---------- ---------- ---------- ---------- ---------- -------------
1601 29-jul-2008:07:24:00 65 162 14 57
1601 29-jul-2008:07:48:00 100 193 14 48 101 29-jul-2008:07:48:00
1601 29-jul-2008:13:10:00 516 596 14 48 517 29-jul-2008:13:10:00
1601 29-jul-2008:13:10:00 518 596 14 48 519 29-jul-2008:13:10:00
1601 29-jul-2008:13:11:00 523 597 14 48 528 30-jul-2008:13:15:00
1601 30-jul-2008:13:17:00 530 597 14 119 531 30-jul-2008:13:25:00
1601 30-jul-2008:13:26:00 534 597 14 403 1 31-jul-2008:13:27:00
1601 01-aug-2008:13:27:00 2 1 14 119 4 01-aug-2008:13:28:00
1601 02-aug-2008:13:10:00 18 596 14 48 22 02-aug-2008:13:15:00
2016 29-jul-2008:01:31:00 1 13631 89 177 4 29-jul-2008:01:33:00
2016 29-jul-2008:18:04:00 993 14486 14 227 1003 29-jul-2008:19:35:00
2016 29-jul-2008:21:00:00 1083 14547 14 50 1096 29-jul-2008:21:03:00

12 rows selected.

My attempt
select machine_id,
txn_date_time,
txn_no,
ticket,
cash_type,
error_no,
case
when lead_new_tkt <> 1
then
lead_txn_no
else
case when lead_tkt <> 1
then null
else
CASE WHEN (LEAD_START_END_INDICATOR = START_END_INDICATOR)
THEN NULL ELSE lead_txn_no
END
end
end end_txn_no,
case
when lead_new_tkt <> 1
then
lead_txn_date
else
case when lead_tkt <> 1
then null
else CASE WHEN (LEAD_START_END_INDICATOR = START_END_INDICATOR)
then
null else lead_txn_date
end
end
end end_txn_date
from
(
select c.*,
lead(new_tkt) over (partition by machine_id order by txn_date_time, txn_no ) lead_new_tkt,
LEAD(TXN_NO) OVER (partition by machine_id order by txn_date_time, txn_no ) lead_txn_no,
lead(txn_date_time) over (partition by machine_id order by txn_date_time, txn_no ) lead_txn_date,
lead(ticket) over (partition by machine_id order by txn_date_time, txn_no ) lead_tkt,
lead(START_END_INDICATOR) over (partition by machine_id order by txn_date_time, txn_no ) lead_START_END_INDICATOR
from
( Select c.*,
LAG(START_END_INDICATOR) OVER (PARTITION BY MACHINE_ID ORDER BY txn_date_time,txn_no) LAG_START_END ,
case
when
LAG(TICKET) OVER (PARTITION BY MACHINE_ID ORDER BY txn_date_time,txn_no) <> ticket
then 1
else 0
end new_tkt
from
TEST_OS c
) c
where ( (LAG_START_END <> START_END_INDICATOR)
OR (LAG_START_END IS NULL AND START_END_INDICATOR = 'Start')
OR (new_tkt = 1)
)
order by machine_id,txn_date_time
)c
where START_END_INDICATOR = 'Start'



Many thanks for your help.
Tom Kyte
August 18, 2008 - 9:31 am UTC

please use the CODE button, things do not line up, very very very hard to read

can the ticket be reset to 1 over and over and over again - this reset is a 'bad thing' here. how do you avoid assigning the same end ticket over and over and over?

Jermin, August 18, 2008 - 9:30 pm UTC

Hi Tom,

I've tried to reduce the number of records in the table to a minimum (just enough to cover all the possible scenarios).


Yes, the ticket can be reset to 1 over and over again (however, it will be at a different time and the prior O/S txn will be different). This is a real-life scenario: sometimes the only way to get the machine back in service is to reset it (cold start it), meaning txn_no and ticket are all reset to their initial values (1). In this case the prior O/S txn (start_end_indicator = 'Start') should be paired with the I/S txn (start_end_indicator = 'End') having ticket = 1.

CREATE TABLE TEST_OS 
( 
  MACHINE_ID      NUMBER(4) NOT NULL, 
  TXN_DATE_TIME      DATE NOT NULL, 
  TXN_NO        NUMBER(5) NOT NULL, 
  TICKET        NUMBER(6) NOT NULL, 
  CASH_TYPE      NUMBER(3) NOT NULL, 
    ERROR_NO      NUMBER(3) NOT NULL, 
  START_END_INDICATOR  VARCHAR2(5)); 



INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 05:51','DD/MM/YYYY HH24:MI'),3,103,15,190,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:24','DD/MM/YYYY HH24:MI'),65,162,14,57,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:25','DD/MM/YYYY HH24:MI'),69,163,15,1,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:48','DD/MM/YYYY HH24:MI'),100,193,14,48,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 07:48','DD/MM/YYYY HH24:MI'),101,193,15,1,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:11','DD/MM/YYYY HH24:MI'),523,597,14,48,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('29/07/2008 13:12','DD/MM/YYYY HH24:MI'),525,597,14,403,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:13','DD/MM/YYYY HH24:MI'),527,597,14,48,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:15','DD/MM/YYYY HH24:MI'),528,597,15,403,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:17','DD/MM/YYYY HH24:MI'),530,597,14,119,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:25','DD/MM/YYYY HH24:MI'),531,597,15,403,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:25','DD/MM/YYYY HH24:MI'),533,597,15,403,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:26','DD/MM/YYYY HH24:MI'),534,597,14,403,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('30/07/2008 13:27','DD/MM/YYYY HH24:MI'),536,597,14,403,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('31/07/2008 13:27','DD/MM/YYYY HH24:MI'),1,1,15,1,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:27','DD/MM/YYYY HH24:MI'),2,1,14,119,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:27','DD/MM/YYYY HH24:MI'),3,1,14,177,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('01/08/2008 13:28','DD/MM/YYYY HH24:MI'),4,1,15,1,'End'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:10','DD/MM/YYYY HH24:MI'),18,596,14,48,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:12','DD/MM/YYYY HH24:MI'),20,596,14,48,'Start'); 
INSERT INTO TEST_OS VALUES(1601,TO_DATE('02/08/2008 13:15','DD/MM/YYYY HH24:MI'),22,1,15,403,'End'); 
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 01:31','DD/MM/YYYY HH24:MI'),1,13631,89,177,'Start'); 
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 01:33','DD/MM/YYYY HH24:MI'),4,13631,15,4,'End'); 
INSERT INTO TEST_OS VALUES(2016,TO_DATE('29/07/2008 05:34','DD/MM/YYYY HH24:MI'),39,13665,15,4,'End'); 


SQL> select 
  2  machine_id,
  3  txn_date_time,
  4  txn_no,
  5  ticket,
  6  start_end_indicator
  7   from test_os
  8  order by machine_id, txn_date_time;

MACHINE_ID TXN_DATE_TIME           TXN_NO    TICKET START
---------- -------------------- --------- --------- -----
      1601 29-JUL-2008:05:51:00         3       103 End   <------ No Start, will be ignored

      1601 29-JUL-2008:07:24:00        65       162 Start <------ No End as the following End has different ticket

      1601 29-JUL-2008:07:25:00        69       163 End

      1601 29-JUL-2008:07:48:00       100       193 Start ------| Paired together 
      1601 29-JUL-2008:07:48:00       101       193 End   <-----|


      1601 29-JUL-2008:13:11:00       523       597 Start ------|First Start with the 
      1601 29-JUL-2008:13:12:00       525       597 Start       |following  first End 
      1601 30-JUL-2008:13:13:00       527       597 Start       |(Many Starts and one End)
      1601 30-JUL-2008:13:15:00       528       597 End   <-----|

 
      1601 30-JUL-2008:13:17:00       530       597 Start ------|First Start with the
      1601 30-JUL-2008:13:25:00       531       597 End   <-----|following first End 
      1601 30-JUL-2008:13:25:00       533       597 End


      1601 30-JUL-2008:13:26:00       534       597 Start ------|First "Start" with the
      1601 30-JUL-2008:13:27:00       536       597 Start       |following End 
      1601 31-JUL-2008:13:27:00         1         1 End   <-----|(Ticket reset to 1)


      1601 01-AUG-2008:13:27:00         2         1 Start ------|
      1601 01-AUG-2008:13:27:00         3         1 Start       |
      1601 01-AUG-2008:13:28:00         4         1 End   <-----|

      1601 02-AUG-2008:13:10:00        18       596 Start ------|
      1601 02-AUG-2008:13:12:00        20       596 Start       |
      1601 02-AUG-2008:13:15:00        22         1 End   <-----|Ticket reset to 1 again

      2016 29-JUL-2008:01:31:00         1     13631 Start ------| 
      2016 29-JUL-2008:01:33:00         4     13631 End   <-----|

      2016 29-JUL-2008:05:34:00        39     13665 End

24 rows selected.

The required output

MACHINE_ID TXN_DATE_TIME           TXN_NO    TICKET END_TXN_NO END_TXN_DATE
---------- -------------------- --------- --------- ---------- --------------------
      1601 29-JUL-2008:07:24:00        65       162
      1601 29-JUL-2008:07:48:00       100       193        101 29-JUL-2008:07:48:00
      1601 29-JUL-2008:13:11:00       523       597        528 30-JUL-2008:13:15:00
      1601 30-JUL-2008:13:17:00       530       597        531 30-JUL-2008:13:25:00
      1601 30-JUL-2008:13:26:00       534       597          1 31-JUL-2008:13:27:00
      1601 01-AUG-2008:13:27:00         2         1          4 01-AUG-2008:13:28:00
      1601 02-AUG-2008:13:10:00        18       596         22 02-AUG-2008:13:15:00
      2016 29-JUL-2008:01:31:00         1     13631          4 29-JUL-2008:01:33:00

8 rows selected.


My SQL
select  machine_id,
  txn_date_time,
  txn_no,
  ticket,
  case
  when lead_new_tkt <> 1
    then
        lead_txn_no
    else
        case  when lead_tkt <> 1
          then  null
          else
            CASE WHEN (LEAD_START_END_INDICATOR = START_END_INDICATOR)
            THEN NULL ELSE lead_txn_no
            END
        end
  end end_txn_no,
  case
  when lead_new_tkt <> 1
    then
        lead_txn_date
    else
        case  when lead_tkt <> 1
          then  null
          else CASE WHEN (LEAD_START_END_INDICATOR = START_END_INDICATOR)
            then
                null else lead_txn_date
            end
        end
  end end_txn_date
  from
  (
  select  c.*,
    lead(new_tkt) over (partition by machine_id order by txn_date_time, txn_no ) lead_new_tkt,
    LEAD(TXN_NO) OVER (partition by machine_id order by txn_date_time, txn_no )  lead_txn_no,
    lead(txn_date_time) over (partition by machine_id order by txn_date_time, txn_no )  lead_txn_date,
    lead(ticket) over (partition by machine_id order by txn_date_time, txn_no )  lead_tkt,
    lead(START_END_INDICATOR) over (partition by machine_id order by txn_date_time, txn_no ) lead_START_END_INDICATOR
    from
    (  Select c.*,
          LAG(START_END_INDICATOR) OVER (PARTITION BY MACHINE_ID ORDER BY txn_date_time,txn_no) LAG_START_END ,
        case
          when
              LAG(TICKET) OVER (PARTITION BY MACHINE_ID ORDER BY txn_date_time,txn_no) <> ticket
          then 1
          else 0
        end new_tkt
      from
        TEST_OS c
    ) c
    where (  (LAG_START_END <> START_END_INDICATOR)
          OR (LAG_START_END IS NULL AND START_END_INDICATOR = 'Start')
          OR (new_tkt = 1)
        )
    order by machine_id,txn_date_time
  )c
where START_END_INDICATOR = 'Start'
/


Appreciate your help.
Tom Kyte
August 20, 2008 - 10:14 am UTC

if your sql works - go with it - I see no straightforward way to query this sort of mess of data; that resetting ticket number will always make this nearly impossible to do efficiently.

A reader, August 22, 2008 - 1:04 pm UTC

Hi - I have a query as follows:

SELECT COUNT(*) TCNT, CODE1, CODE2, GMTVAL
FROM TAB1
WHERE CODE2 <> '01'
GROUP BY CODE1, CODE2, GMTVAL
ORDER BY CODE2;

I get results as follows:

TCNT CODE1 CODE2 GMTVAL
121 03 03 08/22/2008 1:21:46.000 AM
12 03 03 08/22/2008 1:32:49.000 AM
1 03 04 08/22/2008 1:21:46.000 AM
1 03 04 08/22/2008 1:32:49.000 AM

Now for each distinct code1, code2 I want the max gmtval displayed. In this example, I just want the following results

TCNT CODE1 CODE2 GMTVAL
12 03 03 08/22/2008 1:32:49.000 AM
1 03 04 08/22/2008 1:32:49.000 AM

Can you help ?




Tom Kyte
August 26, 2008 - 7:34 pm UTC

SELECT COUNT(*) TCNT, CODE1, CODE2, max(GMTVAL)
FROM TAB1
WHERE CODE2 <> '01'
GROUP BY CODE1, CODE2
ORDER BY CODE2;
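Note that this returns the count of all rows in each (CODE1, CODE2) group (133 for the first group above, i.e. 121 + 12), while the desired output shows only the count for the rows carrying the latest GMTVAL (12). If that is what is wanted, a sketch using the aggregate KEEP (DENSE_RANK LAST) clause:

```sql
-- Sketch: count only the rows whose GMTVAL equals the group maximum.
select count(*) keep (dense_rank last order by gmtval) tcnt,
       code1, code2,
       max(gmtval) gmtval
  from tab1
 where code2 <> '01'
 group by code1, code2
 order by code2;
```

KEEP (DENSE_RANK LAST ORDER BY gmtval) restricts the COUNT to the rows ranked last by GMTVAL within each group.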


A reader, August 26, 2008 - 2:20 am UTC

CREATE TABLE TEST_RANGE
(
TEST_ID NUMBER(10),
FROM_RANGE VARCHAR2(5 BYTE),
TO_RANGE VARCHAR2(5 BYTE)
);

INSERT INTO TEST_RANGE ( TEST_ID, FROM_RANGE, TO_RANGE ) VALUES (
1, '0051A', '0076A');
INSERT INTO TEST_RANGE ( TEST_ID, FROM_RANGE, TO_RANGE ) VALUES (
2, '0051T', '0098T');
INSERT INTO TEST_RANGE ( TEST_ID, FROM_RANGE, TO_RANGE ) VALUES (
3, '00101', '02008');
INSERT INTO TEST_RANGE ( TEST_ID, FROM_RANGE, TO_RANGE ) VALUES (
4, '03456', '57690');
INSERT INTO TEST_RANGE ( TEST_ID, FROM_RANGE, TO_RANGE ) VALUES (
5, '1237T', '6789T');
INSERT INTO TEST_RANGE ( TEST_ID, FROM_RANGE, TO_RANGE ) VALUES (
6, '7892T', '9345T');
COMMIT;


When I search for '00610', I want to know which range it falls in and get only the following record, not the records whose bounds end in a letter:
3 00101 02008


When I search for '0061T', I want to get only the following record, whose bounds end in the letter 'T':

2 0051T 0098T
Tom Kyte
August 26, 2008 - 9:24 pm UTC

ok, thanks for letting me know that.

now, if you let me in on the LOGIC behind that, I'd be very very much happier than I am now.

define the logic in pseudo code and define - precisely define - what a 'character' is to you.

A reader, August 27, 2008 - 7:15 am UTC

Sorry, I could not put my question properly. Let me try to explain it.

select *
from test_range
where '00610' between from_range and to_range

TEST_ID FROM_RANGE TO_RANGE

1 0051A 0076A
2 0051T 0098T
3 00101 02008

It returns 3 records, but I want to get only the 3rd one, that is: 3 00101 02008

select *
from test_range
where '0061T' between from_range and to_range

TEST_ID FROM_RANGE TO_RANGE

1 0051A 0076A
2 0051T 0098T
3 00101 02008

It returns 3 records, but I want to get only the 2nd one, that is: 2 0051T 0098T

It is required that if we search for a value which contains a letter, we get back the ranges that contain that letter. If the search value does not contain a letter, we get back only the all-numeric ranges.

If there is a letter (A - Z), it will always be the last character.
Tom Kyte
August 27, 2008 - 10:01 am UTC

isn't that just:

where :bind between from_range and to_range
and (
      (
      substr(:bind,length(:bind)) = substr(from_range,length(from_range))
      and
      substr(:bind,length(:bind)) = substr(to_range,length(to_range))
      )
      OR
      (substr(:bind,length(:bind)) between '0' and '9'
       and
       substr(from_range,length(from_range)) between '0' and '9'
       and
       substr(to_range,length(to_range)) between '0' and '9'
      )
   )


little different from what you stated - safer typically to look at the digits rather than try to define what "alphabetic" means (character sets..)
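For illustration, the same predicate with the poster's second search value inlined as a literal (a sketch; with the sample data it should return only the 0051T/0098T row, since the trailing 'T' matches that range and fails the all-digits branch for the numeric range):

```sql
-- Sketch: the predicate above applied with '0061T' in place of :bind.
select *
  from test_range
 where '0061T' between from_range and to_range
   and (
         (     substr('0061T', length('0061T')) = substr(from_range, length(from_range))
           and substr('0061T', length('0061T')) = substr(to_range,   length(to_range)) )
      or (     substr('0061T', length('0061T')) between '0' and '9'
           and substr(from_range, length(from_range)) between '0' and '9'
           and substr(to_range,   length(to_range))   between '0' and '9' )
       );
```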



A reader, August 27, 2008 - 5:59 pm UTC

CREATE TABLE t (
   value     VARCHAR2(20),
   timestamp DATE
);

INSERT INTO t VALUES ('abc', SYSDATE - 35);
INSERT INTO t VALUES ('abc', SYSDATE - 5);
INSERT INTO t VALUES ('xyz', SYSDATE - 8);
INSERT INTO t VALUES ('xyz', SYSDATE - 10);

SELECT value,
       SUM(seven_day)  seven_day,
       SUM(thirty_day) thirty_day,
       SUM(total)      total
FROM (
   SELECT value,
         CASE WHEN age < 7  THEN 1 ELSE 0 END seven_day,
         CASE WHEN age < 30 THEN 1 ELSE 0 END thirty_day,
         1 Total
   FROM (
      SELECT value,
             TRUNC(SYSDATE) - TRUNC(timestamp) age
      FROM   t
      WHERE  value IN ('abc', 'xyz')
   )
)
GROUP BY value;

VALUE                 SEVEN_DAY THIRTY_DAY      TOTAL
-------------------- ---------- ---------- ----------
abc                           1          1          2
xyz                           0          2          2




Now I add another value in the in list that does not have a matching record in the table

SELECT value,
       SUM(seven_day)  seven_day,
       SUM(thirty_day) thirty_day,
       SUM(total)      total
FROM (
   SELECT value,
         CASE WHEN age < 7  THEN 1 ELSE 0 END seven_day,
         CASE WHEN age < 30 THEN 1 ELSE 0 END thirty_day,
         1 Total
   FROM (
      SELECT value,
             TRUNC(SYSDATE) - TRUNC(timestamp) age
      FROM   t
      WHERE  value IN ('abc', 'xyz', 'def')
   )
)
GROUP BY value;



Instead of getting


VALUE                 SEVEN_DAY THIRTY_DAY      TOTAL
-------------------- ---------- ---------- ----------
abc                           1          1          2
xyz                           0          2          2




I'd like the output in the following format. Is it possible to do this? Note that my in list can potentially have many values.

VALUE                 SEVEN_DAY THIRTY_DAY      TOTAL
-------------------- ---------- ---------- ----------
abc                           1          1          2
xyz                           0          2          2
def                           0          0          0



Tom Kyte
August 29, 2008 - 1:23 pm UTC

ops$tkyte%ORA10GR2> variable txt varchar2(100);
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec :txt := 'abc,xyz,def'

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with data
  2  as
  3  (
  4  select
  5    trim( substr (txt,
  6          instr (txt, ',', 1, level  ) + 1,
  7          instr (txt, ',', 1, level+1)
  8             - instr (txt, ',', 1, level) -1 ) )
  9      as token
 10     from (select ','||:txt||',' txt
 11             from dual)
 12   connect by level <=
 13      length(:txt)-length(replace(:txt,',',''))+1
 14   )
 15  select data.token, trunc(sysdate)-trunc(t.timestamp) age
 16    from t, data
 17   where data.token = t.value(+);

TOKEN           AGE
-------- ----------
abc              35
abc               5
xyz               8
xyz              10
def

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with data
  2  as
  3  (
  4  select
  5    trim( substr (txt,
  6          instr (txt, ',', 1, level  ) + 1,
  7          instr (txt, ',', 1, level+1)
  8             - instr (txt, ',', 1, level) -1 ) )
  9      as token
 10     from (select ','||:txt||',' txt
 11             from dual)
 12   connect by level <=
 13      length(:txt)-length(replace(:txt,',',''))+1
 14   )
 15  select data.token,
 16         sum(case when trunc(sysdate)-trunc(t.timestamp) < 7 then 1 else 0 end) seven,
 17         sum(case when trunc(sysdate)-trunc(t.timestamp) < 30 then 1 else 0 end) thirty,
 18             count(t.timestamp)  cnt
 19    from t, data
 20   where data.token = t.value(+)
 21   group by data.token
 22   order by data.token;

TOKEN         SEVEN     THIRTY        CNT
-------- ---------- ---------- ----------
abc               1          1          2
def               0          0          0
xyz               0          2          2


A reader, September 02, 2008 - 12:48 pm UTC

Let's say I changed the data so that the value column contains mixed case. For example, now there are 'abc' and 'ABC' in the table (see below). I'd like the result set to be case insensitive. However, the value/token column should return whatever the input string (:txt) provided. The output of the SQL would look like the following. Thanks.
DELETE FROM t;

INSERT INTO t VALUES ('abc', SYSDATE - 35);
INSERT INTO t VALUES ('ABC', SYSDATE - 5);
INSERT INTO t VALUES ('xyz', SYSDATE - 8);
INSERT INTO t VALUES ('xyz', SYSDATE - 10);

exec :txt := 'abc,xyz,def'

-- Run the SQL


TOKEN         SEVEN     THIRTY        CNT
-------- ---------- ---------- ----------
abc               1          1          2
def               0          0          0
xyz               0          2          2


exec :txt := 'ABC,xyz,def'

-- Run the SQL

TOKEN         SEVEN     THIRTY        CNT
-------- ---------- ---------- ----------
ABC               1          1          2
def               0          0          0
xyz               0          2          2


Tom Kyte
September 02, 2008 - 1:41 pm UTC

we are already displaying whatever the token was in the input string, you just need to change the join condition

where lower(data.token) = lower(t.value(+))
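The same case-insensitive outer join can also be written in ANSI syntax, which sidesteps the question of where the (+) operator may appear. A sketch, reusing the token-splitting factored subquery from the earlier answer:

```sql
-- Sketch: ANSI LEFT OUTER JOIN version of the case-insensitive lookup.
with data as (
  select trim( substr (txt,
               instr (txt, ',', 1, level  ) + 1,
               instr (txt, ',', 1, level+1)
                  - instr (txt, ',', 1, level) - 1 ) ) as token
    from (select ','||:txt||',' txt from dual)
 connect by level <= length(:txt) - length(replace(:txt,',','')) + 1
)
select data.token,
       sum(case when trunc(sysdate) - trunc(t.timestamp) < 7  then 1 else 0 end) seven,
       sum(case when trunc(sysdate) - trunc(t.timestamp) < 30 then 1 else 0 end) thirty,
       count(t.timestamp) cnt
  from data
  left outer join t on lower(data.token) = lower(t.value)
 group by data.token
 order by data.token;
```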

sql query

P, September 10, 2008 - 2:45 pm UTC

I have a big table (around 2 billion rows). Even when I run a simple query like select count(1), it takes around 17 minutes. The table is not partitioned. The table has an index on its primary key.

Can you please tell me how I can get the result faster?

SQL> select count(1) from app.prod_list;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=33589 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'PROD_LIST_2IX' (INDEX) (Cost
=33589 Card=65319966)





Statistics
----------------------------------------------------------
15 recursive calls
0 db block gets
1158630 consistent gets
1135408 physical reads
1800032 redo size
223 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Tom Kyte
September 11, 2008 - 11:19 am UTC

... 1135408 physical reads ...

make your IO faster - think about this please.....

and ask yourself "why the heck am I counting rows, what is the point of that, besides burning a lot of resources, of what POSSIBLE USE is that number"
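As an aside (not part of the original answer): when an approximate count is acceptable, the optimizer statistics gathered for the table already hold one, and reading it costs nothing:

```sql
-- Sketch: approximate row count from the data dictionary; accurate
-- only as of LAST_ANALYZED, but avoids scanning 2 billion rows.
select num_rows, last_analyzed
  from all_tables
 where owner = 'APP'
   and table_name = 'PROD_LIST';
```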


P, September 11, 2008 - 8:20 pm UTC

Correct. I agree with you, so I ran the real query and got the following output. What should I look at to reduce IO? The query runs against a read-only database. Are there any init parameters in particular I need to look at again?

SQL> show sga

Total System Global Area 4294967296 bytes
Fixed Size 2147872 bytes
Variable Size 1438562784 bytes
Database Buffers 2852126720 bytes
Redo Buffers 2129920 bytes
SQL> show parameter opti

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.2
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2

SQL> select a.prod_id, a.catl_num, b.bill_seq, b.oper_id, b.appl_id
  2  from app.prod_list a, app.billing_info b where a.prod_id = b.prod_id;

156589154 rows selected.

Elapsed: 07:11:00.02

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=340806 Card=15655206
5 Bytes=3600697495)

1 0 HASH JOIN (Cost=340806 Card=156552065 Bytes=3600697495)
2 1 TABLE ACCESS (FULL) OF 'BILLING_INFO' (TABLE) (Cost=2
02502 Card=34575655 Bytes=553210480)

3 1 INDEX (FAST FULL SCAN) OF 'PROD_ID_3IX' (INDEX) (Cost
=61739 Card=156552065 Bytes=1095864455)





Statistics
----------------------------------------------------------
2515 recursive calls
582 db block gets
2996958 consistent gets
3297323 physical reads
4644800 redo size
2425715458 bytes sent via SQL*Net to client
73075209 bytes received via SQL*Net from client
10439278 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
156589154 rows processed


Thanks in Advance,

Tom Kyte
September 16, 2008 - 1:09 pm UTC

156,589,154

what possible use is that result set?


an autotrace is useless here - we'd need to see what the wait events were (tkprof+sql tracing). To see if your IO is "slow or fast"

but it looks pretty fast:

ops$tkyte%ORA10GR2> select (7*60*60+11*60) / 3297323  from dual;

(7*60*60+11*60)/3297323
-----------------------
             .007842726



you transferred 2.25 GB of data over the network, you read a ton of times from disk using multiblock IO, the worst case IO times are 0.008 seconds per read.

You are just doing *a lot of stuff*. Think about it for a minute...

Urgent

Sumon, October 01, 2008 - 7:53 am UTC

Hi Tom,

I have 3 tables that are not fully normalized,
but I urgently need the output as follows.

Required output:

ID CUSTOMER ID SUPPLIER SUM(TOT_COST)
--- -------- --- -------- ------------
11 name-11 22 name-22 10
11 name-11 22 name-22 14
11 name-11 33 name-33 18

Scripts:

create table users
(id number(2),
name varchar2(15));

create table user_ip
(id number(2),
ip varchar2(15));

create table cdr_summary
(sourceip varchar2(15),
destip varchar2(15),
tot_cost number);

Data:

insert into users values (11,'name-11');
insert into users values (22,'name-22');
insert into users values (33,'name-33');
insert into user_ip values (11,'202.68.75.11');
insert into user_ip values (11,'202.68.70.100');
insert into user_ip values (11,'192.168.170.110');
insert into user_ip values (11,'195.166.10.11');
insert into user_ip values (11,'199.169.110.112');
insert into user_ip values (22,'192.168.170.110');
insert into user_ip values (22,'190.166.111.111');
insert into user_ip values (22,'211.111.111.116');
insert into user_ip values (33,'202.116.175.105');
insert into user_ip values (33,'211.111.115.150');
insert into cdr_summary values ('202.68.75.11','190.166.110.110',2);
insert into cdr_summary values ('202.68.75.11','190.166.111.11',4);
insert into cdr_summary values ('202.68.75.11','190.166.111.111',5);
insert into cdr_summary values ('202.68.70.100','202.68.75.105',6);
insert into cdr_summary values ('202.68.70.100','202.168.175.115',7);
insert into cdr_summary values ('192.168.170.110','202.16.75.15',4);
insert into cdr_summary values ('192.168.170.110','202.116.175.105',9);
insert into cdr_summary values ('192.168.100.110','202.111.115.115',3);
insert into cdr_summary values ('192.168.100.110','202.115.105.15',4);
insert into cdr_summary values ('192.168.100.110','202.121.125.15',7);
insert into cdr_summary values ('195.166.10.11','212.11.15.5',8);
insert into cdr_summary values ('195.166.10.11','21.101.105.50',9);
insert into cdr_summary values ('190.160.100.111','211.111.115.150',3);
insert into cdr_summary values ('190.160.100.111','201.110.11.15',5);
insert into cdr_summary values ('190.160.100.111','201.110.11.16',6);
insert into cdr_summary values ('199.169.110.112','211.111.111.116',7);
commit;


Relations:

users.id = user_ip.id
user_ip.ip = cdr_summary.sourceip
and
cdr_summary.destip is also in user_ip.ip


Please help me.
Best regards.




Tom Kyte
October 01, 2008 - 12:47 pm UTC

you would sort of have to explain your output and how the inputs got to make that output...

and once you do that - you almost surely would be able to write this somewhat simple query (just a bunch of JOINS and an aggregate???)

 ID    CUSTOMER    ID    SUPPLIER    SUM(TOT_COST) 
---    --------    ---   --------    ------------ 
 11    name-11     22     name-22     10 
 11    name-11     22     name-22     14 
 11    name-11     33     name-33     18 


why does 11 relate to 22 twice.

the input data seems to be wanting to give this result:

ops$tkyte%ORA10GR2> select ip1.id, u1.name, ip2.id, u2.name, sum(c.tot_cost)
  2    from users u1, users u2, user_ip ip1, user_ip ip2, cdr_summary c
  3   where c.sourceip = ip1.ip
  4     and c.destip   = ip2.ip
  5     and ip1.id = u1.id
  6     and ip2.id = u2.id
  7   group by ip1.id, u1.name, ip2.id, u2.name
  8  /

        ID NAME             ID NAME     SUM(C.TOT_COST)
---------- -------- ---------- -------- ---------------
        22 name-22          33 name-33                9
        11 name-11          22 name-22               12
        11 name-11          33 name-33                9




Did Oracle take the right decision?

A reader, October 05, 2008 - 12:16 pm UTC

Hi Tom,
Below is a select statement joining a view built on monthly billing tables (V_CALL_DEBTS_ONLINE)
with a correlated subquery against the table BL_INVOICES_IN_RANGE.
The table BL_INVOICES_IN_RANGE is empty, but it took Oracle about 30 minutes to return no_data_found.

It is an 8.1.7.4 database with the rule-based optimizer.
I tried two things that might help the optimizer to "understand".
I collected statistics on the table BL_INVOICES_IN_RANGE, so dba_tables shows num_rows as zero;
in this way Oracle should know that the table is empty, and I hoped this would cause
Oracle to return no_data_found immediately.

I also tried to change the query by joining the tables instead of using exists, e.g.:
select ....
FROM V_CALL_DEBTS_ONLINE CALLs , BL_INVOICES_IN_RANGE bib
WHERE bib.invoice_no=calls.invoice_no
and ...

None of these changes made Oracle return the answer faster.

Could you please explain why Oracle does not return an answer immediately, given that the table is empty?

Thank You.


SELECT *
FROM V_CALL CALLs
WHERE CALLs.invoice_year= :"SYS_B_0"
AND CALLs.invoice_no>=:"SYS_B_1"
AND CALLs.invoice_no <=:"SYS_B_2"
AND EXISTS ( SELECT :"SYS_B_3"
FROM BL_INVOICES_IN_RANGE bib
WHERE bib.invoice_no=calls.invoice_no)

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.02       0.03          1          0          1          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1     52.94    2093.82     198874     243017          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3     52.96    2093.85     198875     243017          1          0

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 13

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  FILTER
 495072   VIEW V_CALL
 495072    UNION-ALL
     12     TABLE ACCESS BY INDEX ROWID TC_CALL0706
     12      INDEX RANGE SCAN (object id 10783984)
      9     TABLE ACCESS BY INDEX ROWID TC_CALL0806
      9      INDEX RANGE SCAN (object id 10788409)
      9     TABLE ACCESS BY INDEX ROWID TC_CALL0906
      9      INDEX RANGE SCAN (object id 10790641)
     16     TABLE ACCESS BY INDEX ROWID TC_CALL1006
     16      INDEX RANGE SCAN (object id 10792677)
    109     TABLE ACCESS BY INDEX ROWID TC_CALL1106
    109      INDEX RANGE SCAN (object id 10795324)
     84     TABLE ACCESS BY INDEX ROWID TC_CALL1206
     84      INDEX RANGE SCAN (object id 10798341)
     15     TABLE ACCESS BY INDEX ROWID TC_CALL0107
     15      INDEX RANGE SCAN (object id 10801329)
      6     TABLE ACCESS BY INDEX ROWID TC_CALL0207
      6      INDEX RANGE SCAN (object id 10804856)
     18     TABLE ACCESS BY INDEX ROWID TC_CALL0307
     18      INDEX RANGE SCAN (object id 10807387)
     23     TABLE ACCESS BY INDEX ROWID TC_CALL0407
     23      INDEX RANGE SCAN (object id 10813017)
     47     TABLE ACCESS BY INDEX ROWID TC_CALL0507
     47      INDEX RANGE SCAN (object id 10821842)
    100     TABLE ACCESS BY INDEX ROWID TC_CALL0607
    100      INDEX RANGE SCAN (object id 10825027)
      1     TABLE ACCESS BY INDEX ROWID TC_CALL0707
      1      INDEX RANGE SCAN (object id 10826356)
     65     TABLE ACCESS BY INDEX ROWID TC_CALL0807
     65      INDEX RANGE SCAN (object id 10835595)
    194     TABLE ACCESS BY INDEX ROWID TC_CALL0907
    194      INDEX RANGE SCAN (object id 10844999)
    354     TABLE ACCESS BY INDEX ROWID TC_CALL1007
    354      INDEX RANGE SCAN (object id 10848421)
    613     TABLE ACCESS BY INDEX ROWID TC_CALL1107
    613      INDEX RANGE SCAN (object id 10852855)
   1302     TABLE ACCESS BY INDEX ROWID TC_CALL1207
   1302      INDEX RANGE SCAN (object id 10856156)
  13070     TABLE ACCESS BY INDEX ROWID TC_CALL0108
  13070      INDEX RANGE SCAN (object id 10860536)
 479044     TABLE ACCESS BY INDEX ROWID TC_CALL0208
 479044      INDEX RANGE SCAN (object id 10864617)
      1     TABLE ACCESS BY INDEX ROWID TC_CALL0308
      1      INDEX RANGE SCAN (object id 10868270)
      1     TABLE ACCESS BY INDEX ROWID TC_CALL0408
      1      INDEX RANGE SCAN (object id 10872386)
      1     TABLE ACCESS BY INDEX ROWID TC_CALL0408
      1      INDEX RANGE SCAN (object id 10872386)
      1     TABLE ACCESS BY INDEX ROWID TC_CALL0408
      1      INDEX RANGE SCAN (object id 10872386)
  20700   INDEX RANGE SCAN (object id 10811740)

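For what it's worth, a sketch (not part of the original exchange, and untested here): under the rule-based optimizer, one thing sometimes worth trying is forcing the empty table to drive the join with an ORDERED hint, so the view is never expanded when no invoices match. Whether this helps depends on the view being mergeable:

```sql
-- hypothetical rewrite of the join form from the question; with /*+ ORDERED */
-- the tables are joined in FROM-clause order, so the empty table drives
SELECT /*+ ORDERED */ calls.*
  FROM BL_INVOICES_IN_RANGE bib, V_CALL calls
 WHERE calls.invoice_no   = bib.invoice_no
   AND calls.invoice_year = :"SYS_B_0"
   AND calls.invoice_no  >= :"SYS_B_1"
   AND calls.invoice_no  <= :"SYS_B_2";
```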
***********************************************************


Losing Data in Selecting Data

Thiago Rodrigues de Farias, October 17, 2008 - 6:20 pm UTC

Hi Tom,

I have a table with around 6,000,000 rows that is updated using SQL*Loader on the first day of every month.
The problem is:
I get two different results for column3 and column4 from these SELECTs:

SELECT SUM(column3), SUM(column4) FROM table;
It returns results for column3 and column4.

SELECT SUM(column3), SUM(column4) FROM (SELECT column1, column2, SUM(column3), SUM(column4) FROM table);
It returns values for column3 and column4 that are smaller than in the first SELECT. It looks like some data was lost in this case.

I tried different ways of indexing the table, of using SQL*Loader, partitioning, different SELECTs, reorganizing the tablespace, and nothing solved the problem.

Do you have any idea how to solve this?
The reason for the second SELECT is to check the result of the summed data that will be used to generate a txt file via UTL_FILE. I need the same result from the first SELECT and the second one.

Thanks for your help and sorry about some mistakes in my English (I am from Brazil and I am trying to improve this!)
Tom Kyte
October 18, 2008 - 9:51 pm UTC

ops$tkyte%ORA10GR2> SELECT SUM(column3), SUM(column4) FROM (SELECT column1, column2, SUM(column3), SUM(column4) FROM t);
SELECT SUM(column3), SUM(column4) FROM (SELECT column1, column2, SUM(column3), SUM(column4) FROM t)
                         *
ERROR at line 1:
ORA-00904: "COLUMN4": invalid identifier


ops$tkyte%ORA10GR2> SELECT SUM(column3), SUM(column4) FROM (SELECT column1, column2, SUM(column3) column3, SUM(column4) column4 FROM t);
SELECT SUM(column3), SUM(column4) FROM (SELECT column1, column2, SUM(column3) column3, SUM(column4) column4 FROM t)
                                               *
ERROR at line 1:
ORA-00937: not a single-group group function



English was fine.

SQL needs to be corrected so we can see what SQL we are really comparing...

Also, the plans used by EACH of YOUR real queries would be very useful

Urgent

Sumon, October 31, 2008 - 1:45 pm UTC

Hi Tom,
Thanks for your support.
I urgently need your help.
Please help with the query.

scripts:

create table users
(id number(2),
name varchar2(10));

create table userip
(id number(2),
ip varchar2(10));

create table sup_cost
(ip varchar2(10),
sup_ip varchar2(10),
bill_amt number);

insert into users values (11,'name-11');
insert into users values (22,'name-22');
insert into users values (33,'name-33');
insert into userip values (11,'ip-111');
insert into userip values (11,'ip-112');
insert into userip values (11,'ip-113');
insert into userip values (22,'ip-1111');
insert into userip values (22,'ip-1112');
insert into userip values (22,'ip-1115');
insert into userip values (22,'ip-1117');
insert into userip values (33,'ip-1113');
insert into userip values (33,'ip-1114');
insert into userip values (33,'ip-1116');
insert into sup_cost values ('ip-111','ip-1111',4);
insert into sup_cost values ('ip-111','ip-1112',5);
insert into sup_cost values ('ip-111','ip-1111',3);
insert into sup_cost values ('ip-111','ip-1112',7);
insert into sup_cost values ('ip-112','ip-1113',8);
insert into sup_cost values ('ip-112','ip-1114',6);
insert into sup_cost values ('ip-112','ip-1114',2);
insert into sup_cost values ('ip-112','ip-1115',4);
insert into sup_cost values ('ip-113','ip-1115',7);
insert into sup_cost values ('ip-113','ip-1116',2);
insert into sup_cost values ('ip-113','ip-1117',9);
insert into sup_cost values ('ip-113','ip-1117',8);
insert into sup_cost values ('ip-113','ip-1113',6);
commit;
/

/*

Relations:
------------------------------
users.id = userip.id
userip.ip = sup_cost.ip
also
sup_cost.sup_ip = userip.ip

*/

Output required as follows:

users.id user_ip.id tot_bill_amt
-------- ---------- ------------
11 22 47
11 33 24

Thanks in advance.

Sumon from Singapore

Girish Singhal, November 10, 2008 - 3:24 am UTC

I think there is a mismatch between the relations that exist between the three tables and the output that is expected from a SELECT query. Also, when you say there is a relation viz. "also sup_cost.sup_ip = userip.ip", then the data in the two columns sup_cost.sup_ip and userip.ip has to be the same, which is contrary to the sample data given in the question.

Cheers, Girish

Reader

Pat, November 18, 2008 - 4:04 pm UTC

I need to calculate a weighted average based on the previous cum_balance and previous wa. I was not able to come up with a solution using analytic functions.

Can it be done using analytics? Please let me know how to proceed.

Id = 0 is my beginning balance.
Weighted average is calculated starting from id >= 1

wa =
  case when cum_balance = 0 then rate
       else (  (prior cum_balance * prior wa)
             + (daily_balance * rate) ) / cum_balance
  end

i.e. roughly:

  (  (lag(cum_balance) * lag(wa))
   + (daily_balance * rate) ) / cum_balance


create table ab
(
id number,
dno number,
daily_balance number,
cum_balance number,
rate number,
wa number);

insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(0,0,0,232284,0,10.1956);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(1,1,0,232284,6.955,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(2,1,32000,264284,6.955,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(3,2,-30665,233619,6.955,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(4,2,7671,241290,6.955,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(5,3,-108363,132927,6.45,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(6,3,79991,212918,6.45,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(7,4,-147552,65366,6.305,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(8,4,53000,118366,6.305,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(9,5,-118366,0,6.38,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(10,5,-103548,-103548,6.38,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(11,5,103548,0,6.38,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(12,5,20700,20700,6.38,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(13,6,-20700,0,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(14,6,-167236,-167236,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(15,6,41000,-126236,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(16,7,42500,-83736,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(17,7,-97654,-181390,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(18,8,39500,-141890,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(19,8,-120780,-262670,6.405,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(20,9,262670,0,6.74,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(21,9,11089,11089,6.74,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(22,9,-16,11073,6.74,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(23,10,0,11073,6.22,0);
insert into ab(id,dno,daily_balance,cum_balance,rate,wa) values(24,10,74500,85573,6.22,0);

select * from ab;

ID DNO DAILY_BALANCE CUM_BALANCE RATE WA
0 0 0 232284 0 10.1956
1 1 0 232284 6.955 0
2 1 32000 264284 6.955 0
3 2 -30665 233619 6.955 0
4 2 7671 241290 6.955 0
5 3 -108363 132927 6.45 0
6 3 79991 212918 6.45 0
7 4 -147552 65366 6.305 0
8 4 53000 118366 6.305 0
9 5 -118366 0 6.38 0
10 5 -103548 -103548 6.38 0
11 5 103548 0 6.38 0
12 5 20700 20700 6.38 0
13 6 -20700 0 6.405 0
14 6 -167236 -167236 6.405 0
15 6 41000 -126236 6.405 0
16 7 42500 -83736 6.405 0
17 7 -97654 -181390 6.405 0
18 8 39500 -141890 6.405 0
19 8 -120780 -262670 6.405 0
20 9 262670 0 6.74 0
21 9 11089 11089 6.74 0
22 9 -16 11073 6.74 0
23 10 0 11073 6.22 0
24 10 74500 85573 6.22 0



This is the output I want for the column "wa":


id day daily_balance CUM_balance rate wa
0 0 0 232284 0 10.1956
1 1 0 232284 6.955 10.1956
2 1 32000 264284 6.955 9.8032
3 2 -30665 233619 6.955 10.1771
4 2 7671 241290 6.955 10.0746
5 3 -108363 132927 6.45 13.0295
6 3 79991 212918 6.45 10.5576
7 4 -147552 65366 6.305 20.1572
8 4 53000 118366 6.305 13.9547
9 5 -118366 0 6.38 6.3800
10 5 -103548 -103548 6.38 6.3800
11 5 103548 0 6.38 6.3800
12 5 20700 20700 6.38 6.3800
13 6 -20700 0 6.405 6.4050
14 6 -167236 -167236 6.405 6.4050
15 6 41000 -126236 6.405 6.4050
16 7 42500 -83736 6.405 6.4050
17 7 -97654 -181390 6.405 6.4050
18 8 39500 -141890 6.405 6.4050
19 8 -120780 -262670 6.405 6.4050
20 9 262670 0 6.74 6.7400
21 9 11089 11089 6.74 6.7400
22 9 -16 11073 6.74 6.7400
23 10 0 11073 6.22 6.7400
24 10 74500 85573 6.22 6.2873
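For what it's worth, here is a sketch of one way to carry the previous row's *computed* values forward, assuming Oracle 10g or later (the poster did not state a version) since it uses the MODEL clause; it works here because id is dense from 0 to 24:

```sql
-- a sketch, assuming 10g+: wa for each row needs the PREVIOUS row's computed
-- cum_balance and wa, which plain LAG() cannot supply, but a MODEL rule can
select id, dno, daily_balance, cum_balance, rate, round(wa, 4) wa
  from ab
 model
   dimension by (id)
   measures (dno, daily_balance, cum_balance, rate, wa)
   rules ( wa[id > 0] order by id =
             case when cum_balance[cv()] = 0 then rate[cv()]
                  else (  cum_balance[cv() - 1] * wa[cv() - 1]
                        + daily_balance[cv()]   * rate[cv()] )
                       / cum_balance[cv()]
             end );
```

The rule is evaluated in id order, so each row sees the already-updated wa of the row before it.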



sql query

sam, January 06, 2009 - 4:29 pm UTC

Tom:

How can I get a SQL query to get me the current active quota values for all customers?

This gives me the result for XYZ in February: I look at the highest effective date for February or earlier.

Now I want to modify this to get all customers.

SELECT * FROM QUOTA
WHERE year = 2009
AND cust_id = 'XYZ'
AND effective_Date = (select max(effective_date) from quota
where cust_id = 'XYZ'
and year = 2009
and effective_Date <= '01-FEB-2009' )



create table quota (
quota_id number(10),
year number(4),
cust_id varchar2(10),
media varchar2(5),
qty number,
effective_date date)
/

insert into quota values (1, 2009, 'XYZ' , 'CD' , 4000, '01-JAN-2009')
/
insert into quota values (2, 2009, 'XYZ' , 'DVD' , 3000, '01-JAN-2009')
/
insert into quota values (3, 2009, 'XYZ' , 'PR' , 6000, '01-JAN-2009')
/
insert into quota values (4, 2009, 'XYZ' , 'CD' , 4000, '01-MAR-2009')
/
insert into quota values (5, 2009, 'XYZ' , 'CD' , 4000, '01-MAR-2009')
/
insert into quota values (6, 2009, 'XYZ' , 'CD' , 4000, '01-MAR-2009')
/
insert into quota values (7, 2009, 'ABC' , 'CD' , 4000, '01-JAN-2009')
/
insert into quota values (8, 2009, 'ABC' , 'DVD' ,3000, '01-JAN-2009')
/
insert into quota values (9, 2009, 'ABC' , 'PR' , 6000, '01-JAN-2009')
/
insert into quota values (10, 2009, 'ABC' , 'CD' , 4000, '01-MAY-2009')
/
insert into quota values (11, 2009, 'ABC' , 'DVD' ,3000, '01-MAY-2009')
/
insert into quota values (12, 2009, 'ABC' , 'PR' , 6000, '01-MAY-2009')
/
Tom Kyte
January 07, 2009 - 9:14 am UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:122801500346829407

ops$tkyte%ORA9IR2> select *
  2    from (select quota.*, row_number() over (partition by cust_id order by effective_date DESC) rn
  3            from quota
  4                   where effective_date <= sysdate)
  5   where rn = 1;

  QUOTA_ID       YEAR CUST_ID    MEDIA        QTY EFFECTIVE         RN
---------- ---------- ---------- ----- ---------- --------- ----------
         7       2009 ABC        CD          4000 01-JAN-09          1
         1       2009 XYZ        CD          4000 01-JAN-09          1

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select cust_id,
  2         max(qty) keep (dense_rank first order by effective_date desc)qtr,
  3         max(effective_date) keep (dense_rank first order by effective_date desc) edate
  4    from quota
  5   where effective_date <= sysdate
  6   group by cust_id
  7  /

CUST_ID           QTR EDATE
---------- ---------- ---------
ABC              6000 01-JAN-09
XYZ              6000 01-JAN-09

query

A reader, January 07, 2009 - 6:55 pm UTC

Tom:

Your query only retrieved the current quota for one media, "CD". I want the current-period quota for each customer's media (CD, DVD, PR, etc).

thank you
Tom Kyte
January 08, 2009 - 8:46 am UTC

umm, sam, read your question:

.............
How can I get a SQL query to get me the current active quota values for all customers?

This gives me the result for XYZ in February: I look at the highest effective date for February or
earlier.

Now I want to modify this to get all customers.
......................


now, you tell me where you see any reference to media in there - anywhere??????

You (after asking as many questions on as many pages as you have) cannot take what was presented, understand what it does/how it does it and modify it?

First, you must phrase the question properly. I *presume* the question is:

for each CUSTOMER and MEDIA combination - show the most "current" record.

Now, given that if the question was "for each CUSTOMER combination - show the most 'current' record" you know the answer, you have the template - it is pretty *trivial* to modify that to satisfy your NEW question.


I need you to

a) read the linked to page, understand the technique
b) develop the query yourself to prove to yourself that you understand what you are doing and how it is accomplished.



query

A reader, January 08, 2009 - 6:18 pm UTC

Tom:

You are right, I should have thought about it.

It seems to me all I have to do is partition by year, cust_id, media and I get the last active set for this month by customer, year, media. Correct? I also want to search records for the current year only, not previous years.

If I want to create a VIEW for this query and pass the effective date as a value/variable instead of sysdate, how would I do that?

  1  select * from
  2  (
  3  select quota.*, row_number() over (partition by year,cust_id,media order by effective_date DESC) rn
  4  from quota
  5   where effective_date <= to_date('01-MAY-2009','DD-MON-YYYY')
  6  and year = to_number(to_char(to_date('01-MAY-2009','DD_MON-YYYY'),'YYYY'))
  7  ) where rn=1
  8* order by cust_id,effective_date
SQL> /

  QUOTA_ID       YEAR CUST_ID    MEDIA        QTY EFFECTIVE         RN
---------- ---------- ---------- ----- ---------- --------- ----------
        10       2009 ABC        CD          5000 01-MAY-09          1
        11       2009 ABC        DVD         5100 01-MAY-09          1
        12       2009 ABC        PR          5200 01-MAY-09          1
         4       2009 XYZ        CD          3000 01-MAR-09          1
         5       2009 XYZ        DVD         3100 01-MAR-09          1
         6       2009 XYZ        PR          3200 01-MAR-09          1

6 rows selected.

Tom Kyte
January 08, 2009 - 8:08 pm UTC

You will not be able to do the view trick here, not easily.

see
http://asktom.oracle.com/pls/ask/search?p_string=%22parameterized+view%22
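A sketch of that technique applied here (all names hypothetical): a small package sets the "as of" date in an application context, and the view reads it back with SYS_CONTEXT:

```sql
-- hypothetical names throughout; this is the "parameterized view" pattern
create or replace context quota_ctx using quota_pkg;

create or replace package quota_pkg as
  procedure set_as_of( p_date in date );
end;
/
create or replace package body quota_pkg as
  procedure set_as_of( p_date in date ) is
  begin
    -- stash the date as an unambiguous string in the context
    dbms_session.set_context( 'quota_ctx', 'as_of',
                              to_char( p_date, 'yyyymmdd' ) );
  end;
end;
/
create or replace view quota_v as
select *
  from (select quota.*,
               row_number() over (partition by year, cust_id, media
                                  order by effective_date desc) rn
          from quota
         where effective_date <= to_date( sys_context('quota_ctx','as_of'), 'yyyymmdd' )
           and year = to_number( substr( sys_context('quota_ctx','as_of'), 1, 4 ) ))
 where rn = 1;

-- usage:
--   exec quota_pkg.set_as_of( to_date('01-MAY-2009','DD-MON-YYYY') );
--   select * from quota_v order by cust_id, effective_date;
```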


New year 2009 date issue

atul, January 14, 2009 - 1:54 am UTC

Hello TOM,

We have been having some issues with custom code since the beginning of the year, and I was wondering if there has been some type of change or patch applied to the database that could be the root cause (which would be OK).

The issue surrounds dates and how things are being interpreted.

Issue #1.
There have been comparisons in the code reading: l_current_date between nvl(l_low_date, '01-JAN-1901') and nvl(l_high_date, '31-DEC-4712'). Apparently this has worked for a while, and since the beginning of the year it fails. I can see why this would fail, but I cannot understand why it ever worked at all.

Issue #2.
In a database trigger, an update would happen where a varchar field was being set to sysdate. Since 2006, this would set the field to values like 31-DEC-08. Starting in 2009, it sets values like '01/02/2009'. Again, the developer never specified a format, so I can see the value going in one way or another, but why, all of a sudden, did it start behaving differently?



As these items come up, I am putting a format around them (as there always should have been), but I am wondering what changed. Any insight would be greatly appreciated.


Tom Kyte
January 16, 2009 - 4:19 pm UTC

issue #1, you are not using a date format, I have no idea what '01-jan-1901' is in a date using YOUR date format.

ops$tkyte%ORA11GR1> select to_char( to_date( '01-jan-1901' ), 'dd-mon-yyyy' ) from dual;

TO_CHAR(TO_
-----------
01-jan-1901

ops$tkyte%ORA11GR1> alter session set nls_date_format = 'dd-mon-yyhh';

Session altered.

ops$tkyte%ORA11GR1> select to_char( to_date( '01-jan-1901' ), 'dd-mon-yyyy' ) from dual;

TO_CHAR(TO_
-----------
01-jan-2019



never ever, never ever rely on defaults - why would you not explicitly do that?? use to_date, use a format that KNOWS how to convert your string unambiguously

oh, yeah, define "it fails.". What does "it fails." mean. To me, it means "it gets an error", but I don't see an error. To you it probably means "it does not seem to return the right data", but I don't see an example???


issue 2 - BINGO someone - not us, but someone YOU work with - changed the default date format

and you were relying on it.

*you did this*, *you or someone you work with*

Heck, the application could be doing this.


We did not change the date format (not since version 7.something when we changed the default from yy to rr for year 2000 related issues)

Here is what I mean

atul, January 20, 2009 - 9:29 am UTC

Here is an example of a trigger:
================================
create or replace trigger xxcbg_periods_of_service_biu
before INSERT OR UPDATE on per_periods_of_service
for each row
declare
-- local variables here
begin
IF :new.attribute20 IS NULL and :new.actual_termination_date is not null THEN
:new.attribute20 := sysdate;
END IF;
IF ( nvl(:old.actual_termination_date,to_date('12314712', 'mmddyyyy')) <> nvl(:new.actual_termination_date,to_date('12314712', 'mmddyyyy'))
AND :new.actual_termination_date is null
) THEN
:new.attribute20 := null;
END IF;
end xxcbg_periods_of_service_biu;
================================
From 2006 through Dec 31 2008, it would store the current date in the field ATTRIBUTE20 as, e.g., 01-JAN-07 or 31-MAR-08. Basically, DD-MON-RR. Starting in 2009, the data was stored as 01/02/2009, 01/03/2009, basically, mm/dd/yyyy.

I made a change to the trigger so it would revert back to the format that everything utilizing the ATTRIBUTE20 was originally geared towards:
================================
create or replace trigger xxcbg_periods_of_service_biu
before INSERT OR UPDATE on per_periods_of_service
for each row
declare
-- local variables here
begin
IF :new.attribute20 IS NULL and :new.actual_termination_date is not null THEN
:new.attribute20 := to_char(sysdate,'DD-MON-RR');
END IF;
IF ( nvl(:old.actual_termination_date,to_date('12314712', 'mmddyyyy')) <> nvl(:new.actual_termination_date,to_date('12314712', 'mmddyyyy'))
AND :new.actual_termination_date is null
) THEN
:new.attribute20 := null;
END IF;
end xxcbg_periods_of_service_biu;
================================

My question is why this could happen only starting in 2009.
The DBAs claim that the NLS format has not changed and is still DD-MON-RR.


so?
Tom Kyte
January 20, 2009 - 10:20 am UTC

give full example, no idea what actual_termination_date is.

that code cannot store mm/dd/yyyy in attribute20, it is not possible.

Insert a character into a string at specified locations.

Rajeshwaran, Jeyabal, January 20, 2009 - 5:20 pm UTC


Tom,

Is there a way in Oracle 10g to automatically insert a character into a string at specified locations without using a substring/concatenation command,
such as inserting hyphens/dashes into a phone number? Example: 2024561234.
I want to insert a hyphen at the 2nd position from the last character: 20245612-34.
I know how to do it using SUBSTR and concatenating hyphens. Can this be achieved using Oracle regular expressions, without substring and string concatenation?

Insert a character into a string at specified locations

Rajeshwaran, Jeyabal, January 20, 2009 - 10:19 pm UTC


Thanks a Ton Tom!!! I got it..

scott@OPSTAR> SELECT REGEXP_REPLACE ('2024561234','^(.*)(.{2})$','\1-\2') AS RESULT
2 FROM DUAL
3 /

RESULT
-----------
20245612-34
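A related sketch (not part of the original exchange): the same REGEXP_REPLACE approach with three capture groups can format the full number in the usual US style:

```sql
-- sketch: split a 10-digit string into 3-3-4 groups and rejoin with hyphens
SELECT REGEXP_REPLACE('2024561234', '^(.{3})(.{3})(.{4})$', '\1-\2-\3') AS RESULT
  FROM DUAL;
-- 202-456-1234
```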

update statement by comparing two table

ychalla, January 26, 2009 - 10:34 pm UTC

hi tom

When I try to compare two tables and update the first table by comparing a column in the second table, it displays the following error:
single-row subquery returns more than one row

update oe_lines_iface_all A
set A.inventory_item= (SELECT B.inv_item_concat_segs
FROM MTL_CROSS_REFERENCES_V B
WHERE A.inventory_item = B.cross_reference)

thanks

Tom Kyte
January 28, 2009 - 8:18 am UTC

that means that b.cross_reference must not be unique. Think about it.

If the error says "subquery returns more than one row", and the subquery is

select something from t where t.column = ?


then apparently t.column must not be unique and hence - we have no idea which row to use to apply the update - so - we fail.
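To see which values are causing it, one could run something like this (a sketch against the view named in the question):

```sql
-- sketch: list the cross_reference values that occur more than once,
-- i.e. the ones that make the scalar subquery return multiple rows
SELECT cross_reference, COUNT(*)
  FROM mtl_cross_references_v
 GROUP BY cross_reference
HAVING COUNT(*) > 1;
```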

update statement

reader, January 29, 2009 - 5:03 am UTC

hi tom


there are three conditions to update table t1:
first one, if both id values are null, give error;
second one is both id are having twice then also error;
third one if having same values for both then update the column which i specified there.

the query i wrote is wrong but what i am expecting you can understand:

UPDATE table t1
SET t1.A='error'
WHERE
(select count(t2.a) from table t2
where (t2.ID = t1.id) is null
t1.A='error' (select count(t2.a) from table t2
where (t2.B = t1.B)>1
SET t1.A=((select max(t2.a) from table t2
where (t2ID = t1.id)
Tom Kyte
January 30, 2009 - 2:06 pm UTC

you expected wrong. (and bear in mind, it takes three letters to spell "you").

"If both id values are null" - what id values? huh?? "where (t2.id = t1.id) is null" is not SQL. I'm not sure at all how to interpret that. I don't know what that means at all


"both id are having twice then also error" - but they you appear to be looking at a column "b" - not ID?????

"if having same values for both " - for both what??


do you really mean:

if there is more than one row with ID is null, mark all of those rows 'error' in column A.

if there is more than one row with the same ID, mark all of those rows 'error' as well.

if so,

update table
   set a = 'error' 
 where rowid in (select rid
                   from (select rowid rid, count(*) over (partition by id) cnt
                           from table)
                  where cnt > 1 );



select rowid rid, count(*) over (partition by id) cnt from table

will generate a set of every rowid in the table and a count of the records in that table that have the same id (null or otherwise)

where cnt > 1

will keep only the interesting ones


the update will update them.



update statement

reader, January 29, 2009 - 6:36 am UTC

hi tom
look at this query
this statement is working but i have another condition here that is the cross reference column has duplicate values if they have duplicate values that row should not be update

how to write a condition the column have same number twice


update oe_lines_iface_All A
set A.inventory_item = ( select max( B.inv_item_concat_segs)
from kpl_cross_references_v B
where B.cross_reference=A.inventory_item )
where
A.inventory_item is not null
and exists ( select B.cross_reference
from kpl_cross_references_v B
where B.cross_reference =A.inventory_item )


thanks in adnvance

Tom Kyte
January 30, 2009 - 2:13 pm UTC

...
this statement is working but i have another condition here that is the cross
reference column has duplicate values if they have duplicate values that row
should not be update
.....

TOM-00123 Unable to parse.


I did not understand what you tried to say.

update statments

A reader, February 03, 2009 - 1:25 am UTC

Is it possible to execute two update statements in a single query?
Tom Kyte
February 03, 2009 - 10:16 am UTC

sometimes.


update t set x = 5 where y = 6;
update t set x = 1 where y = 2;


could (should) be written as:

update t set x = case when y = 6 then 5 when y = 2 then 1 end 
where y in (6,2);



But if you mean "can I update two tables in a single update", the answer is no - only one table at a time..

A reader, February 03, 2009 - 11:08 pm UTC

hi tom

thanks.
I have one more doubt:
if I want to update two or three columns in a single table by comparing another table,
can you suggest how to write that query?
Tom Kyte
February 04, 2009 - 10:34 am UTC

pretty vague, but in general:

update 
( select t1.a t1_a, t1.b t1_b, t1.c t1_c,
         t2.a t2_a, t2.b t2_b, t2.c t2_c
    from t1, t2
   where t1.key = t2.key )
set t1_a = t2_a, t1_b = t2_b, t1_c = t2_c
/


You update a join in general
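One caveat worth noting (an addition, not from the original answer): updating a join like this only works when the joined-to table is key-preserved, i.e. t2.key must be unique, typically via a primary key or unique constraint; otherwise Oracle raises ORA-01779. A sketch, using the same hypothetical t1/t2 names:

```sql
-- sketch: make t2 key-preserved so the join update is allowed
alter table t2 add constraint t2_pk primary key (key);

update
( select t1.a t1_a, t2.a t2_a
    from t1, t2
   where t1.key = t2.key )
set t1_a = t2_a
/
```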

map pk id

Pauline, February 05, 2009 - 11:09 pm UTC

Tom,
 
I have a somewhat unusual question. The SQL in your answer from February 4, 2009 - 10am
requires a primary key on both tables. In my case, one of the tables does not have a primary key. Let me describe
my situation. We scanned the character set and got a scan.err file which logged many lossy rows. Now
we need a temp table holding owner_name, table_name, column_name, data_rowid selected from
csmig.csmv$errors; we also need fields to store the primary key value and the data of the column
corresponding to data_rowid in that table. For example,

ORACLE@dev1 > col DATA_ROWID format a18
ORACLE@dev1 > set linesize 120
ORACLE@dev1 > select * from test;

OWNER_NAME                     TABLE_NAME                     COLUMN_NAME                    DATA_ROWID
------------------------------ ------------------------------ ------------------------------ ------------------
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAACl2FAAh
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAACl2FAAg
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAACluOAA5
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAACltRAAO
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAAClsqABl
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAAClpBABj
CORE                           PARTIES                        PARTY_NM                       ABBZGcAAIAACi8iAAz
CORE                           PARTIES                        TERMS_NOTES_TXT                ABBZGcAAIAAClntAAI
CORE                           PARTIES                        IDENT_DOC_CITY_NM              ABBZGcAAIAACltRAAO

9 rows selected.

ORACLE@dev1 > alter table test add pk_id number;

Table altered.

ORACLE@dev1 > alter table test add column_data varchar2(4000);

Table altered.

For this parties table, I know its PK column is called pty_id, so I tried
something like

update ( select test.pk_id      t1_pk_id ,
                parties.pty_id  t2_pk_id            
        from test , core.parties    where data_rowid=parties.ROWID )
        set t1_pk_id=t2_pk_id;

I got

ERROR at line 4:
ORA-01779: cannot modify a column which maps to a non key-preserved table

because the test table's data_rowid is not unique and cannot be made a PK. But data_rowid seems to be
the only field I can join to the base table parties to get the PK value.

Now I do:

ORACLE@dev1 > declare
  2       lv_row_count        NUMBER := 0;
  3  
  4  BEGIN
  5     FOR rec IN (
  6      select pty_id,party_nm,rowid 
  7      from core.parties
  8      where core.parties.rowid in (select data_rowid from test where column_name='PARTY_NM')
  9     )
 10  LOOP 
 11     
 12              UPDATE test
 13              SET pk_id = rec.pty_id
 14                ,column_data=rec.party_nm
 15              where data_rowid =rec.rowid;
 16          
 17      END LOOP;
 18      --
 19      COMMIT;
 20  END;
 21  / 

PL/SQL procedure successfully completed.

ORACLE@dev1 > col column_data format a50 
ORACLE@dev1 > select pk_id ,column_data,column_name,DATA_ROWID from test;

     PK_ID COLUMN_DATA                                        COLUMN_NAME                    DATA_ROWID
---------- -------------------------------------------------- ------------------------------ ------------------
 168660060 mr alex trÿ¿sman                                  PARTY_NM                       ABBZGcAAIAACl2FAAh
 168660047 Mr Alex Trÿsman                                    PARTY_NM                       ABBZGcAAIAACl2FAAg
 162520206 Ms Jackie O¿Halloran                                PARTY_NM                       ABBZGcAAIAACluOAA5
 161480019 Mr Pavel¿ Zejfartü                                  PARTY_NM                       ABBZGcAAIAACltRAAO
 160880608 Signora Orietta D¿Oria                              PARTY_NM                       ABBZGcAAIAAClsqABl
 157419127 Mr Michel L¿Héritier                                PARTY_NM                       ABBZGcAAIAAClpBABj
  74299641 Contessa Ilaria Borletti Dell¿Acqua                 PARTY_NM                       ABBZGcAAIAACi8iAAz
                                                               TERMS_NOTES_TXT                ABBZGcAAIAAClntAAI
 161480019 Mr Pavel¿ Zejfartü                                  IDENT_DOC_CITY_NM              ABBZGcAAIAACltRAAO

9 rows selected.

I still have some problems:

1. The row for column IDENT_DOC_CITY_NM was also updated with the PARTY_NM data, because both rows
   share the same data_rowid.
2. Many tables have this lossy-data issue, and I don't want to manually find the PK column for each
   one and put it in the select statement (in this case, pty_id).


How can I easily get the PK id from the original table into table test? (The column data has the same issue as the PK id.)

Please help.

Thanks.








Tom Kyte
February 06, 2009 - 3:27 pm UTC

you can use merge instead

instead of:

update ( select test.pk_id      t1_pk_id ,
                parties.pty_id  t2_pk_id            
        from test , core.parties    where data_rowid=parties.ROWID )
        set t1_pk_id=t2_pk_id;



use:

merge into test
using parties
on (test.data_rowid = parties.rowid)
when matched then update set pk_id = pty_id;
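For readers who want to replay the idea without an Oracle instance, here is a minimal sketch of the same pattern in Python/SQLite. A plain `rid` text column (my invention) stands in for Oracle's ROWID, and since SQLite has no MERGE, a correlated-subquery UPDATE plays the role of `when matched then update`. Table and column names follow the thread; the data is made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "parties" is the base table; "rid" stands in for Oracle's ROWID.
cur.execute("CREATE TABLE parties (rid TEXT PRIMARY KEY, pty_id INTEGER, party_nm TEXT)")
cur.execute("CREATE TABLE test (column_name TEXT, data_rowid TEXT, pk_id INTEGER)")

cur.executemany("INSERT INTO parties VALUES (?,?,?)",
                [("AAh", 168660060, "mr alex tresman"),
                 ("AAg", 168660047, "Mr Alex Tresman")])
cur.executemany("INSERT INTO test (column_name, data_rowid) VALUES (?,?)",
                [("PARTY_NM", "AAh"), ("PARTY_NM", "AAg"), ("PARTY_NM", "no-match")])

# Correlated-subquery UPDATE: the portable equivalent of
#   merge into test using parties on (test.data_rowid = parties.rowid)
#   when matched then update set pk_id = pty_id;
cur.execute("""
    UPDATE test
       SET pk_id = (SELECT p.pty_id FROM parties p WHERE p.rid = test.data_rowid)
     WHERE EXISTS (SELECT 1 FROM parties p WHERE p.rid = test.data_rowid)
""")
conn.commit()

rows = cur.execute("SELECT data_rowid, pk_id FROM test ORDER BY data_rowid").fetchall()
```

The EXISTS guard mirrors MERGE's "when matched" semantics: rows with no parent are left alone instead of being nulled out.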


Pauline, February 06, 2009 - 4:50 pm UTC

Tom,
Thanks so much. Using merge is very helpful.

Regards.

Pauline

update statement

A reader, February 08, 2009 - 10:00 pm UTC

hi tom
i want to update a table by comparing a view check this table and view once can u suggest how to do it

table:


item error_flag

1. 123456789 -
2. 234567891 -
3. 456789123 -
4 - -


view: inventory_id Item

1 123456789 A
2 234567891 B
3 345678912 C
4 123456789 D
5 123456789 E
6 456789123 F


Now I want to update the table's item column by comparing it with the view. Here are my conditions:

1. If the same item number appears on multiple rows of the view, I don't want to update; for example,
   you can see the view has three rows (A, D, E) for one item number. In that case I need the
   error_flag column set to 'Y' and the item column in the table should not be updated.
2. If there is exactly one match, it should update.
3. If the matching item is null, error_flag should also get 'Y'.


thanks in advance
Tom Kyte
February 09, 2009 - 6:56 pm UTC

"U" is dead
http://en.wikipedia.org/wiki/U_of_Goryeo

now what??? what can we do for YOU?

hah, when you read the page you used to post this "review", you missed the very top of it somehow?




FAIR WARNING

if you use 'IM' speak (you know, u wrt like this w plz and thnks), I will make fun of you. I will point out that it is not very professional or readable.
FAIR WARNING

If your followup requires a response that might include a query, you had better supply very very simple create tables and insert statements. I cannot create a table and populate it for each and every question. The SMALLEST create table possible (no tablespaces, no schema names, just like I do in my examples for you)



ok,

merge into target
using (select id, count(*) cnt, max(item) item
         from source
        group by id) s
on (target.id = s.id)
when matched then update
   set item       = case when s.cnt = 1 and s.item is not null then s.item end,
       error_flag = case when s.cnt > 1 or s.item is null then 'Y' else error_flag end;


might do it... But, I cannot test it - so, look at the concept (generate a set out of your view that has a single row per id with a count of rows for that id and the max(item) for that id. if cnt = 1 and item is not null, update item, else set error_flag = y)
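The concept is easy to verify on any engine: collapse the source to one row per id carrying count(*) and max(item), then drive both SET expressions from it. A sketch in Python/SQLite with invented data; correlated subqueries stand in for the MERGE, which SQLite lacks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, item TEXT, error_flag TEXT)")
cur.execute("CREATE TABLE source (id INTEGER, item TEXT)")

cur.executemany("INSERT INTO target (id, item) VALUES (?,?)",
                [(1, None), (2, None), (3, None)])
# id 1 is ambiguous (three source rows), id 2 matches exactly once,
# id 3 matches a row whose item is NULL.
cur.executemany("INSERT INTO source VALUES (?,?)",
                [(1, "A"), (1, "D"), (1, "E"), (2, "B"), (3, None)])

# Same shape as the MERGE: an aggregate per matching id (cnt, max(item))
# drives both SET expressions; unmatched ids are left untouched.
cur.execute("""
    UPDATE target
       SET item = (SELECT CASE WHEN COUNT(*) = 1 AND MAX(s.item) IS NOT NULL
                               THEN MAX(s.item) END
                     FROM source s WHERE s.id = target.id),
           error_flag = (SELECT CASE WHEN COUNT(*) > 1 OR MAX(s.item) IS NULL
                                     THEN 'Y' ELSE target.error_flag END
                           FROM source s WHERE s.id = target.id)
     WHERE EXISTS (SELECT 1 FROM source s WHERE s.id = target.id)
""")
conn.commit()
rows = cur.execute("SELECT id, item, error_flag FROM target ORDER BY id").fetchall()
```

Only id 2 gets its item filled in; ids 1 and 3 are flagged 'Y' and left without an item, matching the stated rules.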

A reader, February 13, 2009 - 1:45 am UTC

sorry for the mistake I made

Strange execution plan

Salman Syed, February 16, 2009 - 6:28 pm UTC

Tom,

I have the following query with a good execution plan.

select o.opportunity_id /*test 10*/ from opportunity o
inner join opportunity_status_history osh
on o.opportunity_id = osh.opportunity_id
and osh.opp_status_id not in (1,3,4,5)
and osh.is_latest = 1
where o.opportunity_id = 73214

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          7          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.01          0          7          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 533

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  NESTED LOOPS (cr=7 pr=0 pw=0 time=102 us)
      1   TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=4 pr=0 pw=0 time=50 us)
      1    INDEX UNIQUE SCAN SYS_C0014150 (cr=3 pr=0 pw=0 time=20 us)(object id 63666)
      1   INDEX RANGE SCAN I_OSH_JOIN_5 (cr=3 pr=0 pw=0 time=39 us)(object id 277487)


However, when I change the 'not in' to 'in', an extra table lookup is added to the execution plan:

select o.opportunity_id /*test 8*/ from opportunity o
inner join opportunity_status_history osh
on o.opportunity_id = osh.opportunity_id
and osh.is_latest = 1
and osh.opp_status_id in (1,3,4,5)
where o.opportunity_id = 73214

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          2          8          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.01          2          8          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 533

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  NESTED LOOPS (cr=8 pr=2 pw=0 time=230 us)
      1   TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=4 pr=0 pw=0 time=45 us)
      1    INDEX UNIQUE SCAN SYS_C0014150 (cr=3 pr=0 pw=0 time=20 us)(object id 63666)
      0   TABLE ACCESS BY INDEX ROWID OPPORTUNITY_STATUS_HISTORY (cr=4 pr=2 pw=0 time=176 us)
      1    INDEX RANGE SCAN I_OSH_JOIN_4 (cr=3 pr=2 pw=0 time=163 us)(object id 277488)

Can you suggest how I can go about figuring out why this execution plan changes? The column is a foreign key to another table and is NOT NULL.

Any insight will be much appreciated.

Thanks!
Tom Kyte
February 16, 2009 - 6:58 pm UTC

table creates and indexes would be useful, I like to be able to see it myself.


the autotrace traceonly explain would be really useful - it would show us the PREDICATES applied at each step

More information

Salman Syed, February 16, 2009 - 7:20 pm UTC

Tom,

Thanks for the quick reply.

1. The two indexes are as follows:

create index I_OSH_JOIN_4 on OPPORTUNITY_STATUS_HISTORY (LABELCOL, OPPORTUNITY_ID, IS_LATEST)

create index I_OSH_JOIN_5 on OPPORTUNITY_STATUS_HISTORY (LABELCOL, OPPORTUNITY_ID, IS_LATEST, OPP_STATUS_ID)


2. The cardinality of the data in opp_status_id is very low (10 different possible values).

3. Both these tables have a simple 'labelcol = sys_context' style VPD policy.

4. Here is the auto trace output:

SELECT STATEMENT, GOAL = ALL_ROWS Cost=6 Cardinality=1 Bytes=26 Operation=SELECT STATEMENT CPU cost=46410
NESTED LOOPS Cost=6 Cardinality=1 Bytes=26 Operation=NESTED LOOPS CPU cost=46410
TABLE ACCESS BY INDEX ROWID Object owner=SGAPP Object name=OPPORTUNITY Cost=3 Cardinality=1 Bytes=10 Operation=TABLE ACCESS Filter predicates="LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')) CPU cost=23664
INDEX UNIQUE SCAN Object owner=SGAPP Object name=SYS_C0014150 Cost=2 Cardinality=1 Operation=INDEX CPU cost=15293
TABLE ACCESS BY INDEX ROWID Object owner=SGAPP Object name=OPPORTUNITY_STATUS_HISTORY Cost=3 Cardinality=1 Bytes=16 Operation=TABLE ACCESS Filter predicates="OPP_STATUS_ID"=1 OR "OPP_STATUS_ID"=3 OR "OPP_STATUS_ID"=4 OR "OPP_STATUS_ID"=5 CPU cost=22745
INDEX RANGE SCAN Object owner=SGAPP Object name=I_OSH_JOIN_4 Cost=2 Cardinality=1 Operation=INDEX CPU cost=15293


Here are the table DDL:
CREATE TABLE "SGAPP"."OPPORTUNITY" 
   ( "OPPORTUNITY_ID" NUMBER NOT NULL ENABLE, 
       "LABELCOL" NUMBER DEFAULT sys_context ('company_label','current_label'), 
PRIMARY KEY ("OPPORTUNITY_ID"));


CREATE TABLE "SGAPP"."OPPORTUNITY_STATUS_HISTORY" 
   ( "OPP_STATUS_HISTORY_ID" NUMBER NOT NULL ENABLE, 
 "OPPORTUNITY_ID" NUMBER NOT NULL ENABLE, 
 "OPP_STATUS_ID" NUMBER NOT NULL ENABLE, 
 "ACHIEVED_ON" TIMESTAMP (6) WITH LOCAL TIME ZONE NOT NULL ENABLE, 
 "ACHIEVED_BY" NUMBER NOT NULL ENABLE, 
 "IS_LATEST" NUMBER DEFAULT 1 NOT NULL ENABLE, 
 "LABELCOL" NUMBER DEFAULT sys_context ('company_label','current_label'), 
  PRIMARY KEY ("OPP_STATUS_HISTORY_ID"),
    FOREIGN KEY ("OPPORTUNITY_ID")
   REFERENCES "SGAPP"."OPPORTUNITY" ("OPPORTUNITY_ID") ENABLE 
 -- FOREIGN KEY ("OPP_STATUS_ID")
 --  REFERENCES "SGAPP"."OPPORTUNITY_STATUS" ("OPP_STATUS_ID") ENABLE, 
  );



I have commented out the FK reference as that is purely data integrity.




Tom Kyte
February 16, 2009 - 9:37 pm UTC

ugh, 9i, autotrace isn't good enough, you'd need to use dbms_xplan way back in that release.

to reproduce I'll need to know about how many rows in each table

and use this script to get the plans FOR BOTH queries:

delete from plan_table;
explain plan for 'each query in turn';
select * from table(dbms_xplan.display);

Explain plans

Salman Syed, February 17, 2009 - 9:41 am UTC

For the 'NOT IN' query:

Plan hash value: 3513326173

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |     1 |   212 |     5   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |              |     1 |   212 |     5   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY  |     1 |   196 |     3   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0014150 |     1 |       |     2   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN           | I_OSH_JOIN_6 |     1 |    16 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')))
   3 - access("OPPORTUNITY_ID"=73214)
   4 - access("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')) AND
              "OPPORTUNITY_ID"=73214 AND "IS_LATEST"=1)
       filter("OPP_STATUS_ID"<>4 AND "IS_LATEST"=1 AND "OPP_STATUS_ID"<>3 AND
              "OPP_STATUS_ID"<>1 AND "OPP_STATUS_ID"<>5)

For the 'IN' query:

Plan hash value: 3417167185

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |     1 |   212 |     6   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                            |     1 |   212 |     6   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY                |     1 |   196 |     3   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0014150               |     1 |       |     2   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY_STATUS_HISTORY |     1 |    16 |     3   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN          | I_OSH_JOIN_4               |     1 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')))
   3 - access("OPPORTUNITY_ID"=73214)
   4 - filter("OPP_STATUS_ID"=1 OR "OPP_STATUS_ID"=3 OR "OPP_STATUS_ID"=4 OR "OPP_STATUS_ID"=5)
   5 - access("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')) AND
              "OPPORTUNITY_ID"=73214 AND "IS_LATEST"=1)

Please note that we are on 10.2.0.3.

I was using PL/SQL developer to give you the explain plan so that might be the reason for the discrepancy.

The number of rows in the opportunity table is 388,326.

The number of rows in opportunity_status_history is 603,776.

Thanks!

Tom Kyte
February 17, 2009 - 10:00 am UTC

*use the code button* in the future, this is entirely unreadable in a proportional font...


and I think I see a problem, a fly in the ointment.

You obviously must have VPD enabled on this table (dbms_rls), your indexes start with labelcol.

So, tell us - what is the REAL QUERY we need to be testing, what is the predicate we need to ADD to this in order to reproduce.

VPD

Salman Syed, February 17, 2009 - 10:16 am UTC

Tom,

I mentioned the VPD policy in my first post but I should have been more explicit.

So, the query will be rewritten as:
select o.opportunity_id /*test 8*/ from 
(select * from opportunity where labelcol = 5) o
    inner join 
(select * from opportunity_status_history where labelcol = 5) osh
    on o.opportunity_id = osh.opportunity_id
    and osh.is_latest = 1
    and osh.opp_status_id in (1,3,4,5)
    where o.opportunity_id = 73214


And I still get this plan:

Plan hash value: 3417167185
 
-----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |     1 |    26 |     6   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                            |     1 |    26 |     6   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY                |     1 |    10 |     3   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0014150               |     1 |       |     2   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY_STATUS_HISTORY |     1 |    16 |     3   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN          | I_OSH_JOIN_4               |     1 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("LABELCOL"=5)
   3 - access("OPPORTUNITY"."OPPORTUNITY_ID"=73214)
   4 - filter("OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=1 OR 
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=3 OR "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=4 
              OR "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=5)
   5 - access("LABELCOL"=5 AND "OPPORTUNITY_STATUS_HISTORY"."OPPORTUNITY_ID"=73214 AND 
              "OPPORTUNITY_STATUS_HISTORY"."IS_LATEST"=1)

Tom Kyte
February 17, 2009 - 2:19 pm UTC

just for future reference....

This is a test case:

/*

drop TABLE "OPPORTUNITY" cascade constraints;
CREATE TABLE "OPPORTUNITY"
   (    "OPPORTUNITY_ID" NUMBER NOT NULL ENABLE,
          "LABELCOL" NUMBER DEFAULT sys_context ('company_label','current_label'),
PRIMARY KEY ("OPPORTUNITY_ID"));

insert into "OPPORTUNITY"
select level, mod(level,10)
from dual
connect by level <= 388326;


drop TABLE "OPPORTUNITY_STATUS_HISTORY" ;
CREATE TABLE "OPPORTUNITY_STATUS_HISTORY"
   (    "OPP_STATUS_HISTORY_ID" NUMBER NOT NULL ENABLE,
    "OPPORTUNITY_ID" NUMBER NOT NULL ENABLE,
    "OPP_STATUS_ID" NUMBER NOT NULL ENABLE,
    "ACHIEVED_ON" TIMESTAMP (6) WITH LOCAL TIME ZONE NOT NULL ENABLE,
    "ACHIEVED_BY" NUMBER NOT NULL ENABLE,
    "IS_LATEST" NUMBER DEFAULT 1 NOT NULL ENABLE,
    "LABELCOL" NUMBER DEFAULT sys_context ('company_label','current_label'),
     PRIMARY KEY ("OPP_STATUS_HISTORY_ID"),
    FOREIGN KEY ("OPPORTUNITY_ID")
      REFERENCES "OPPORTUNITY" ("OPPORTUNITY_ID") ENABLE
    -- FOREIGN KEY ("OPP_STATUS_ID")
    --  REFERENCES "OPPORTUNITY_STATUS" ("OPP_STATUS_ID") ENABLE,
     );

insert into OPPORTUNITY_STATUS_HISTORY
select rownum, OPPORTUNITY_ID, mod(rownum,10), sysdate, 1, mod(rownum,5), mod(rownum,10)
from (select * from opportunity union all select * from opportunity );

exec dbms_stats.gather_table_stats( user, 'OPPORTUNITY', cascade=>true );
exec dbms_stats.gather_table_stats( user, 'OPPORTUNITY_STATUS_HISTORY', cascade=>true );
create index I_OSH_JOIN_4 on OPPORTUNITY_STATUS_HISTORY (LABELCOL, OPPORTUNITY_ID, IS_LATEST);
create index I_OSH_JOIN_5 on OPPORTUNITY_STATUS_HISTORY (LABELCOL, OPPORTUNITY_ID, IS_LATEST, OPP_STATUS_ID);
*/

set linesize 1000
set autotrace traceonly explain

select o.opportunity_id /*test 8*/ from
(select * from opportunity where labelcol = 5) o
    inner join
(select * from opportunity_status_history where labelcol = 5) osh
    on o.opportunity_id = osh.opportunity_id
    and osh.is_latest = 1
    and osh.opp_status_id not in (1,3,4,5)
    where o.opportunity_id = 73214
/
select o.opportunity_id /*test 8*/ from
(select * from opportunity where labelcol = 5) o
    inner join
(select * from opportunity_status_history where labelcol = 5) osh
    on o.opportunity_id = osh.opportunity_id
    and osh.is_latest = 1
    and osh.opp_status_id in (1,3,4,5)
    where o.opportunity_id = 73214
/

set autotrace off


that is what I'm expecting - it reproduces. That is the hallmark of a test case - something you give to someone and it

a) runs
b) runs to completion
c) has everything needed to reproduce
d) is as small as possible
e) yet has EVERYTHING needed to reproduce

Somehow, when I see it on my screen (as opposed to reading bits and pieces of disjointed information in between dozens of other questions), it just jumps right out. Meaning - sure, you mentioned a day or two ago "vpd", but hey, that was a day or two ago, up a couple of pages and hidden in the details that were not detailed enough. It helps to have it all in one concise place (and notice - NO SCHEMA's, I don't have your schemas, or your tablespaces and such - a test case should be easy to run)

Anyway - it is using a different index, so it is no surprise that it has to go to the table for that one. Somehow the fact that it used a different index just jumped out at me when I ran it from start to finish - this is not a surprise....

See, in the first one, it has a predicate of:

   4 - access("LABELCOL"=5 AND "OPPORTUNITY_STATUS_HISTORY"."OPPORTUNITY_ID"=73214
              AND "OPPORTUNITY_STATUS_HISTORY"."IS_LATEST"=1)
       filter("OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>1 AND
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>3 AND
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>4 AND
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>5)


that it is evaluating - it can use the index to do the ACCESS part, and then further use the index to do the filter.

In the IN query, the predicate is:

   4 - filter("OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=1 OR
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=3 OR "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=4
              OR "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=5)
   5 - access("LABELCOL"=5 AND "OPPORTUNITY_STATUS_HISTORY"."OPPORTUNITY_ID"=73214 AND
              "OPPORTUNITY_STATUS_HISTORY"."IS_LATEST"=1)


so it is using the index on (labelcol,opportunity_id,is_latest)


It must be that the cost of using either one is "the same, or so close as to be a tie" - for an inlist using "where in", we are sort of programmed to look for inlist iteration (hit the index multiple times) or skip that and filter. It was deemed "a tie": hit the index 5 times, or hit the other index once and the table 5 times (actually, the latter is probably cheaper, since hitting the index five times for the IN would start at the top and walk down the tree five times).


I would question the reasoning for having:

ops$tkyte%ORA10GR2> create index I_OSH_JOIN_4 on OPPORTUNITY_STATUS_HISTORY (LABELCOL, OPPORTUNITY_ID, IS_LATEST);
ops$tkyte%ORA10GR2> create index I_OSH_JOIN_5 on OPPORTUNITY_STATUS_HISTORY (LABELCOL, OPPORTUNITY_ID, IS_LATEST, OPP_STATUS_ID);


drop index I_OSH_JOIN_4, it doesn't look excessively useful given you have join_5 (funny, I have a theory about what your index names mean: they were added one at a time, reactively, to 'tune' individual queries after 'design' happened. That is the best way to end up with an index per query, instead of a small set of indexes that can do it all).

and then you'll have:

Execution Plan
----------------------------------------------------------
Plan hash value: 3375384269

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |     1 |    21 |     7   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |              |     1 |    21 |     7   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY  |     1 |     7 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0034776 |     1 |       |     1   (0)| 00:00:01 |
|   4 |   INLIST ITERATOR            |              |       |       |            |          |
|*  5 |    INDEX RANGE SCAN          | I_OSH_JOIN_5 |     1 |    14 |     5   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("LABELCOL"=5)
   3 - access("OPPORTUNITY"."OPPORTUNITY_ID"=73214)
   5 - access("LABELCOL"=5 AND "OPPORTUNITY_STATUS_HISTORY"."OPPORTUNITY_ID"=73214
              AND "OPPORTUNITY_STATUS_HISTORY"."IS_LATEST"=1 AND
              ("OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=1 OR
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=3 OR
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=4 OR
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"=5))



Plan for NOT IN query

Salman Syed, February 17, 2009 - 10:19 am UTC

Sorry, pressed submit too soon.

Here is the plan for the 'NOT IN' query when I manually rewrite it with the predicates:


explain plan for 
select o.opportunity_id /*test 8*/ from 
(select * from sgapp.opportunity where labelcol = 5) o
    inner join 
(select * from sgapp.opportunity_status_history where labelcol = 5) osh
    on o.opportunity_id = osh.opportunity_id
    and osh.is_latest = 1
    and osh.opp_status_id not in (1,3,4,5)
    where o.opportunity_id = 73214


Plan hash value: 3513326173
 
---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |     1 |    26 |     5   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |              |     1 |    26 |     5   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OPPORTUNITY  |     1 |    10 |     3   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0014150 |     1 |       |     2   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN           | I_OSH_JOIN_6 |     1 |    16 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("LABELCOL"=5)
   3 - access("OPPORTUNITY"."OPPORTUNITY_ID"=73214)
   4 - access("LABELCOL"=5 AND "OPPORTUNITY_STATUS_HISTORY"."OPPORTUNITY_ID"=73214 
              AND "OPPORTUNITY_STATUS_HISTORY"."IS_LATEST"=1)
       filter("OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>4 AND 
              "OPPORTUNITY_STATUS_HISTORY"."IS_LATEST"=1 AND 
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>3 AND 
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>1 AND 
              "OPPORTUNITY_STATUS_HISTORY"."OPP_STATUS_ID"<>5)


Distinct values with order

Slavko Brkic, March 06, 2009 - 2:11 am UTC

Hi Tom,

I have the following problem. I want to display the result of an exact match on one column, ordered by a weight column; then append the result of an exact match on a second column, ordered by weight; then the result of an exact match on a third column, ordered by weight; and finally the result from the first column without an exact match, ordered by weight. I do not want duplicates. I will give you the script, which will probably explain what I am trying to do.

Here is the example table
create table test (artist varchar2(50), song varchar2(50), album varchar2(50), weight number);

Some data:
insert into test values ('Prince', 'Purple rain', 'I would die 4 U', 21345);
insert into test values ('Prince', 'Come', 'Race', 50);
insert into test values ('Prince', 'Planet Earth', 'Planet earth', 86345);
insert into test values ('Prince', 'I would die 4 U', 'Prince', 56215);
insert into test values ('Lundaland', 'Prince', 'Lundaland', 98);
insert into test values ('Quarkspace', 'Prince', 'The hidden moon', 257);
insert into test values ('Ketil Lien', 'Dark Princess', 'The White Peak', 385);

Here is a select which almost gives me the result. I have these 4 unions in order to add order_search (I want to display the first select first in my result set, then the second select, and so on). The problem is that I end up displaying the "same" row twice (I know it is not literally the same row, since I have added the order_search column). Is there some way to do what I want that is not too complex? Am I missing something really obvious?
select sub.* 
  from (
      select ta.*, 1 order_search from test ta where upper(artist) = 'PRINCE'
        union
      select tb.*, 2 order_search from test tb where upper(song) = 'PRINCE'
        union
      select tc.*, 3 order_search from test tc where upper(album) = 'PRINCE'
        union
      select td.*, 4 order_search from test td where upper(song) like '%PRINCE%'
       ) sub 
 order by order_search, weight;

ARTIST            SONG            ALBUM                WEIGHT     ORDER_SEARCH           
----------------- --------------- -------------------- ---------- ------------
Prince            Come            Race                 50         1                      
Prince            Purple rain     I would die 4 U      21345      1                      
Prince            Planet Earth    Planet earth         86345      1                      
Lundaland         Prince          Lundaland            98         2                      
Quarkspace        Prince          The hidden moon      257        2                      
Lundaland         Prince          Lundaland            98         4                      
Quarkspace        Prince          The hidden moon      257        4                      
Ketil Lien        Dark Princess   The White Peak       385        4                      

8 rows selected

BR
Tom Kyte
March 06, 2009 - 10:19 am UTC

ops$tkyte%ORA11GR1> select case when upper(artist) = 'PRINCE' then 1
  2                when upper(song) = 'PRINCE' then 2
  3                when upper(album) = 'PRINCE' then 3
  4                else 4
  5         end oc, x.*
  6    from (
  7  select * from test ta where upper(artist) = 'PRINCE' union
  8  select * from test tb where upper(song) = 'PRINCE' union
  9  select * from test tc where upper(album) = 'PRINCE' union
 10  select * from test td where upper(song) like '%PRINCE%'
 11         ) X
 12  order by oc, weight
 13  /

        OC ARTIST          SONG                 ALBUM               WEIGHT
---------- --------------- -------------------- --------------- ----------
         1 Prince          Come                 Race                    50
         1 Prince          Purple rain          I would die 4 U      21345
         1 Prince          I would die 4 U      Prince               56215
         1 Prince          Planet Earth         Planet earth         86345
         2 Lundaland       Prince               Lundaland               98
         2 Quarkspace      Prince               The hidden moon        257
         4 Ketil Lien      Dark Princess        The White Peak         385

7 rows selected.
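Tom's CASE-over-UNION rewrite is plain SQL and can be replayed anywhere; here it is in Python/SQLite against the poster's sample data, reproducing the 7-row, duplicate-free result. (SQLite's UPPER and LIKE behave the same as Oracle's for this ASCII data.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (artist TEXT, song TEXT, album TEXT, weight INTEGER)")
cur.executemany("INSERT INTO test VALUES (?,?,?,?)", [
    ("Prince", "Purple rain", "I would die 4 U", 21345),
    ("Prince", "Come", "Race", 50),
    ("Prince", "Planet Earth", "Planet earth", 86345),
    ("Prince", "I would die 4 U", "Prince", 56215),
    ("Lundaland", "Prince", "Lundaland", 98),
    ("Quarkspace", "Prince", "The hidden moon", 257),
    ("Ketil Lien", "Dark Princess", "The White Peak", 385),
])

# UNION de-duplicates the rows; the outer CASE recomputes the search rank once
# per surviving row, so a row matching several branches keeps its best (lowest) rank.
rows = cur.execute("""
    SELECT CASE WHEN UPPER(artist) = 'PRINCE' THEN 1
                WHEN UPPER(song)   = 'PRINCE' THEN 2
                WHEN UPPER(album)  = 'PRINCE' THEN 3
                ELSE 4 END AS oc, x.*
      FROM (SELECT * FROM test WHERE UPPER(artist) = 'PRINCE' UNION
            SELECT * FROM test WHERE UPPER(song)   = 'PRINCE' UNION
            SELECT * FROM test WHERE UPPER(album)  = 'PRINCE' UNION
            SELECT * FROM test WHERE UPPER(song) LIKE '%PRINCE%') x
     ORDER BY oc, weight
""").fetchall()
```

The Lundaland and Quarkspace rows, which matched both the song-exact and song-LIKE branches in the original query, now appear once each at rank 2.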

Distinct values with order

Slavko Brkic, March 09, 2009 - 3:45 am UTC

But of course,
Thanks a lot
Tom Kyte
March 09, 2009 - 3:55 am UTC

actually, I want to revisit this, since:

ops$tkyte%ORA11GR1> select case when upper(artist) = 'PRINCE' then 1
2 when upper(song) = 'PRINCE' then 2
3 when upper(album) = 'PRINCE' then 3
4 else 4
5 end oc, x.*
6 from (
7 select * from test ta where upper(artist) = 'PRINCE' union
8 select * from test tb where upper(song) = 'PRINCE' union
9 select * from test tc where upper(album) = 'PRINCE' union
10 select * from test td where upper(song) like '%PRINCE%'
11 ) X
12 order by oc, weight
13 /


upper(song) like '%PRINCE%' will full scan anyway - just use OR and NOT union in the inner query. There is no need to make the first three where predicates efficient, since the 4th is not really optimizable (unless you use a text index and substring-index it).

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:37336026927381#37360199308431

Distinct values with order

Slavko Brkic, March 09, 2009 - 6:02 am UTC

Hi Tom,
In order to simplify the example I excluded the text index (and used LIKE to simulate text-index functionality). We are using a text index (instead of like '%xxx%') and we have also created upper() function-based indexes on each of these columns to avoid full table scans.

BR
Tom Kyte
March 09, 2009 - 12:57 pm UTC

ok, if the 4th query was a contains - use the UNION.

And - in the future - that is a relevant bit of information :) not something to leave out...

Distinct values with order

Bernhard Schwarz, March 09, 2009 - 6:46 am UTC

Hi Tom,

since FTS is done anyway, why not just skip the UNIONs and instead use:

SQL> select case when upper(artist) = 'PRINCE' then 1
  2                when upper(song) = 'PRINCE' then 2
  3                when upper(album) = 'PRINCE' then 3
  4                else 4
  5         end oc, x.*
  6    from tk01 x
  7  order by oc, weight;

   OC ARTIST          SONG            ALBUM               WEIGHT
----- --------------- --------------- --------------- ----------
    1 Prince          Come            Race                    50
    1 Prince          Purple rain     I would die 4 U      21345
    1 Prince          I would die 4 U Prince               56215
    1 Prince          Planet Earth    Planet earth         86345
    2 Lundaland       Prince          Lundaland               98
    2 Quarkspace      Prince          The hidden moon        257
    4 Ketil Lien      Dark Princess   The White Peak         385

7 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 4275516387

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     7 |   658 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY     |      |     7 |   658 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| TK01 |     7 |   658 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
        595  bytes sent via SQL*Net to client
        246  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          7  rows processed


To me, this execution plan sure looks better than the one with the UNIONs:

SQL> select case when upper(artist) = 'PRINCE' then 1
  2                when upper(song) = 'PRINCE' then 2
  3                when upper(album) = 'PRINCE' then 3
  4                else 4
  5         end oc, x.*
  6    from (
  7  select * from tk01 ta where upper(artist) = 'PRINCE' union
  8  select * from tk01 tb where upper(song) = 'PRINCE' union
  9  select * from tk01 tc where upper(album) = 'PRINCE' union
 10  select * from tk01 td where upper(song) like '%PRINCE%'
 11         ) X
 12  order by oc, weight;

   OC ARTIST          SONG            ALBUM               WEIGHT
----- --------------- --------------- --------------- ----------
    1 Prince          Come            Race                    50
    1 Prince          Purple rain     I would die 4 U      21345
    1 Prince          I would die 4 U Prince               56215
    1 Prince          Planet Earth    Planet earth         86345
    2 Lundaland       Prince          Lundaland               98
    2 Quarkspace      Prince          The hidden moon        257
    4 Ketil Lien      Dark Princess   The White Peak         385

7 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2094238496

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |    10 |   940 |    17  (30)| 00:00:01 |
|   1 |  SORT ORDER BY        |      |    10 |   940 |    17  (30)| 00:00:01 |
|   2 |   VIEW                |      |    10 |   940 |    16  (25)| 00:00:01 |
|   3 |    SORT UNIQUE        |      |    10 |   940 |    16  (82)| 00:00:01 |
|   4 |     UNION-ALL         |      |       |       |            |          |
|*  5 |      TABLE ACCESS FULL| TK01 |     4 |   376 |     3   (0)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL| TK01 |     2 |   188 |     3   (0)| 00:00:01 |
|*  7 |      TABLE ACCESS FULL| TK01 |     1 |    94 |     3   (0)| 00:00:01 |
|*  8 |      TABLE ACCESS FULL| TK01 |     3 |   282 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(UPPER("ARTIST")='PRINCE')
   6 - filter(UPPER("SONG")='PRINCE')
   7 - filter(UPPER("ALBUM")='PRINCE')
   8 - filter(UPPER("SONG") LIKE '%PRINCE%')

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         28  consistent gets
          0  physical reads
          0  redo size
        595  bytes sent via SQL*Net to client
        246  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          7  rows processed

Berny

Tom Kyte
March 09, 2009 - 1:01 pm UTC

well, originally I kept it as 4 because I was thinking "use index accesses for each", then revised when I noticed the %xxx%, and now we have just revised back to 4 since we can do 4 index probes.

Yes, if we had to full scan once, as I said above, just full scan one and use OR. If we can use indexes for each of the 4, use 4 unioned together.

How to tune this?

Chinni, March 20, 2009 - 6:01 am UTC

Hi Tom,
I have a query which takes nearly 10 minutes to run on the first attempt; subsequent attempts run in less than 10 seconds. Could you please help me identify the issue?

First attempt:
Statistics
----------------------------------------------------------
        205  recursive calls
          0  db block gets
     182998  consistent gets
      79000  physical reads
          0  redo size
          0  workarea executions - onepass
          0  workarea executions - multipass
          0  parse time cpu
          0  parse count (failures)
         19  execute count  -- Why 19 executions, what is this number?
          2  rows processed
 
Elapsed: 00:08:59:61
 

Second attempt:

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     182942  consistent gets
         11  physical reads
          0  redo size
          0  workarea executions - onepass
          0  workarea executions - multipass
          0  parse time cpu
          0  parse count (failures)
          2  execute count  -- here also 2 attempts corresponds to what?
          2  rows processed
 
Elapsed: 00:00:07:21

Query:
SELECT COUNT(*) FROM (
SELECT site, acc_id, portfolio,
       CASE
          WHEN flag = 'B'
             THEN (amount * -1) / fxr_price
          ELSE (amount) / fxr_price
       END
  FROM (SELECT p.acc_sit_id AS site, p.acc_id AS acc_id,
               p.portfolio AS portfolio, t.tra_buy_sell_flag AS flag,
               NVL (t.tra_amt_settled, 0) AS amount,
               NVL (t.tra_ccy_code_payment, 'EUR') AS currency,
               CASE
                  WHEN t.tra_ccy_code_payment = 'EUR'
                   OR t.tra_ccy_code_payment IS NULL
                     THEN 1
                  ELSE (SELECT f.fxr_price
                          FROM cris.fxr f
                         WHERE f.fxr_sit_id = 'TE'
                           AND f.fxr_base_ccy_code = 'EUR'
                           AND f.fxr_ccy_code = t.tra_ccy_code_payment
                           AND f.fxr_as_of_date =
                                  (SELECT MAX (y.fxr_as_of_date)
                                     FROM cris.fxr y
                                    WHERE y.fxr_sit_id = 'TE'
                                      AND y.fxr_base_ccy_code =
                                                           f.fxr_base_ccy_code
                                      AND y.fxr_ccy_code = f.fxr_ccy_code))
               END AS fxr_price
          FROM cris.tra t, dcsw_owner.pec_acc_local p
         WHERE p.acc_type = 'LC'
           AND p.status = 'P'
           AND p.pec_cli_id = '4'
           AND p.ex_intraday_cash = 'N'
           AND t.tra_sit_id = p.cli_sit_id
           AND t.tra_transact_type NOT IN ('L', 'Y')
           AND t.tra_trt_trans_type IN ('RVP', 'DVP', '1RVP', '1DVP')
           AND (   t.tra_acs_id = p.acc_id
                OR (    t.tra_acs_id =
                                    SUBSTR (p.acc_id, 1, LENGTH (p.acc_id) - 3)
                    AND t.tra_ccy_code_payment =
                                    SUBSTR (p.acc_id, LENGTH (p.acc_id) - 2,
                                            3)
                   )
               )
           AND t.tra_buy_sell_flag IN ('B', 'S')
           AND t.tra_amt_settled > 0
           AND t.tra_act_settl_dat >= (SELECT sit_eod
                                         FROM cris.sit
                                        WHERE sit_corr_sit_id = p.cli_sit_id))
)


PLAN_TABLE_OUTPUT

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 77 | 912 | | |
| 1 | SORT AGGREGATE | | 1 | 77 | | | |
| 2 | CONCATENATION | | | | | | |
|* 3 | FILTER | | | | | | |
|* 4 | TABLE ACCESS BY GLOBAL INDEX ROWID| TRA | 1 | 48 | 871 | ROWID | ROW L |
| 5 | NESTED LOOPS | | 1 | 77 | 873 | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | PEC_ACC_LOCAL | 1 | 29 | 2 | | |
|* 7 | INDEX RANGE SCAN | PEC_ACC_LOCAL_PK | 4 | | 2 | | |
|* 8 | INDEX RANGE SCAN | TRA_IND_CP | 2534 | | 44 | | |
|* 9 | TABLE ACCESS FULL | SIT | 1 | 8 | 4 | | |
|* 10 | FILTER | | | | | | |
|* 11 | TABLE ACCESS BY GLOBAL INDEX ROWID| TRA | 1 | 48 | 871 | ROWID | ROW L |
| 12 | NESTED LOOPS | | 1 | 77 | 873 | | |
|* 13 | TABLE ACCESS BY INDEX ROWID | PEC_ACC_LOCAL | 1 | 29 | 2 | | |
|* 14 | INDEX RANGE SCAN | PEC_ACC_LOCAL_PK | 4 | | 2 | | |
|* 15 | INDEX RANGE SCAN | TRA_NU08 | 2534 | | 44 | | |
|* 16 | TABLE ACCESS FULL | SIT | 1 | 8 | 4 | | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("SYS_ALIAS_1"."TRA_ACT_SETTL_DAT">= (SELECT /*+ */ "SIT"."SIT_EOD" FROM "CRIS"."SIT" "SIT"
WHERE "SIT"."SIT_CORR_SIT_ID"=:B1))
4 - filter("SYS_ALIAS_1"."TRA_TRANSACT_TYPE"<>'L' AND "SYS_ALIAS_1"."TRA_TRANSACT_TYPE"<>'Y' AND
("SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='1DVP' OR "SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='1RVP' OR
"SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='DVP' OR "SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='RVP') AND
("SYS_ALIAS_1"."TRA_BUY_SELL_FLAG"='B' OR "SYS_ALIAS_1"."TRA_BUY_SELL_FLAG"='S') AND
"SYS_ALIAS_1"."TRA_AMT_SETTLED">0)
6 - filter("SYS_ALIAS_2"."ACC_TYPE"='LC' AND "SYS_ALIAS_2"."EX_INTRADAY_CASH"='N')
7 - access("SYS_ALIAS_2"."PEC_CLI_ID"=4 AND "SYS_ALIAS_2"."STATUS"='P')
filter("SYS_ALIAS_2"."STATUS"='P')
8 - access("SYS_ALIAS_1"."TRA_SIT_ID"="SYS_ALIAS_2"."CLI_SIT_ID" AND
"SYS_ALIAS_1"."TRA_ACS_ID"=SUBSTR("SYS_ALIAS_2"."ACC_ID",1,LENGTH("SYS_ALIAS_2"."ACC_ID")-3) AND
"SYS_ALIAS_1"."TRA_CCY_CODE_PAYMENT"=SUBSTR("SYS_ALIAS_2"."ACC_ID",LENGTH("SYS_ALIAS_2"."ACC_ID")-2,3))
9 - filter("SIT"."SIT_CORR_SIT_ID"=:B1)
10 - filter("SYS_ALIAS_1"."TRA_ACT_SETTL_DAT">= (SELECT /*+ */ "SIT"."SIT_EOD" FROM "CRIS"."SIT" "SIT"
WHERE "SIT"."SIT_CORR_SIT_ID"=:B1))
11 - filter("SYS_ALIAS_1"."TRA_TRANSACT_TYPE"<>'Y' AND "SYS_ALIAS_1"."TRA_TRANSACT_TYPE"<>'L' AND
"SYS_ALIAS_1"."TRA_AMT_SETTLED">0 AND ("SYS_ALIAS_1"."TRA_BUY_SELL_FLAG"='B' OR
"SYS_ALIAS_1"."TRA_BUY_SELL_FLAG"='S') AND (LNNVL("SYS_ALIAS_1"."TRA_ACS_ID"=SUBSTR("SYS_ALIAS_2"."ACC_ID",
1,LENGTH("SYS_ALIAS_2"."ACC_ID")-3)) OR LNNVL("SYS_ALIAS_1"."TRA_CCY_CODE_PAYMENT"=SUBSTR("SYS_ALIAS_2"."AC
C_ID",LENGTH("SYS_ALIAS_2"."ACC_ID")-2,3))))
13 - filter("SYS_ALIAS_2"."EX_INTRADAY_CASH"='N' AND "SYS_ALIAS_2"."ACC_TYPE"='LC')
14 - access("SYS_ALIAS_2"."PEC_CLI_ID"=4 AND "SYS_ALIAS_2"."STATUS"='P')
filter("SYS_ALIAS_2"."STATUS"='P')
15 - access("SYS_ALIAS_1"."TRA_SIT_ID"="SYS_ALIAS_2"."CLI_SIT_ID" AND
"SYS_ALIAS_1"."TRA_ACS_ID"="SYS_ALIAS_2"."ACC_ID")
filter("SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='1DVP' OR "SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='1RVP' OR
"SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='DVP' OR "SYS_ALIAS_1"."TRA_TRT_TRANS_TYPE"='RVP')
16 - filter("SIT"."SIT_CORR_SIT_ID"=:B1)

Note: cpu costing is off

9i database:
SHARED pool : 1.46875 GB
buffer CACHE :12 GB
pga_aggregate_target :0.4 GB

As there is no parsing involved(?), it should be due to the physical reads. How can I reduce these physical reads, given my db cache size is 12 GB? Could you please suggest?
Tom Kyte
March 24, 2009 - 10:32 am UTC

wasn't it sort of obvious?

      79000  physical reads

versus

         11  physical reads



You did 79,000 physical IO's, which got into the cache apparently.

ops$tkyte%ORA10GR2> select 540/79000 from dual;

 540/79000
----------
.006835443


each IO took an average of 0.0068 seconds to complete (roughly 540 seconds of elapsed time over 79,000 reads) - really fast, but when you do a lot of anything, it adds up.


as for the execute count, see the recursive calls? You hard parsed the first time, we ran some SQL to figure out how to run your SQL - then second time we didn't have to.

Thanks .. but how to reduce these physical reads

Chinni, March 24, 2009 - 3:03 pm UTC

Thanks Tom.
Yes, I can see that the physical reads are the main difference. But could you suggest some techniques to reduce them?
My problem is that whenever the client executes this query with different parameters, it runs as if it were executing for the first time and takes a very long time. Because of this, they get a timeout error from the application.

I cannot tell them to re-execute the query. Can you help me get around this?

This table is 34 GB. My db cache size is 12 GB (Oracle 9i). I tried changing the query before but could not find anything to tune in it. All the joins have been thoroughly checked. What other options do I have, instead of changing the query, to reduce the time spent on these physical reads? Kindly suggest.

..
as for the execute count, see the recursive calls? You hard parsed the first time, we ran some SQL to figure out how to run your SQL - then second time we didn't have to.
...
I could not understand this, could you please elaborate? If there was a hard parse in the first run, then what do these statistics indicate in both runs?
0 parse time cpu
0 parse count (failures)

Thank You
Tom Kyte
March 29, 2009 - 10:54 am UTC

run the query twice, that worked :) That is one way to reduce the physical IO's.


look for a way to reduce the work being done, I don't have your tables, your data, your tkprofs, I don't know what each step in the plan was doing, I don't have your indexing schema, Heck - I don't even have your question (I have your query, but I don't know what QUESTION you were trying to answer - I don't like to tune queries, I'd rather have the question)



as for


..
as for the execute count, see the recursive calls? You hard parsed the first time, we ran some SQL to figure out how to run your SQL - then second time we didn't have to.
...
I could not understand this, could you please elaborate? If there was a hard parse in the first run



do you know what a hard parse is? the first time a query is executed, when it is not in the shared pool, we have to parse and optimize it. In order to optimize/parse it we might have to run some sql ourselves. Recursive sql is simply sql that is executed in order to run YOUR sql. So, the first time, we ran some sql to parse/optimize your sql - the second time we didn't have to.




the parse statistics (parse time cpu, parse count (failures)) are just that, statistics. The parse happened fast, with little cpu, and it did not fail (failures: zero)
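
One way to watch this from your own session (a sketch, assuming you have SELECT access to the v$ dynamic performance views) is to check the session parse statistics before and after each run - 'parse count (hard)' should rise only on the first execution:

```sql
-- parse statistics for the current session; run your query,
-- re-run this, and compare the deltas between executions
select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name in ('parse count (total)', 'parse count (hard)');
```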

Reader, March 29, 2009 - 5:00 pm UTC

create table tst_ord
(dt_key number,
company_id varchar2(10),
person_name varchar2(100),
item varchar2(20),
side varchar2(4),
submittime timestamp(6),
closetime timestamp(6)
);

insert into tst_ord
values
(20081117,'ABC','Peter','Bag','B',to_timestamp('11/17/2008 10:27:16.765000 AM','mm/dd/yyyy hh:mi:ss.ff AM')
,to_timestamp('11/17/2008 10:27:18.739000 AM','mm/dd/yyyy hh:mi:ss.ff AM')
);

insert into tst_ord
values
(20081117,'ABC','Peter','Bag','B',to_timestamp('11/17/2008 10:21:41.018000 AM','mm/dd/yyyy hh:mi:ss.ff AM')
,to_timestamp('11/17/2008 10:21:41.354000 AM','mm/dd/yyyy hh:mi:ss.ff AM')
);

insert into tst_ord
values
(20081203,'ABC','Peter','Bag','S',to_timestamp('12/03/2008 12:36:33.810000 PM','mm/dd/yyyy hh:mi:ss.ff AM')
,to_timestamp('12/03/2008 12:36:33.810000 PM','mm/dd/yyyy hh:mi:ss.ff AM')
);

insert into tst_ord
values
(20081203,'ABC','Peter','Bag','S',to_timestamp('12/03/2008 3:51:33.083000 PM','mm/dd/yyyy hh:mi:ss.ff AM')
,to_timestamp('12/03/2008 3:55:59.316000 PM','mm/dd/yyyy hh:mi:ss.ff AM')
);

insert into tst_ord
values
(20090319,'ABC','Peter','Bag','S',to_timestamp('03/19/2009 4:51:33.083000 PM','mm/dd/yyyy hh:mi:ss.ff AM')
,to_timestamp('03/19/2009 4:55:59.316000 PM','mm/dd/yyyy hh:mi:ss.ff AM')
);

commit;

CREATE OR REPLACE FUNCTION DWH_SYS.f_timestamp2secs(p_timestamp timestamp) RETURN NUMBER AS
-- take a timestamp and return the number of seconds since 1970
BEGIN
  IF p_timestamp IS NOT NULL THEN
    RETURN extract(day from (p_timestamp - to_date('01011970', 'ddmmyyyy'))) * 24 * 3600 +
           extract(hour from (p_timestamp - to_date('01011970', 'ddmmyyyy'))) * 3600 +
           extract(minute from (p_timestamp - to_date('01011970', 'ddmmyyyy'))) * 60 +
           extract(second from (p_timestamp - to_date('01011970', 'ddmmyyyy')));
  ELSE
    RETURN null;
  END IF;
END f_timestamp2secs;
/

select o.dt_key,o.company_id,o.person_name,o.item,o.side
,sum(f_timestamp2secs(closetime) - f_timestamp2secs(submittime)) duration
from tst_ord o
group by o.dt_key,o.company_id,o.person_name,o.item,o.side
order by dt_key;

DT_KEY COMPANY_ID PERSON_NAM ITEM SIDE DURATION
---------- ---------- ---------- ------ ---- ----------
20081117 ABC Peter Bag B 2.31
20081203 ABC Peter Bag S 266.233
20090319 ABC Peter Bag S 266.233


I have a date table called dt_tbl, which has all dates stored. I have to combine the above query with dt_tbl
to get data for any quarter.

create table dt_tbl
(dt_key number,
dt date,
day_of_wk number,
qtr_nm varchar2(2),
yt number);


insert into dt_tbl
values
(20081001, to_date('10/1/2008','mm/dd/yyyy'),4,'Q4',2008);

insert into dt_tbl
values
(20081203, to_date('12/3/2008','mm/dd/yyyy'),4,'Q4',2008);

insert into dt_tbl
values
(20081231, to_date('12/31/2008','mm/dd/yyyy'),4,'Q4',2008);

insert into dt_tbl
values
(20090101, to_date('1/1/2009','mm/dd/yyyy'),5,'Q1',2009);


insert into dt_tbl
values
(20090319, to_date('3/19/2009','mm/dd/yyyy'),5,'Q1',2009);

insert into dt_tbl
values
(20090331, to_date('3/31/2009','mm/dd/yyyy'),3,'Q1',2009);

commit;

I have to create a view by using the following query.

create view v_ord
(start_dt_key,end_Dt_key,company_id,total)
as
select start_dt_key,end_dt_key,company_id,
sum(case when duration >= 60 then 1 else 0 end) ord_total
from(
select o.dt_key,start_tbl.dt_key start_dt_key,end_tbl.dt_key end_dt_key
,o.company_id,o.person_name,o.item,o.side
,sum(f_timestamp2secs(closetime) - f_timestamp2secs(submittime)) duration
from tst_ord o
,dt_tbl start_tbl
,dt_tbl end_tbl
where o.dt_key between start_tbl.dt_key and end_tbl.dt_key
--and start_tbl.dt_key = 20090319 and end_tbl.dt_key = 20090319
group by o.dt_key,start_tbl.dt_key,end_tbl.dt_key,
o.company_id,o.person_name,o.item,o.side)
group by start_dt_key,end_dt_key,company_id

this query returns data in less than a second:
select *
from v_ord
where start_Dt_key = 20090319 and end_dt_key = 20090319;

I have to use the same view to generate the data for the previous quarter. Since I have a lot of data, this query slows down.
Can you please advise how this view's query can be improved to get the last quarter's data?



select *
from v_ord
where start_Dt_key = TO_CHAR (TRUNC (TRUNC (sysdate, 'Q') - 1, 'Q'), 'yyyymmdd')
and end_dt_key= TO_CHAR (TRUNC (sysdate, 'Q') - 1, 'yyyymmdd');
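
(The quarter arithmetic above works because TRUNC(d, 'Q') returns the first day of d's quarter, so subtracting one day lands on the last day of the previous quarter, and truncating that again to 'Q' gives the previous quarter's first day. A sketch with a hypothetical fixed date standing in for sysdate:)

```sql
-- any day in Q2 2009 in place of sysdate
select to_char(trunc(trunc(date '2009-04-15', 'Q') - 1, 'Q'), 'yyyymmdd') start_key,
       to_char(trunc(date '2009-04-15', 'Q') - 1, 'yyyymmdd')             end_key
  from dual;
-- start_key = 20090101, end_key = 20090331
```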

A reader, March 30, 2009 - 8:33 am UTC

For the above question, I would have to change the view's query to return any date range -

For example: for last quarter, this is what I run
select *
from v_ord
where start_Dt_key = TO_CHAR (TRUNC (TRUNC (sysdate, 'Q') - 1, 'Q'), 'yyyymmdd')
and end_dt_key= TO_CHAR (TRUNC (sysdate, 'Q') - 1, 'yyyymmdd');

For date range:
select *
from v_ord
where start_Dt_key = 20090101 and end_dt_key=20090330;

The query takes too much time because there is no join in the view to dt_tbl. I am not sure how these kinds of things are handled in a data warehouse. Can you please advise?



Reader, March 30, 2009 - 4:33 pm UTC

Tom,
Can you please answer the above question?
Tom Kyte
March 30, 2009 - 6:05 pm UTC

I didn't really have anything to say, it was a lot of information and the request to "please tune my query" isn't something I respond to often.

How do I questions - sure...
Tune my query - not so much.

Reader, March 31, 2009 - 8:34 am UTC

Tom,
Can you please let me know how the time dimension table is used in a data warehouse to get the data for any quarter or any business day? I can probably figure out how to tune my query if I can get some hints on the above question.


Thank you for your time.
Tom Kyte
March 31, 2009 - 9:10 am UTC

hmmm, well, you would have a table:

date_dim
---------
the_date primary key, the_qtr, the_fy ......


and you would join? Not sure what you mean.
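
For example, the join might look like this (a sketch - fact_tbl, its amount and trade_date columns are hypothetical names; date_dim is as above):

```sql
-- sum one quarter's worth of fact rows by joining through the date dimension
select f.company_id, sum(f.amount) total_amt
  from fact_tbl f, date_dim d
 where f.trade_date = d.the_date
   and d.the_fy     = 2008
   and d.the_qtr    = 'Q4'
 group by f.company_id;
```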

Outer v inner join - dramatic performance difference

Sal, April 01, 2009 - 3:44 pm UTC

Tom,

I know that you want a completely reproducible case so you can see what is happening. I have spent a lot of time trying to get a reproducible case, but I am unable to reproduce the problem. I have created the tables and populated them with dummy data with the same number of rows etc., but just can't seem to reproduce it. Maybe you can give me some pointers as to what I can investigate.

Basically, I have this query:

SELECT * /* new index */
FROM   (SELECT rownum rnum
,tab.*
FROM   (SELECT /*+FIRST_ROWS(25)*/
ACC.PROSPECT_NAME
,OPP.PROSPECT_ID
,OPP.OPPORTUNITY_ID
,OPP.NAME
,OPP.PROCESS_ID
,OPP.IS_HOT
,pt.process_name
,pt.process_sequence
FROM   SGAPP.OPPORTUNITY OPP
INNER  JOIN SGAPP.PROSPECT ACC ON OPP.PROSPECT_ID =
ACC.PROSPECT_ID
INNER JOIN process_tbl pt ON pt.process_id =
opp.process_id
WHERE  OPP.ARCHIVED_ON IS NULL AND
OPP.OPPORTUNITY_ID IN
(SELECT OT1.OPPORTUNITY_ID
FROM   SGAPP.OPPORTUNITY_TEAM OT1
WHERE  OT1.EMPLOYEE_ID = 101387 AND
OT1.ARCHIVED_ON IS NULL) AND
OPP.OPP_STATUS_ID = 8
ORDER  BY pt.process_sequence
,upper(prospect_name)
,upper(opp.NAME)) tab
WHERE  rownum <= 25)
WHERE  rnum >= 1


This query does 25000 IOs and produces this execution plan:

Rows Row Source Operation
------- ---------------------------------------------------
25 VIEW (cr=25459 pr=0 pw=0 time=155606 us)
25 COUNT STOPKEY (cr=25459 pr=0 pw=0 time=155524 us)
25 VIEW (cr=25459 pr=0 pw=0 time=155445 us)
25 SORT ORDER BY STOPKEY (cr=25459 pr=0 pw=0 time=155367 us)
4292 NESTED LOOPS (cr=25459 pr=0 pw=0 time=167533 us)
4292 NESTED LOOPS SEMI (cr=12302 pr=0 pw=0 time=64515 us)
4371 NESTED LOOPS (cr=3558 pr=0 pw=0 time=30710 us)
10 TABLE ACCESS BY INDEX ROWID PROCESS_TBL (cr=11 pr=0 pw=0 time=162 us)
10 INDEX RANGE SCAN I_PROCESS_TBL_LC_PK (cr=2 pr=0 pw=0 time=70 us)(object id 280948)
4371 TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=3547 pr=0 pw=0 time=21995 us)
4451 INDEX RANGE SCAN I_OPP_LC_PROC_ALERT (cr=36 pr=0 pw=0 time=8978 us)(object id 282047)
4292 INDEX RANGE SCAN I_EMP_ID_OPP_ID (cr=8744 pr=0 pw=0 time=32296 us)(object id 278064)
4292 TABLE ACCESS BY INDEX ROWID PROSPECT (cr=13157 pr=0 pw=0 time=77958 us)
4292 INDEX UNIQUE SCAN SYS_C0014254 (cr=8586 pr=0 pw=0 time=31649 us)(object id 63757)

Now, when I change the outer to an inner join (as it should be), the data remains the same as process_id is not null for the opportunity table. The query looks like this:

SELECT 
* /* inner join */
FROM   (SELECT rownum rnum
,tab.*
FROM   (SELECT /*+FIRST_ROWS(25)*/
ACC.PROSPECT_NAME
,OPP.PROSPECT_ID
,OPP.OPPORTUNITY_ID
,OPP.NAME
,OPP.PROCESS_ID
,OPP.IS_HOT
,pt.process_name
,pt.process_sequence
FROM   SGAPP.OPPORTUNITY OPP
INNER  JOIN SGAPP.PROSPECT ACC ON OPP.PROSPECT_ID =
ACC.PROSPECT_ID
right outer join process_tbl pt ON pt.process_id =
opp.process_id             
WHERE  OPP.ARCHIVED_ON IS NULL AND
OPP.OPPORTUNITY_ID IN
(SELECT OT1.OPPORTUNITY_ID
FROM   SGAPP.OPPORTUNITY_TEAM OT1
WHERE  OT1.EMPLOYEE_ID = 101387 AND
OT1.ARCHIVED_ON IS NULL) AND
OPP.OPP_STATUS_ID = 8
ORDER  BY pt.process_sequence
,upper(prospect_name)
,upper(opp.NAME)) tab
WHERE  rownum <= 25)
WHERE  rnum >= 1


And this performs 3700 IOs and the execution plan is much better:

Rows Row Source Operation
------- ---------------------------------------------------
25 VIEW (cr=3729 pr=0 pw=0 time=39859 us)
25 COUNT STOPKEY (cr=3729 pr=0 pw=0 time=39792 us)
25 VIEW (cr=3729 pr=0 pw=0 time=39713 us)
25 SORT ORDER BY STOPKEY (cr=3729 pr=0 pw=0 time=39633 us)
4292 HASH JOIN SEMI (cr=3729 pr=0 pw=0 time=40447 us)
4371 HASH JOIN (cr=3667 pr=0 pw=0 time=29973 us)
4371 HASH JOIN RIGHT OUTER (cr=3539 pr=0 pw=0 time=39804 us)
10 TABLE ACCESS BY INDEX ROWID PROCESS_TBL (cr=11 pr=0 pw=0 time=139 us)
10 INDEX RANGE SCAN I_PROCESS_TBL_LC_PK (cr=2 pr=0 pw=0 time=47 us)(object id 280948)
4371 TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=3528 pr=0 pw=0 time=17515 us)
4451 INDEX RANGE SCAN I_OPP_LC_PROC_ALERT (cr=17 pr=0 pw=0 time=4470 us)(object id 282047)
10550 INDEX RANGE SCAN I_LBLCOL_UPPER_PROS_NAME2 (cr=128 pr=0 pw=0 time=21139 us)(object id 282029)
9953 INDEX RANGE SCAN I_EMP_ID_OPP_ID (cr=62 pr=0 pw=0 time=19928 us)(object id 278064)

********************************************************************************


When I try to reproduce this problem, the inner join query returns the execution plan that I like - i.e. the one right above except with 'HASH JOIN RIGHT OUTER' turned into ' HASH JOIN'.

The cardinality on the process_id column is low, and I tried creating a histogram to help, but that did not change the plan either.

I have been looking at this for a while now - trying to get a reproducible case - but cannot seem to do so. I have inspected the not null constraints and data types and they seem to be fine.

Are there any pointers or guidance you can provide? Thanks a lot for your help.

I do have a complete test case prepared for this, but I am not pasting it because it does not reproduce the problem.
Tom Kyte
April 01, 2009 - 4:52 pm UTC

are you asking me "why do two entirely different queries have entirely different resource consumptions"?

I'm not sure what you are asking here really?

What is the "problem" exactly?

Left outer join

Sal, April 01, 2009 - 7:03 pm UTC

Tom,

I made a mistake and typed the wrong query in my message.

The 'outer join' query is actually a left outer join on process_tbl. In my rush to post, I modified the inner join query instead of pasting the outer join query from my IDE.

I have further trimmed down the problem and am working on a reproducible case.

But here are the two queries and their row source operations from TKPROF. Just switching the inner join to an outer join on process_tbl reduces the IOs from about 28,000 to 7,000. The results are the same, as opportunity.process_id is not null.

SELECT /*5*/
 *
FROM   (SELECT rownum rnum
       ,tab.*
    FROM   (SELECT /*+FIRST_ROWS(25)*/
         OPP.PROSPECT_ID
               FROM OPPORTUNITY OPP
           left outer
                --inner
                JOIN process_Tbl pt ON pt.process_id = opp.process_id
               WHERE  exists
            (SELECT OT1.OPPORTUNITY_ID
            FROM   SGAPP.OPPORTUNITY_TEAM OT1
            WHERE  OT1.EMPLOYEE_ID = 101387 AND
                OT1.ARCHIVED_ON IS NULL
                               and ot1.opportunity_id = OPP.OPPORTUNITY_ID)
         ORDER  BY upper(opp.NAME)
                ) tab
    WHERE  rownum <= 25)
WHERE  rnum >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.05       0.04          0       7986          0          25
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.05       0.05          0       7986          0          25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 533  

Rows     Row Source Operation
-------  ---------------------------------------------------
     25  VIEW  (cr=7986 pr=0 pw=0 time=49917 us)
     25   COUNT STOPKEY (cr=7986 pr=0 pw=0 time=49836 us)
     25    VIEW  (cr=7986 pr=0 pw=0 time=49756 us)
     25     SORT ORDER BY STOPKEY (cr=7986 pr=0 pw=0 time=49693 us)
   9953      HASH JOIN RIGHT OUTER (cr=7986 pr=0 pw=0 time=149887 us)
     10       INDEX RANGE SCAN I_PROCESS_TBL (cr=2 pr=0 pw=0 time=70 us)(object id 282049)
   9953       HASH JOIN SEMI (cr=7984 pr=0 pw=0 time=99181 us)
  10541        TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=7922 pr=0 pw=0 time=63283 us)
  10541         INDEX RANGE SCAN I_OPP_TEST (cr=31 pr=0 pw=0 time=10568 us)(object id 282048)
   9953        INDEX RANGE SCAN I_EMP_ID_OPP_ID (cr=62 pr=0 pw=0 time=19942 us)(object id 278064)

********************************************************************************

begin
  sys.dbms_output.get_line(line => :line, status => :status);
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           3
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          0          0           3

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 533  
********************************************************************************

SELECT /*5*/
 *
FROM   (SELECT rownum rnum
       ,tab.*
    FROM   (SELECT /*+FIRST_ROWS(25)*/
         OPP.PROSPECT_ID
               FROM OPPORTUNITY OPP
           --left outer
                inner
                JOIN process_Tbl pt ON pt.process_id = opp.process_id
               WHERE  exists
            (SELECT OT1.OPPORTUNITY_ID
            FROM   SGAPP.OPPORTUNITY_TEAM OT1
            WHERE  OT1.EMPLOYEE_ID = 101387 AND
                OT1.ARCHIVED_ON IS NULL
                               and ot1.opportunity_id = OPP.OPPORTUNITY_ID)
         ORDER  BY upper(opp.NAME)
                ) tab
    WHERE  rownum <= 25)
WHERE  rnum >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.15       0.14          0      29016          0          25
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.15       0.15          0      29016          0          25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 533  

Rows     Row Source Operation
-------  ---------------------------------------------------
     25  VIEW  (cr=29016 pr=0 pw=0 time=148007 us)
     25   COUNT STOPKEY (cr=29016 pr=0 pw=0 time=147925 us)
     25    VIEW  (cr=29016 pr=0 pw=0 time=147869 us)
     25     SORT ORDER BY STOPKEY (cr=29016 pr=0 pw=0 time=147792 us)
   9953      NESTED LOOPS SEMI (cr=29016 pr=0 pw=0 time=150229 us)
  10541       HASH JOIN  (cr=7924 pr=0 pw=0 time=137935 us)
     10        INDEX RANGE SCAN I_PROCESS_TBL (cr=2 pr=0 pw=0 time=62 us)(object id 282049)
  10541        TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=7922 pr=0 pw=0 time=73966 us)
  10541         INDEX RANGE SCAN I_OPP_TEST (cr=31 pr=0 pw=0 time=10569 us)(object id 282048)
   9953       INDEX RANGE SCAN I_EMP_ID_OPP_ID (cr=21092 pr=0 pw=0 time=75953 us)(object id 278064)

********************************************************************************




More information

Sal, April 01, 2009 - 7:55 pm UTC

Tom,

If I completely take the process_tbl join out, I get a good execution plan. However, when I add that join back in as an inner join, the execution plan reverts to the high-IO one.

The column opportunity.process_id is NOT NULL and a foreign key to process_tbl.process_id, so the optimizer knows that there is no row elimination going on.

I can't figure out what is causing the optimizer to pick a sub-optimal execution plan for the inner join.

Here is the tkprof when I commented out the join to process_Tbl:

SELECT /*6*/
 *
FROM   (SELECT rownum rnum
       ,tab.*
    FROM   (SELECT /*+FIRST_ROWS(25)*/
         OPP.OPPORTUNITY_ID
               FROM OPPORTUNITY OPP
           --left outer
                --inner
                --JOIN process_Tbl pt ON pt.process_id = opp.process_id
               WHERE  exists
            (SELECT 'x'
            FROM   SGAPP.OPPORTUNITY_TEAM OT1
            WHERE  OT1.EMPLOYEE_ID = 101387 AND
                OT1.ARCHIVED_ON IS NULL
                               and ot1.opportunity_id = OPP.OPPORTUNITY_ID)
         ORDER  BY upper(opp.NAME)
                ) tab
    WHERE  rownum <= 25)
WHERE  rnum >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.04       0.04          0       7984          0          25
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.04       0.04          0       7984          0          25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 533  

Rows     Row Source Operation
-------  ---------------------------------------------------
     25   COUNT STOPKEY (cr=7984 pr=0 pw=0 time=42182 us)
     25    VIEW  (cr=7984 pr=0 pw=0 time=42127 us)
     25     SORT ORDER BY STOPKEY (cr=7984 pr=0 pw=0 time=42049 us)
   9953      HASH JOIN SEMI (cr=7984 pr=0 pw=0 time=96892 us)
  10541       TABLE ACCESS BY INDEX ROWID OPPORTUNITY (cr=7922 pr=0 pw=0 time=63309 us)
  10541        INDEX RANGE SCAN I_OPP_TEST (cr=31 pr=0 pw=0 time=10592 us)(object id 282048)
   9953       INDEX RANGE SCAN I_EMP_ID_OPP_ID (cr=62 pr=0 pw=0 time=19940 us)(object id 278064)


Tom Kyte
April 02, 2009 - 9:29 am UTC

let's see the explain plan - I want to see the estimated cardinalities.

A reader

gaozhiwen, April 02, 2009 - 4:28 am UTC

How about this version - is it slower, the same, or better?
Thank you very much.

/* Formatted on 2009/04/02 16:13 (Formatter Plus v4.8.8) */
SELECT /*6*/
*
FROM (SELECT ROWNUM rnum, tab.*
FROM (SELECT /*+FIRST_ROWS(25)*/ opp.opportunity_id
FROM (SELECT /*+FIRST_ROWS(25)*/
opp.opportunity_id, opp.process_id
FROM opportunity opp
WHERE EXISTS (
SELECT 'x'
FROM sgapp.opportunity_team ot1
WHERE ot1.employee_id = 101387
AND ot1.archived_on IS NULL
AND ot1.opportunity_id =
opp.opportunity_id)
ORDER BY UPPER (opp.NAME)) opp
LEFT OUTER JOIN
process_tbl pt ON pt.process_id = opp.process_id
) tab
WHERE ROWNUM <= 25)
WHERE rnum >= 1;

Explain plans

Sal, April 02, 2009 - 9:41 am UTC

Tom,

Thank you very much for looking at this.

Here are the two explain plans:


Left Outer Join Explain Plan (8000 ios):

Plan hash value: 1981074212
 
-----------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                 |    25 |   650 |   543   (1)| 00:00:07 |
|*  1 |  VIEW                             |                 |    25 |   650 |   543   (1)| 00:00:07 |
|*  2 |   COUNT STOPKEY                   |                 |       |       |            |          |
|   3 |    VIEW                           |                 |  1183 | 15379 |   543   (1)| 00:00:07 |
|*  4 |     SORT ORDER BY STOPKEY         |                 |  1183 | 92274 |   543   (1)| 00:00:07 |
|*  5 |      HASH JOIN RIGHT OUTER        |                 |  1183 | 92274 |   542   (1)| 00:00:07 |
|*  6 |       INDEX RANGE SCAN            | I_PROCESS_TBL   |     2 |    18 |     2   (0)| 00:00:01 |
|*  7 |       HASH JOIN SEMI              |                 |  1183 | 81627 |   539   (1)| 00:00:07 |
|   8 |        TABLE ACCESS BY INDEX ROWID| OPPORTUNITY     |  1533 | 78183 |   467   (0)| 00:00:06 |
|*  9 |         INDEX RANGE SCAN          | I_OPP_TEST      |  1533 |       |     7   (0)| 00:00:01 |
|* 10 |        INDEX RANGE SCAN           | I_EMP_ID_OPP_ID |  8180 |   143K|    71   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("RNUM">=1)
   2 - filter(ROWNUM<=25)
   4 - filter(ROWNUM<=25)
   5 - access("PROCESS_ID"(+)="PROCESS_ID")
   6 - access("LABELCOL"(+)=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')))
   7 - access("OT1"."OPPORTUNITY_ID"="OPPORTUNITY_ID")
   9 - access("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')))
  10 - access("OT1"."EMPLOYEE_ID"=101387 AND "OT1"."ARCHIVED_ON" IS NULL)
       filter("OT1"."ARCHIVED_ON" IS NULL)
       
       

Inner Join Explain Plan (30000 ios):

Plan hash value: 1529916325
 
-----------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                 |     8 |   208 |   493   (1)| 00:00:06 |
|*  1 |  VIEW                             |                 |     8 |   208 |   493   (1)| 00:00:06 |
|*  2 |   COUNT STOPKEY                   |                 |       |       |            |          |
|   3 |    VIEW                           |                 |     8 |   104 |   493   (1)| 00:00:06 |
|*  4 |     SORT ORDER BY STOPKEY         |                 |     8 |   624 |   493   (1)| 00:00:06 |
|   5 |      NESTED LOOPS SEMI            |                 |     8 |   624 |   492   (1)| 00:00:06 |
|*  6 |       HASH JOIN                   |                 |    11 |   660 |   470   (1)| 00:00:06 |
|*  7 |        INDEX RANGE SCAN           | I_PROCESS_TBL   |     2 |    18 |     2   (0)| 00:00:01 |
|   8 |        TABLE ACCESS BY INDEX ROWID| OPPORTUNITY     |  1533 | 78183 |   467   (0)| 00:00:06 |
|*  9 |         INDEX RANGE SCAN          | I_OPP_TEST      |  1533 |       |     7   (0)| 00:00:01 |
|* 10 |       INDEX RANGE SCAN            | I_EMP_ID_OPP_ID |  6311 |   110K|     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("RNUM">=1)
   2 - filter(ROWNUM<=25)
   4 - filter(ROWNUM<=25)
   6 - access("PROCESS_ID"="PROCESS_ID")
   7 - access("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')))
   9 - access("LABELCOL"=TO_NUMBER(SYS_CONTEXT('COMPANY_LABEL','CURRENT_LABEL')))
  10 - access("OT1"."EMPLOYEE_ID"=101387 AND "OT1"."OPPORTUNITY_ID"="OPPORTUNITY_ID" AND 
              "OT1"."ARCHIVED_ON" IS NULL)
       filter("OT1"."ARCHIVED_ON" IS NULL)


Tom Kyte
April 02, 2009 - 11:05 am UTC

see step #9 in the last plan. It estimates 1,533 rows; it actually got 10,541 rows. That incorrect cardinality estimate throws off the cost computation entirely.

how many 'current-label' values do you have?

Additional information

Sal, April 02, 2009 - 9:43 am UTC

Tom,

Additionally, I would like to point out that the tables Opportunity and Process_tbl have a simple VPD predicate applied to them (where labelcol = sys_context('x','y')).

I am going to try to run these two queries from outside of VPD and apply the predicate explicitly to see if that changes anything.

Thanks again for looking at this.

Number of values

Sal, April 02, 2009 - 11:17 am UTC

Tom,

We currently have 680 different 'current_label' values.

How can I get the optimizer to come up with better estimates? Should I create a histogram on the labelcol field?

Also, does bind peeking hurt us where it will generate the plan based on the first labelcol value it sees for a particular query?

Thanks again for your help.
Tom Kyte
April 02, 2009 - 11:31 am UTC

... Should I create a
histogram on the labelcol field?
...

won't help, you have labelcol = function(), function is not known until after optimization happens. So, it is "labelcol = ?"

the labelcol data must be somewhat skewed though - is it?

Skewed data

Sal, April 02, 2009 - 11:47 am UTC

Tom,

The labelcol data is quite skewed. Basically, labelcol is like a company_id: it identifies which company a particular record belongs to. The larger companies have lots of data; the smaller companies have less.

Is the only solution then to have hash partition on labelcol?

Thanks!
Tom Kyte
April 02, 2009 - 12:49 pm UTC

... Is the only solution then to have hash partition on labelcol?...

that isn't going to work either. We'll know that it is "partition key/key", but we won't know until runtime what the value is - so we'll use global table statistics to optimize with. It'll come to the same conclusion.

and hash partition? that would have put a small company and a big company into the same partition.


is there a piece of logic you might have access to, such that when you set the context, you could sort of know that this is a "small", "medium" or "large" company?

We may need different plans for each and we may want to look at using query plan stability to help us do that.

Additional information

Sal, April 02, 2009 - 1:31 pm UTC

Tom,

We could conceivably create 'buckets' and assign each of our customers a 'small', 'medium' or 'large' value based on their data set.

Once we do that, how would you recommend generating different plans depending upon the size of the customer data?

We do issue a set_label call before a connection grab from the connection pool. So, we could do something there.

Really interested in what you have to suggest.

Thanks!




Tom Kyte
April 02, 2009 - 1:39 pm UTC

it would be non-trivial. You would have to 'fake out' the statistics with "big", "med" and "small" stats (using dbms_stats.set_xxxxx functions). enable stored outline capturing and run the queries to get the three sets of plans.

then when you grab the connection, you would set the label and alter session to use the appropriate outline category.

You would not need to do this for *every* query, just the problematic ones.



You can do this work on another machine (not in production) and move the outlines over later.
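Sketched in SQL*Plus terms, that approach might look like the following. The statistics values and the outline category name here are illustrative placeholders, not values from this thread:

```sql
-- 1) fake "small customer" statistics on the table (values are made up)
exec dbms_stats.set_table_stats( user, 'OPPORTUNITY', numrows => 500, numblks => 50 );

-- 2) capture stored outlines into a category while running the problem queries
alter session set create_stored_outlines = small_co;
-- ... run the problematic queries here ...
alter session set create_stored_outlines = false;

-- repeat steps 1 and 2 with "medium" and "large" statistics and categories

-- 3) after grabbing a connection and setting the label, pick the matching category
alter session set use_stored_outlines = small_co;
```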

Thanks!

Sal, April 02, 2009 - 2:31 pm UTC

Tom,

Thanks for your help.

What if we 'unbound' and put a literal value in the VPD policy predicate? I know that you are not an advocate of doing that but for this case, would it help if we combined that and created a histogram on the labelcol column?

The reason I ask about this solution is because it seems to me that every time we are joining other tables to opportunity and then have the 'where exists' on opportunity team, we are seeing this issue. And we do that in a ton of places.

I know that having the literal bound into the VPD policy will cause 600 different entries into the shared pool for 600 labelcol values. But since we are in a 64-bit environment, we could size our SGA to handle that.

Would you even consider that as an option?

Thanks.
Tom Kyte
April 02, 2009 - 4:49 pm UTC

... What if we 'unbound' and put a literal value in the VPD policy predicate? ...

I don't want 600 copies in the shared pool, that is why I asked "how many"

It is not a matter of sizing your SGA - it is the matter that when you parse - every time you parse, you would latch the shared pool for a long time looking through 600 copies of SQL to find the one you want. It is a scalability thing - it is not fixable with cpu, disk or memory.

Alternate to UNION ALL

Sanji, April 02, 2009 - 4:12 pm UTC

Tom,

We have a table GLAMOUNTS, which stores the debit beginning balance, credit beginning balance, and debit/credit amounts for each month, against a particular account number.

The requirement is

1> Calculate total amount "AMT" for every month
2> Calculate total amount "AMTYTD" for every month including amount from previous months.

Prototype of the table is

CREATE TABLE T1(
ACCOUNT NUMBER(6),
DB_BEG_BAL NUMBER(15,2),
CR_BEG_BAL NUMBER(15,2),
DB_AMOUNT_01 NUMBER(15,2),
DB_AMOUNT_02 NUMBER(15,2),
DB_AMOUNT_03 NUMBER(15,2),
CR_AMOUNT_01 NUMBER(15,2),
CR_AMOUNT_02 NUMBER(15,2),
CR_AMOUNT_03 NUMBER(15,2));

INSERT INTO T1 VALUES(99000,0,0,0,0,182.77, 0 ,1463.2, 0 );
INSERT INTO T1 VALUES(99001,0,0,0,0,0 ,-2992.42,0 , 0 );
INSERT INTO T1 VALUES(99002,0,0,0,0,0 ,-154604 ,0 , 0 );
INSERT INTO T1 VALUES(99003,0,0,0,0,1959.1, 0 ,0 ,-1959.41);


OPEN:SANJI:XFIN@DROVMGT>select * from t1;

ACCOUNT DB_BEG_BAL CR_BEG_BAL DB_AMOUNT_01 DB_AMOUNT_02 DB_AMOUNT_03 CR_AMOUNT_01 CR_AMOUNT_02 CR_AMOUNT_03
---------- ---------- ---------- ------------ ------------ ------------ ------------ ------------ ------------
99000 0 0 0 0 182.77 0 1463.2 0
99001 0 0 0 0 0 -2992.42 0 0
99002 0 0 0 0 0 -154604 0 0
99003 0 0 0 0 1959.1 0 0 -1959.41


The query has been written as


SELECT ACCOUNT,
0 AS MNTH,
(DB_BEG_BAL+CR_BEG_BAL) AS AMT,
0 AS AMTYTD
FROM T1
UNION ALL
SELECT ACCOUNT,
1 AS MNTH,
(DB_AMOUNT_01+CR_AMOUNT_01) AS AMT,
(DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) AS AMTYTD
FROM T1
UNION ALL
SELECT ACCOUNT,
2 AS MNTH,
(DB_AMOUNT_02+CR_AMOUNT_02) AS AMT,
(DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) + (DB_AMOUNT_02+CR_AMOUNT_02) AS AMTYTD
FROM T1
UNION ALL
SELECT ACCOUNT,
3 AS MNTH,
(DB_AMOUNT_03+ CR_AMOUNT_03) AS AMT,
(DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) + (DB_AMOUNT_02+CR_AMOUNT_02) + (DB_AMOUNT_03+CR_AMOUNT_03) AS AMTYTD
FROM T1
ORDER BY ACCOUNT, 2
/


ACCOUNT MNTH AMT AMTYTD
---------- ---------- ---------- ----------
99000 0 0 0
99000 1 0 0
99000 2 1463.2 1463.2
99000 3 182.77 1645.97
99001 0 0 0
99001 1 -2992.42 -2992.42
99001 2 0 -2992.42
99001 3 0 -2992.42
99002 0 0 0
99002 1 -154604 -154604
99002 2 0 -154604
99002 3 0 -154604
99003 0 0 0
99003 1 0 0
99003 2 0 0
99003 3 -.31 -.31

There are over 5 million records in this table and this query takes a considerable amount of time to execute. I'm wondering how to use analytic functions here to possibly make it more efficient. Would appreciate your inputs.

Rgds
Tom Kyte
April 02, 2009 - 4:57 pm UTC

ops$tkyte%ORA10GR2> select account,
  2         (DB_BEG_BAL+CR_BEG_BAL) amt0, 0 amtytd0,
  3             (DB_AMOUNT_01+CR_AMOUNT_01) AS AMT1, (DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) AS AMTYTD1,
  4         (DB_AMOUNT_02+CR_AMOUNT_02) AS AMT2, (DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) + (DB_AMOUNT_02+CR_AMOUNT_02) AS AMTYTD2,
  5             (DB_AMOUNT_03+ CR_AMOUNT_03) AS AMT3, (DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) + (DB_AMOUNT_02+CR_AMOUNT_02) + (DB_AMOUNT_03+CR_AMOUNT_03) AS AMTYTD3
  6    from t1
  7   order by account
  8  /

   ACCOUNT       AMT0    AMTYTD0       AMT1    AMTYTD1       AMT2    AMTYTD2       AMT3    AMTYTD3
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
     99000          0          0          0          0     1463.2     1463.2     182.77    1645.97
     99001          0          0   -2992.42   -2992.42          0   -2992.42          0   -2992.42
     99002          0          0    -154604    -154604          0    -154604          0    -154604
     99003          0          0          0          0          0          0       -.31       -.31

ops$tkyte%ORA10GR2> with
  2  four as (select level-1 l from dual connect by level <= 4)
  3  select account, l mnth,
  4         decode( l, 0, amt0, 1, amt1, 2, amt2, 3, amt3 ) amt,
  5         decode( l, 0, amtytd0, 1, amtytd1, 2, amtytd2, 3, amtytd3 ) amtytd
  6    from (
  7  select account,
  8         (DB_BEG_BAL+CR_BEG_BAL) amt0, 0 amtytd0,
  9             (DB_AMOUNT_01+CR_AMOUNT_01) AS AMT1, (DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) AS AMTYTD1,
 10         (DB_AMOUNT_02+CR_AMOUNT_02) AS AMT2, (DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) + (DB_AMOUNT_02+CR_AMOUNT_02) AS AMTYTD2,
 11             (DB_AMOUNT_03+ CR_AMOUNT_03) AS AMT3, (DB_BEG_BAL+CR_BEG_BAL) + (DB_AMOUNT_01+CR_AMOUNT_01) + (DB_AMOUNT_02+CR_AMOUNT_02) + (DB_AMOUNT_03+CR_AMOUNT_03) AS AMTYTD3
 12    from t1
 13         ), four
 14   order by account, 2
 15  /

   ACCOUNT       MNTH        AMT     AMTYTD
---------- ---------- ---------- ----------
     99000          0          0          0
     99000          1          0          0
     99000          2     1463.2     1463.2
     99000          3     182.77    1645.97
     99001          0          0          0
     99001          1   -2992.42   -2992.42
     99001          2          0   -2992.42
     99001          3          0   -2992.42
     99002          0          0          0
     99002          1    -154604    -154604
     99002          2          0    -154604
     99002          3          0    -154604
     99003          0          0          0
     99003          1          0          0
     99003          2          0          0
     99003          3       -.31       -.31

16 rows selected.

Multi-tenant architecture with skewed data

Sal, April 03, 2009 - 11:06 am UTC

Tom,

I have a couple of questions.

First, you said that histograms will not work because we have 'where labelcol = function'.

Even if we used labelcol = :x in the application layer in all queries, since we are using a bound value every time, we would face the same issue. Is my understanding correct?

To me it seems that my understanding is supported by this quote in Oracle documentation:

Histograms are not useful for columns with the following characteristics:

* All predicates on the column use bind variables.

So, it is not really an issue with using labelcol = sys_context; it is an issue with binding. If we use binds, histograms are not useful. Is that correct?

If so, are there any enhancements in 11g to address data skewness? This seems like a pretty general problem and wouldn't anyone with a multi tenant architecture face such issues?

Thanks!
Tom Kyte
April 03, 2009 - 11:39 am UTC

...
Even if we used labelcol = :x in the application layer in all queries, since we
are using a bound value every time, we would face the same issue. Is my
understanding correct?
...

or worse, as you would have bind peeking - and the plans would flip flop "good to bad to good to bad".


In general, histograms are useful

a) when your application always queries the very selective data in the place where the bound query is executed or...

b) when your application always queries the non-selective data in the place where the bound query is executed or...

c) when you use literals, as you might in a warehouse


It is not true at all, not in the least, to say "if we use binds, histograms are not useful", not since 9i with bind peeking

http://asktom.oracle.com/Misc/tuning-with-sqltracetrue.html

which is a two edged sword in 9i/10g. Adaptive cursor sharing in 11g can address this bind peeking issue.

http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-sqlplanmanagement.html



Adaptive cursor sharing

Sal, April 03, 2009 - 12:11 pm UTC

Tom,

Adaptive cursor sharing seems like the exact solution I was thinking.

Your explanation of histograms is very well put. Histograms currently only work with bind variable peeking on the hard parse.

So, on subsequent soft parses, the plan would remain the same even though the bind variable value change might cause a sub-optimal execution plan. So, if the query is always expected to look at the data of similar 'selectivity', histograms + binds will work perfectly.

Adaptive cursor sharing seems to be the answer I was looking for where in the soft parse, it checks to make sure stuff is 'still alright'. Thanks for the link.

Is it possible in 11g to say: only re-inspect these SQL statements if the value of the binds on this particular column changes (or, say, if the bind value for columns with histograms on them changes)? Or does it work on all binds automatically?

Thanks!

Tom Kyte
April 03, 2009 - 12:30 pm UTC


Is it possible in 11g to say that only inspect these SQL statements if the
value of the binds on this particular column changes (or say if the bind value
for columns with histograms on them changes)? Or would it work on all binds
automatically?



step 1) the adaptive cursor sharing looks at the binds

step 2) if it determines that "different values of this bind would cause different plans" (based on the statistics available), it will mark that query as "bind sensitive" - a column in v$sql shows this flag

step 3) it'll watch as we execute the statement. If the observed work performed is radically different from the estimated work, it marks it as "aware of a problem" and will set up different cursors for different sets of bind inputs.
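In 11g these flags are exposed in V$SQL, so you can watch the process described above happen. A quick check might look like this (the sql_text filter is just an example):

```sql
select sql_id,
       child_number,
       is_bind_sensitive,   -- set in step 2, when bind values could change the plan
       is_bind_aware,       -- set in step 3, once separate cursors exist per bind set
       executions
  from v$sql
 where sql_text like 'SELECT /*6*/%';
```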

A reader, April 10, 2009 - 6:20 am UTC

Tom,
This is the first time I am asking a question on this site. Please consider the following scenario.
I am trying something like this.

update party_name_japan a set a.full_name=(select d.cm_key from party_name_japan a inner join party_name
b on a.party_name_id=b.party_name_id
inner join xref_customer c on b.role_player_id=c.role_player_id
inner join xref_cm_key d on c.cm_key=d.cm_key)
where exists (select 1 from party_name_japan a inner join party_name b on a.party_name_id=b.party_name_id
)

Xref_cm_key and xref_customer are joined logically and do not have a physical primary/foreign key relationship.
Party_name_japan and Party_name are joined through a primary/foreign key.
role_player_id is a logical relationship between xref_customer and party_name.
The above query is running slowly. Is there any way to improve it?
Tom Kyte
April 13, 2009 - 4:27 pm UTC

no create tables
no create indexes...

not really easy to say anything - I cannot test a *thing*

Now, when I format this statement to make it readable, I see something "strange"

update party_name_japan a
   set a.full_name=(select d.cm_key
                      from party_name_japan a inner join party_name b on a.party_name_id=b.party_name_id
                                              inner join xref_customer c on b.role_player_id=c.role_player_id
                                              inner join xref_cm_key d on c.cm_key=d.cm_key
                   )
 where exists (select 1
                 from party_name_japan a inner join party_name b on a.party_name_id=b.party_name_id)



that where exists, that does *not* look right to me, does it look right to you??


Because basically that says "if there is ANY row in party_name_japan that has a match in party_name, then UPDATE ALL ROWS in party_name_japan"

That where exists will either

a) cause EVERY row to be updated
b) cause ZERO rows to be updated


is that what you mean to do?
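If the intent was a per-row lookup, the subqueries would need to correlate back to the row being updated instead of re-joining party_name_japan. A sketch only, since the real table definitions and keys were never posted:

```sql
update party_name_japan a
   set a.full_name = (select d.cm_key
                        from party_name b
                        join xref_customer c on b.role_player_id = c.role_player_id
                        join xref_cm_key  d on c.cm_key = d.cm_key
                       where b.party_name_id = a.party_name_id)
 where exists (select null
                 from party_name b
                where b.party_name_id = a.party_name_id);
```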

Regarding "UPDATE"

Vis, April 15, 2009 - 4:51 am UTC

Hi Tom,
Which two steps are performed the first time any UPDATE statement is issued after the instance is started? The dumps give these answers:
1) Writing modified data to the archived redo log files
2) Updating the control file to indicate the most recent checkpoint
I really don't understand this answer. Would you please explain it?
Also, are these the common steps carried out every time any SQL statement is issued for the first time? If not, then what are those steps?
Sorry if someone has already asked you this question.
Tom Kyte
April 15, 2009 - 9:45 am UTC

well, #1 cannot be true since you don't need to have archiving enabled at all.

and #2 doesn't happen after "any update statement is issued"

I'm not really sure at all what you are asking - and I don't know what question your theoretical answers are to.

Vis, April 15, 2009 - 11:43 am UTC

Hi Tom,
Actually I am preparing for the OCA exam, and I found that question in one of the dumps I am referring to. The two answers given in that dump are the ones I gave in my previous post.
The other answer choices are:
1) creating parse tree of stmt.
2) writing modified data block to data files
3) updating data file header to indicate most recent checkpt.
4) Reading blocks to database buffer cache if they are not already there.
And you need to choose an answer from the given options only.
Tom Kyte
April 15, 2009 - 2:06 pm UTC


I don't know what a "dump" is. But those two 'points' aren't valid.


parsing the sql statement will happen (but the way they wrote it "creating parse tree of stmt" is poor).

reading blocks to the buffer cache will happen if needed.


Basic Sql Question

Samy, April 16, 2009 - 7:15 am UTC

might be basic questions for you.

1) Does declaring a variable in a procedure take up that much memory space? Say,
if I declare v_Output Varchar(30000);
is that much memory space utilised?

2) I have 5 sets of variables of the same lengths in a PL/SQL block.
Does declaring a type with an array make the code more efficient?

VFName1 Varchar(150);
VMName1 Varchar(50);
VLName1 Varchar(30);

VFName2 Varchar(150);
VMName2 Varchar(50);
VLName2 Varchar(30);

VFName3 Varchar(150);
VMName3 Varchar(50);
VLName3 Varchar(30);

VFName4 Varchar(150);
VMName4 Varchar(50);
VLName4 Varchar(30);

VFName5 Varchar(150);
VMName5 Varchar(50);
VLName5 Varchar(30);

3) Does using GROUP BY instead of DISTINCT make the code more efficient?

Select Distinct EmpNo, Ename From Emp ;

Select EmpNo,Ename From Emp Group By EmpNo,Ename;

Tom Kyte
April 16, 2009 - 9:58 am UTC

1) it depends on the size of the variable. For strings under about 2000 bytes, the memory is statically allocated; for strings larger than that, we dynamically allocate just what we need, as we need it.

2) it sure makes the code easier.


3) in that case, since empno is the primary key - the only sensible query would just be select empno, ename from emp;

If you wanted distinct ename's, you would use......

distinct


you use group by to aggregate, not to distinct.
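For example, using the standard SCOTT.EMP demo table:

```sql
-- to get the distinct values, use DISTINCT
select distinct ename from emp;

-- to aggregate, use GROUP BY
select deptno, count(*), avg(sal)
  from emp
 group by deptno;
```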

Basic Question

Samy, April 16, 2009 - 10:16 am UTC

Dear Tom, Thanks a Lot for your reply.

Regarding GROUP BY and DISTINCT: I've been told that DISTINCT takes more time than GROUP BY, because in the case of DISTINCT the data needs to be sorted. Is that true?

Secondly, do you have any link or book that explains in detail how a SQL query is processed? When I run a SQL query, what steps does it go through - say, the WHERE condition executes and filters the records, then the HAVING condition executes, and so on until it fetches the required result?


Tom Kyte
April 16, 2009 - 11:00 am UTC

well, group by might have to sort too!

they can do the same things.



ops$tkyte%ORA10GR2> /*
ops$tkyte%ORA10GR2> drop table t;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t
ops$tkyte%ORA10GR2> as
ops$tkyte%ORA10GR2> select *
ops$tkyte%ORA10GR2>   from all_objects
ops$tkyte%ORA10GR2> /
ops$tkyte%ORA10GR2> */




select distinct owner
from
 t


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.02       0.02          0        690          0          30
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.02       0.02          0        690          0          30

Rows     Row Source Operation
-------  ---------------------------------------------------
     30  HASH UNIQUE (cr=690 pr=0 pw=0 time=28064 us)
  49785   TABLE ACCESS FULL T (cr=690 pr=0 pw=0 time=49948 us)

********************************************************************************

select owner
from
 t group by owner


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.02       0.02          0        690          0          30
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.02       0.02          0        690          0          30

Rows     Row Source Operation
-------  ---------------------------------------------------
     30  HASH GROUP BY (cr=690 pr=0 pw=0 time=22449 us)
  49785   TABLE ACCESS FULL T (cr=690 pr=0 pw=0 time=49852 us)




you can always test things, when someone says "X is true", ask them to put up or shut up (get some evidence that X is true)...




see part iv of this:

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm


if you want to read it in the way I write things - Effective Oracle By Design has a section on how sql statements are processed as well.

Reader, April 17, 2009 - 11:49 am UTC

CREATE TABLE dt_tab
( dt_key NUMBER,
dt DATE,
day_name varchar2(30),
day_of_week NUMBER(1),
yr NUMBER(4),
is_holiday VARCHAR2(1)
);

INSERT INTO dt_tab
SELECT TO_CHAR(dt,'yyyymmdd') as dt_key
,dt
,TO_CHAR(dt,'DY') day_name
,TO_CHAR(dt,'D') day_of_week
,TO_CHAR(dt,'YYYY') yr
,'N' as is_holiday
FROM (SELECT TO_DATE('01-jan-2009','dd-mon-yyyy')+ROWNUM-1 dt
FROM all_objects
WHERE ROWNUM <= (SYSDATE-TO_DATE('01-jan-2009','dd-mon-yyyy'))+1);

UPDATE dt_tab
SET is_holiday = 'Y'
WHERE dt_key in (20090101,20090119,20090216,20090410);

The following query is used to insert data into a temporary table. It gets the dates for the last 5th, 15th, 20th and 60th day, and also the dates for the prior 15th day (DP15) and prior 20th day (DP20).

When rnum is 20, the date is used as the prior 15th day;
when rnum is 40, the date is used as the prior 20th day;
when rnum is 120, the date is used as the prior 60th day;
and so on.

create table tmp_dt_tbl
as
SELECT dt, rnum,
(CASE rnum WHEN 5 THEN 'DL5'
WHEN 15 THEN 'DL15'
WHEN 20 THEN 'DL20'
WHEN 60 THEN 'DL60'
ELSE NULL
END ) last_tag,
(CASE rnum WHEN 20 THEN 'DP15'
WHEN 40 THEN 'DP20'
ELSE NULL
END) prior_tag
FROM (select dt,rownum rnum
from (select dt from dt_tab where is_holiday='N' and day_of_week between 2 and 6
and dt >= trunc(sysdate) -110 and dt < trunc(sysdate) order by dt_key desc
)) ;

The function below is used to get the date:

create or replace FUNCTION get_dt (p_type VARCHAR2, p_val VARCHAR2)
RETURN DATE
AS
l_date DATE;
BEGIN
IF (p_type = 'DP')
THEN
SELECT dt INTO l_date
FROM tmp_dt_tbl WHERE prior_tag = p_val AND ROWNUM < 2;
ELSIF (p_type = 'DL')
THEN
SELECT dt INTO l_date FROM tmp_dt_tbl WHERE last_tag = p_val AND ROWNUM < 2;
END IF;

RETURN l_date;
END get_dt;

The above function is used to get data from the t_activity table for reporting purposes. t_activity has millions of records.
Can you advise on some ways to improve this process? The view v_t_activity takes too long to return data.

create table t_activity(tid number,tdate date,order_cnt number)

create view v_t_activity
as
select tid,sum(case when tdate>=get_dt('DL', 'DL20') then order_cnt else 0 end) as last_20days_orders,
SUM(CASE when tdate >=get_dt('DP','DP20') and tdate < get_dt('DL', 'DL20') then order_cnt else 0 end) prior_20_days_orders
from t_activity
group by tid;

Reader, April 21, 2009 - 8:55 pm UTC

Tom,
Can you please give some advice on improving the above process?
Tom Kyte
April 23, 2009 - 11:53 am UTC

the logic made no sense to me at all.

it was not explained (in specification form), and I cannot reverse engineer everything every time.

Database error ORA-00932:inconsistent data types:Expected number got blob

Hemalatha, April 24, 2009 - 6:08 am UTC

Hi Tom,

I created a table and inserted an image into it.

I executed the steps below:

CREATE OR REPLACE DIRECTORY Dtest1 AS 'G:\Test';

GRANT READ ON DIRECTORY Dtest1 TO PUBLIC;

CREATE TABLE Dtest_BLOBTABLE (BLOBID INTEGER, BLOBFORMAT VARCHAR2(3), BLOBLENGTH INTEGER, BLOBNAME VARCHAR2(50), BFILEFIELD BFILE, BLOBFIELD BLOB);


create or replace PROCEDURE dtest_INSERTBLOBFROMBFILE
( P_BLOBNAME IN dtest_BLOBTABLE.BLOBNAME%TYPE )
AS
P_BLOBID INTEGER;
P_BLOBFORMAT VARCHAR2(3);
P_BLOBLENGTH INTEGER;
P_BFILEFIELD BFILE;
P_BLOBFIELD BLOB;
BUFFER RAW(1024);
BUFFER_LENGTH INTEGER := 1024;
OFFSET INTEGER := 1;
CURRENT_WRITE INTEGER;
NUMBER_OF_WRITES INTEGER;
REMAINING_BYTES INTEGER;
BEGIN

SELECT (COUNT(BLOBID)+1) INTO P_BLOBID FROM Dtest_BLOBTABLE;
P_BLOBFORMAT := UPPER(SUBSTR(P_BLOBNAME,(LENGTH(P_BLOBNAME)-2),3));
P_BFILEFIELD := BFILENAME('DTEST1',P_BLOBNAME);
P_BLOBLENGTH := DBMS_LOB.GETLENGTH(P_BFILEFIELD);
INSERT INTO Dtest_BLOBTABLE VALUES (P_BLOBID,P_BLOBFORMAT,P_BLOBLENGTH,UPPER(P_BLOBNAME),P_BFILEFIELD,RAWTOHEX('0x00'));
SELECT BLOBFIELD INTO P_BLOBFIELD FROM Dtest_BLOBTABLE WHERE BLOBID = P_BLOBID FOR UPDATE;
DBMS_LOB.FILEOPEN(P_BFILEFIELD,DBMS_LOB.FILE_READONLY);
NUMBER_OF_WRITES := TRUNC(P_BLOBLENGTH/BUFFER_LENGTH);
FOR CURRENT_WRITE IN 1..NUMBER_OF_WRITES LOOP
DBMS_LOB.READ(P_BFILEFIELD,BUFFER_LENGTH,OFFSET,BUFFER);
DBMS_LOB.WRITE(P_BLOBFIELD,BUFFER_LENGTH,OFFSET,BUFFER);
OFFSET := OFFSET + BUFFER_LENGTH;
END LOOP;
REMAINING_BYTES := P_BLOBLENGTH-(BUFFER_LENGTH*NUMBER_OF_WRITES);
IF (REMAINING_BYTES > 0) THEN
DBMS_LOB.READ(P_BFILEFIELD,REMAINING_BYTES,OFFSET,BUFFER);
DBMS_LOB.WRITE(P_BLOBFIELD,REMAINING_BYTES,OFFSET,BUFFER);
END IF;
DBMS_LOB.FILECLOSE(P_BFILEFIELD);
COMMIT;
END;

When I pull the image field into Crystal Reports 8.5, it gives an error like:
ORA-00932: inconsistent datatypes: expected NUMBER got BLOB

Please help me out how to create and insert an image in oracle 9i or 10g.

Thanks,
Hemalatha

Tom Kyte
April 27, 2009 - 11:56 am UTC

<b> SELECT (COUNT(BLOBID)+1) INTO P_BLOBID FROM Dtest_BLOBTABLE;</b>
 P_BLOBFORMAT := UPPER(SUBSTR(P_BLOBNAME,(LENGTH(P_BLOBNAME)-2),3));
 P_BFILEFIELD := BFILENAME('DTEST1',P_BLOBNAME);
 P_BLOBLENGTH := DBMS_LOB.GETLENGTH(P_BFILEFIELD);
<b> INSERT INTO Dtest_BLOBTABLE VALUES
(P_BLOBID, </b> P_BLOBFORMAT,P_BLOBLENGTH,UPPER(P_BLOBNAME),P_BFILEFIELD,RAWTOHEX('0x00'));



do you know how fragile that is - how bad that is - how much that just simply DOES NOT WORK??????? Not only is it horribly slow, it just does not work. Left as an exercise for the reader to figure out why - with this hint: what happens when you have more than one user (and remember, we are NOT sqlserver, the blocking of sqlserver just doesn't happen here)
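Tom's point about SELECT COUNT(*)+1 can be made concrete. A minimal Python sketch (hypothetical, not Oracle code) of two sessions that each read the row count before either inserts: both compute the same "next id", which is exactly the duplicate-key hazard a sequence avoids.

```python
import itertools

# One committed row already exists.
table = [{'blobid': 1}]

def next_id_count_style(snapshot):
    """SELECT COUNT(*)+1 -- each session computes its id from what it can see."""
    return len(snapshot) + 1

# Both sessions read BEFORE either one inserts: the classic read-then-write race.
a = next_id_count_style(table)
b = next_id_count_style(table)
table.append({'blobid': a})
table.append({'blobid': b})   # duplicate blobid: a == b

# A sequence hands out a distinct value per call, race or no race.
seq = itertools.count(2)
c, d = next(seq), next(seq)
```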

Your procedure is way too complex, there is an api to copy a file into the blob

ops$tkyte%ORA10GR2> CREATE TABLE Dtest_BLOBTABLE
  2  (BLOBID INTEGER,
  3   BLOBFORMAT VARCHAR2(3),
  4   BLOBLENGTH INTEGER,
  5   BLOBNAME VARCHAR2(50),
  6   BFILEFIELD BFILE,
  7   BLOBFIELD BLOB
  8  );

Table created.

ops$tkyte%ORA10GR2> create sequence blob_seq;

Sequence created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace PROCEDURE dtest_INSERTBLOBFROMBFILE
  2  ( P_BLOBNAME IN dtest_BLOBTABLE.BLOBNAME%TYPE )
  3  AS
  4      P_BLOBFORMAT VARCHAR2(3);
  5      P_BLOBLENGTH INTEGER;
  6      P_BFILEFIELD BFILE;
  7      P_BLOBFIELD BLOB;
  8  BEGIN
  9
 10   P_BLOBFORMAT := UPPER(SUBSTR(P_BLOBNAME,(LENGTH(P_BLOBNAME)-2),3));
 11   P_BFILEFIELD := BFILENAME('DTEST1',P_BLOBNAME);
 12   P_BLOBLENGTH := DBMS_LOB.GETLENGTH(P_BFILEFIELD);
 13
 14   INSERT INTO Dtest_BLOBTABLE VALUES
 15   (blob_seq.nextval, P_BLOBFORMAT, P_BLOBLENGTH,
 16    UPPER(P_BLOBNAME), P_BFILEFIELD, empty_blob() )
 17   returning blobfield into p_blobfield;
 18
 19   dbms_lob.fileopen(p_bfilefield,dbms_lob.file_readonly);
 20   dbms_lob.loadFromFile( p_blobfield, p_bfilefield, p_bloblength );
 21   dbms_lob.fileclose(p_bfilefield);
 22  end;
 23  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dtest_insertBlobFromBfile( 'expdat.dmp' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select blobid, blobformat, bloblength, blobname, dbms_lob.getlength(blobfield) len from Dtest_BLOBTABLE ;

    BLOBID BLO BLOBLENGTH BLOBNAME                                                  LEN
---------- --- ---------- -------------------------------------------------- ----------
         1 DMP     126976 EXPDAT.DMP                                             126976





now that said, it is not entirely clear at all where you are getting this error message - it does not seem to be coming from the stored procedure??? So I have no idea what is causing it (because you haven't told us)

Comma Separated Query.

Rajeshwaran, Jeyabal, July 19, 2009 - 8:07 am UTC

Tom,

I have some data in this format.

CREATE TABLE T1(record_id NUMBER,record_Values VARCHAR2(50));

INSERT ALL INTO T1 (record_id,record_values)
SELECT 8001 AS record_id ,'P1 B1' AS record_values FROM DUAL 
UNION ALL
SELECT 8001 AS record_id ,'P2 B2' AS record_values FROM DUAL 
UNION ALL
SELECT 8001 AS record_id ,'P3 B3' AS record_values FROM DUAL 
UNION ALL
SELECT 8002 AS record_id ,'P1 B1' AS record_values FROM DUAL 
UNION ALL
SELECT 8002 AS record_id ,'P2 B2' AS record_values FROM DUAL 
UNION ALL
SELECT 8003 AS record_id ,'P1 B1' AS record_values FROM DUAL;

COMMIT;

scott@10G> select * from t1;

 RECORD_ID RECORD_VAL
---------- ----------
      8001 P1 B1
      8001 P2 B2
      8001 P3 B3
      8002 P1 B1
      8002 P2 B2
      8003 P1 B1

But I need the output like this.
 RECORD_ID RECORD_VALUES
---------- ------------------------------
      8001 P1,P2,P3    B2,B3,B1
      8002 P1,P2    B2,B1
      8003 P1     B1


I am using Oracle10gR2
Can you please show me how this can be done?

Thanks,
Rajesh.

Tom Kyte
July 24, 2009 - 11:30 am UTC

ops$tkyte%ORA10GR2> select record_id,
  2         max(substr(sys_connect_by_path(v1,','),2)) v1str,
  3         max(substr(sys_connect_by_path(v2,','),2)) v2str
  4    from (
  5  select record_id,
  6         substr(record_values,1,instr(record_values,' ')-1) v1,
  7         substr(record_values,instr(record_values,' ')+1) v2,
  8         row_number() over (partition by record_id order by rowid) rn
  9    from t1
 10         )
 11   start with rn = 1
 12  connect by prior record_id = record_id and prior rn+1 = rn
 13  group by record_id
 14  /

 RECORD_ID V1STR                V2STR
---------- -------------------- --------------------
      8001 P1,P2,P3             B1,B2,B3
      8002 P1,P2                B1,B2
      8003 P1                   B1
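For readers checking the expected result, here is a minimal Python sketch of the same aggregation the sys_connect_by_path query performs: split each value on the space, then comma-join each column per record_id. The sample rows are the ones from the question.

```python
from collections import defaultdict

# the question's rows: (record_id, record_values)
rows = [(8001, 'P1 B1'), (8001, 'P2 B2'), (8001, 'P3 B3'),
        (8002, 'P1 B1'), (8002, 'P2 B2'), (8003, 'P1 B1')]

agg = defaultdict(lambda: ([], []))
for rid, val in rows:
    v1, v2 = val.split(' ')   # 'P1 B1' -> the two logical columns
    agg[rid][0].append(v1)
    agg[rid][1].append(v2)

# comma-join each column per record_id (sys_connect_by_path + max, in effect)
result = {rid: (','.join(v1s), ','.join(v2s)) for rid, (v1s, v2s) in agg.items()}
```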


Query Question

David Piazza, July 25, 2009 - 10:35 pm UTC

Tom,

I have a table with the following data:

      C_ID C_M_ID    C_NAME     C_CODE
---------- --------- ---------- ----------
         1 1         ABC        12345
         2 1         ABC        12345
         3 1         ABC        23456
         4 1         ABC        34567
         5 1         XYZ        12345
         6 1         XYZ        98765
         7 1         QRS        45678
         8 1         QRS        45678
         9 1         WXY        12345
        10 2         ABC        12345
        11 2         QRS        23456
        12 2         QRS        23456
        13 3         ABC        12345
        14 3         QRS        23456

I wish to create a report as follows:

1. Choose records with multiple C_M_ID's
2. Eliminate records with duplicate C_M_ID, C_NAME, C_CODE
3. Get records with same C_M_ID's, but C_NAME and C_CODE is
different.

Thus I would expect a report like:

         1 1         ABC        12345
         6 1         XYZ        98765
         7 1         QRS        45678
        10 2         ABC        12345
        11 2         QRS        23456
        13 3         ABC        12345
        14 3         QRS        23456

I had to produce reports where there were multiple C_M_ID's and the same C_NAME's but different C_CODE's, and another where there were multiple C_M_ID's and different C_NAME's but the same C_CODE's, and I used analytics for them. But this report has me stumped. Below is the create table and index syntax:

create table t
(
c_id number not null,
c_m_id varchar2(9) not null,
c_name varchar2(10) not null,
c_code varchar2(10) not null
)
/

create unique index mid.t_pk on mid.t
(c_id);

alter table mid.t add (
constraint t_pk
primary key
(c_id));

INSERT INTO t values(1,'1','ABC','12345');
INSERT INTO t values(2,'1','ABC','12345');
INSERT INTO t values(3,'1','ABC','23456');
INSERT INTO t values(4,'1','ABC','34567');
INSERT INTO t values(5,'1','XYZ','12345');
INSERT INTO t values(6,'1','XYZ','98765');
INSERT INTO t values(7,'1','QRS','45678');
INSERT INTO t values(8,'1','QRS','45678');
INSERT INTO t values(9,'1','WXY','12345');
INSERT INTO t values(10,'2','ABC','12345');
INSERT INTO t values(11,'2','QRS','23456');
INSERT INTO t values(12,'2','QRS','23456');
INSERT INTO t values(13,'3','ABC','12345');
INSERT INTO t values(14,'3','QRS','23456');

COMMIT;

Any help is much appreciated... Thanks...
Tom Kyte
July 26, 2009 - 8:10 am UTC

1. Choose records with multiple C_M_ID's
2. Eliminate records with duplicate C_M_ID, C_NAME, C_CODE
3. Get records with same C_M_ID's, but C_NAME and C_CODE is
different.


why is this record in the answer?

   C_ID C_M_ID    C_NAME     C_CODE
---------- --------- ---------- ----------
1          1         ABC        12345



there is a duplicate C_M_ID, C_NAME, C_CODE for that record. You said clearly - get rid of records that have duplicates.


Further, why isn't

      C_ID C_M_ID    C_NAME     C_CODE            CNT
---------- --------- ---------- ---------- ----------
         3 1         ABC        23456               1


in your answer? c_m_id,c_name,c_code = (1,abc,23456) and that is NOT duplicated.


Your specification has miles to go before we can ascertain what you mean....

SELECT * FROM table - In a View

Rama Subramanian G, July 27, 2009 - 1:03 am UTC

Tom,

I have a table on which I have a view which selects all the columns of the table with a

SELECT * FROM table WHERE...

All was well until I altered the table to add a column.

Now, the columns in the view are not in the same order as in the table. I had to rewrite the view replacing the * with individual column names to retain the order of columns identical to that of the underlying table.

What could be a possible explanation for this behavior?

Tom Kyte
July 27, 2009 - 1:26 am UTC

when you create a view with "*", the "*" is expanded out for you. The view is frozen as of that point in time, insulated (by design) from changes to the base table.

The columns in the view should NOT change because you changed the order in the base table.


ops$tkyte%ORA10GR2> create or replace view v as select * from emp;

View created.

ops$tkyte%ORA10GR2> select text from user_views where view_name = 'V';

TEXT
-------------------------------------------------------------------------------
select "EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO" from emp


Query Question

David Piazza, July 27, 2009 - 2:02 am UTC

I apologize for the confusion. I didn't state requirement 2 clearly. I meant: save one of the duplicates and delete the others. For example, C_ID's 1 & 2 are the same and 7 & 8 are the same, so one from each pair would be in the result set. I'm thinking C_ID's 1 & 7 would be left if partitioning and keeping rownum=1. After step two the following records should be left:

      C_ID C_M_ID    C_NAME     C_CODE
---------- --------- ---------- ----------
         1 1         ABC        12345
         3 1         ABC        23456
         4 1         ABC        34567
         5 1         XYZ        12345
         6 1         XYZ        98765
         7 1         QRS        45678
         9 1         WXY        12345
        10 2         ABC        12345
        11 2         QRS        23456
        13 3         ABC        12345
        14 3         QRS        23456

3 1 ABC 23456 is not included because the C_NAME "ABC" is the same as
1 1 ABC 12345


Tom Kyte
July 27, 2009 - 5:52 am UTC

still fuzzy - but I think you mean this:

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select t.*,
  4         count(*) over (partition by c_m_id) cnt,
  5         row_number() over (partition by c_m_id, c_name, c_code order by c_id, rowid) rn
  6    from t
  7         )
  8   where cnt > 1
  9     and rn = 1
 10   order by c_id
 11  /

      C_ID C_M_ID    C_NAME     C_CODE            CNT         RN
---------- --------- ---------- ---------- ---------- ----------
         1 1         ABC        12345               9          1
         3 1         ABC        23456               9          1
         4 1         ABC        34567               9          1
         5 1         XYZ        12345               9          1
         6 1         XYZ        98765               9          1
         7 1         QRS        45678               9          1
         9 1         WXY        12345               9          1
        10 2         ABC        12345               3          1
        11 2         QRS        23456               3          1
        13 3         ABC        12345               2          1
        14 3         QRS        23456               2          1

11 rows selected.
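Tom's query combines two window functions: count(*) per c_m_id enforces "multiple rows per group", and row_number() per (c_m_id, c_name, c_code) keeps one row per duplicate set. A minimal Python sketch of that logic, using the question's rows (sorting by c_id stands in for the rowid tiebreak):

```python
from collections import Counter

# the question's 14 rows: (c_id, c_m_id, c_name, c_code)
rows = [(1, '1', 'ABC', '12345'), (2, '1', 'ABC', '12345'),
        (3, '1', 'ABC', '23456'), (4, '1', 'ABC', '34567'),
        (5, '1', 'XYZ', '12345'), (6, '1', 'XYZ', '98765'),
        (7, '1', 'QRS', '45678'), (8, '1', 'QRS', '45678'),
        (9, '1', 'WXY', '12345'), (10, '2', 'ABC', '12345'),
        (11, '2', 'QRS', '23456'), (12, '2', 'QRS', '23456'),
        (13, '3', 'ABC', '12345'), (14, '3', 'QRS', '23456')]

cnt = Counter(r[1] for r in rows)      # count(*) over (partition by c_m_id)
seen, kept = set(), []
for c_id, c_m_id, c_name, c_code in sorted(rows):  # c_id order ~ rowid tiebreak
    key = (c_m_id, c_name, c_code)
    if cnt[c_m_id] > 1 and key not in seen:        # cnt > 1 and rn = 1
        seen.add(key)
        kept.append(c_id)
```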


Query-Not Quite

David Piazza, July 27, 2009 - 12:51 pm UTC

3. Get records with same C_M_ID's, but C_NAME and C_CODE is
different.

There should be only one distinct C_NAME and one distinct C_CODE in each group of C_M_ID's. I've taken the result set after applying requirement 2, and added an explanation of whether or not each record should be included in the final result set after applying requirement 3. Hopefully this makes it clearer.

C_ID C_M_ID C_NAME C_CODE EXPLANATION
---------- --------- ---------- ---------- -----------
1 1 ABC 12345 INCLUDED - 1st record in group C_M_ID=1
3 1 ABC 23456 NOT INCLUDED - C_NAME of "ABC" same as C_ID=1
4 1 ABC 34567 NOT INCLUDED - C_NAME of "ABC" same as C_ID=1
5 1 XYZ 12345 NOT INCLUDED - C_CODE of "12345" same as C_ID=1
6 1 XYZ 98765 INCLUDED - Both C_NAME of "XYZ" and "98765" different than CID=1
7 1 QRS 45678 INCLUDED - Both C_NAME of "QRS" and "45678" different than CID=1
9 1 WXY 12345 NOT INCLUDED - C_CODE of "12345" same as C_ID=1
10 2 ABC 12345 INCLUDED - 1st record in group C_M_ID=2
11 2 QRS 23456 INCLUDED - Both C_NAME of "QRS" and "23456" different than C_ID=10
13 3 ABC 12345 INCLUDED - 1st record in group C_M_ID=3
14 3 QRS 23456 INCLUDED - Both C_NAME of "QRS" and "23456" different than C_ID=13


Hopefully not confusing things even more, another example came to mind and I added a record 15 that wouldn't be included because the C_NAME isn't distinct in the group of C_M_ID=3. I wanted to show that the comparisons aren't just with the first record in the group.

C_ID C_M_ID C_NAME C_CODE EXPLANATION
---------- --------- ---------- ---------- -----------
1 1 ABC 12345 INCLUDED - 1st record in group C_M_ID=1
3 1 ABC 23456 NOT INCLUDED - C_NAME of "ABC" same as C_ID=1
4 1 ABC 34567 NOT INCLUDED - C_NAME of "ABC" same as C_ID=1
5 1 XYZ 12345 NOT INCLUDED - C_CODE of "12345" same as C_ID=1
6 1 XYZ 98765 INCLUDED - Both C_NAME of "XYZ" and "98765" different than CID=1
7 1 QRS 45678 INCLUDED - Both C_NAME of "QRS" and "45678" different than CID=1
9 1 WXY 12345 NOT INCLUDED - C_CODE of "12345" same as C_ID=1
10 2 ABC 12345 INCLUDED - 1st record in group C_M_ID=2
11 2 QRS 23456 INCLUDED - Both C_NAME of "QRS" and "23456" different than C_ID=10
13 3 ABC 12345 INCLUDED - 1st record in group C_M_ID=3
14 3 QRS 23456 INCLUDED - Both C_NAME of "QRS" and "23456" different than C_ID=13
15 3 QRS 89012 NOT INCLUDED - C_NAME of "QRS" same as C_ID=14

Tom Kyte
July 27, 2009 - 7:51 pm UTC

start over.

write everything as a requirement.


be so detailed and concise (in one place, don't make us page up, page down) that we can figure out what you mean. write it so your mother could write code from it.

don't give an example like that with an explanation of why or why not row X is in the result set - give us requirements - software requirements - to write/generate code from.

Query

David Piazza, July 29, 2009 - 3:30 pm UTC

I decided to write a small Perl script to achieve the results I wanted. Thanks for taking the time to help!
Tom Kyte
August 03, 2009 - 5:02 pm UTC


query

A reader, July 30, 2009 - 11:16 pm UTC

Tom:

Do you know why this query is giving me 9 records instead of 3? I want to rank the records in STAGES and just ensure that the book number exists in the STAGING table, but it is giving me 9 when I add that check.


SQL> select bkno,seq_no from stages where bkno=1000;

     BKNO     SEQ_NO
---------- ----------
     1000     171654
     1000     171636
     1000     171642


  1* select bkno,versioN_no from staging where bkno=1000

    BKNO VERSION_NO
---------- ----------
     1000          1
     1000          2
     1000          3

3 rows selected.

  1  select a.bkno, seq_no,dense_rank() over (partition by bkno order by seq_no desc) as rnk
  2      FROM stages a where bkno in (select distinct bkno from staging)
  3*    and a.bkno=1000
/

     BKNO      SEQ_NO        RNK
---------- ---------- ----------
     1000     171654          1
     1000     171654          1
     1000     171654          1
     1000     171642          2
     1000     171642          2
     1000     171642          2
     1000     171636          3
     1000     171636          3
     1000     171636          3

9 rows selected.





Tom Kyte
August 03, 2009 - 5:42 pm UTC

SMK, you have read this before:

no creates
no inserts
no look

Problem displaying zeroes

A Reader, July 31, 2009 - 10:58 am UTC

Hi Tom,

I need to get the values so that they retain one zero to the left of the decimal point and don't have any trailing zeroes to the right of it.



create table test_num(num number);

insert into test_num values(1);
insert into test_num values(0.7);
insert into test_num values(0.187);
insert into test_num values(1.2);

commit;
select * from test_num;


SQL> select * from test_num;

       NUM
----------
         1
        .7
      .187
       1.2
       
To get the zeroes as needed, I can query as below:

select decode(instr(num,'.'),1,'0'||num,num) from test_num;

DECODE(INSTR(NUM,'.'),1,'0'||NUM,NUM)
-----------------------------------------
1
0.7
0.187
1.2


However, I know this is not the correct way, and I wanted to know if I can use to_char or if there is a better way.
With to_char, I was not able to get exactly what I wanted;
to_char(num,'0.99999999') and other alternatives do not match exactly what I need.

Can you please help?

Thanks

Tom Kyte
August 04, 2009 - 12:23 pm UTC

you have what is called "not a number to store but a string". If you need leading/trailing zeroes to be "preserved", you do not have a number.

This format will work for your particular case, but not in general, you'd really have to know your number format to work in general - or use a string.


ops$tkyte%ORA10GR2> select rtrim(to_char( num, 'fm99999990.999' ),'.') from test_num;

RTRIM(TO_CHAR
-------------
1
0.7
0.187
1.2
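The same formatting rule (keep a leading zero, drop trailing zeros) can be sketched in Python to check expectations. The fixed 9-digit precision below is an assumption that mirrors the fixed scale of the 'fm99999990.999' mask; like Tom's format, it only works when you know your numbers' scale.

```python
def fmt(num):
    """Leading zero before the decimal point, no trailing zeros after it
    (the same effect as rtrim(to_char(num, 'fm99999990.999'), '.'))."""
    s = f"{num:.9f}".rstrip('0').rstrip('.')
    return s if s else '0'

formatted = [fmt(x) for x in (1, 0.7, 0.187, 1.2)]
```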


query

A reader, July 31, 2009 - 7:45 pm UTC

Tom:

Concerning the issue above where the dense_rank query returns a cartesian-product-like result: could it be due to a constraint or index between the tables?

select a.bkno, seq_no,dense_rank() over (partition by bkno order by seq_no desc) as rnk
FROM stages a where bkno in (select distinct bkno from staging)
and a.bkno=1000


When i clone each table and run the same query on the clone i get the correct result set which is 3 records.



Tom Kyte
August 04, 2009 - 12:33 pm UTC

SMK, you have read this before:

no creates
no inserts
no look

there is no cartesian join here, you have NO joins.

As with many of your other questions - you are not providing some salient bit of information - something is missing - and when and if you provide a full up reproducible test case - you'll find out what that is.


Lose that distinct in the subquery, it makes it look like you don't know how SQL works...

ops$tkyte%ORA10GR2> create table stages( bkno number, seq_no number );

Table created.

ops$tkyte%ORA10GR2> create table staging( bkno number, seq_no number );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into stages values (      1000, 171654 );

1 row created.

ops$tkyte%ORA10GR2> insert into stages values (      1000, 171636 );

1 row created.

ops$tkyte%ORA10GR2> insert into stages values (      1000, 171642 );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into staging values (     1000, 1 );

1 row created.

ops$tkyte%ORA10GR2> insert into staging values (     1000, 2 );

1 row created.

ops$tkyte%ORA10GR2> insert into staging values (     1000, 3 );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select a.bkno, seq_no,dense_rank() over (partition by bkno order by seq_no desc) as rnk
  2  FROM stages a where bkno in (select distinct bkno from staging)
  3  and a.bkno=1000
  4  /

      BKNO     SEQ_NO        RNK
---------- ---------- ----------
      1000     171654          1
      1000     171642          2
      1000     171636          3


make that reproduce your findings, when it does, you'll see what you are doing wrong.
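The key point in Tom's test case is that an IN subquery is a membership test (a semi-join), so it can never multiply rows. A minimal Python sketch of the filter plus the dense_rank semantics, using the same sample data:

```python
stages = [(1000, 171654), (1000, 171636), (1000, 171642)]
staging = [(1000, 1), (1000, 2), (1000, 3)]

bknos = {bkno for bkno, _ in staging}            # IN (...) is a membership test
filtered = [r for r in stages if r[0] in bknos]  # a semi-join never adds rows

# dense_rank() over (partition by bkno order by seq_no desc)
ranked = []
for bkno in {r[0] for r in filtered}:
    seqs = sorted({s for b, s in filtered if b == bkno}, reverse=True)
    rank = {s: i + 1 for i, s in enumerate(seqs)}
    ranked += [(b, s, rank[s]) for b, s in filtered if b == bkno]
```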

query

A reader, August 13, 2009 - 6:26 pm UTC

Tom:

But what do you see wrong in my query above? You have the same one with different results. I am using 9i and you are using 10g.

This has been puzzling to me. Would constraints or indexes between the tables make a difference?
Tom Kyte
August 24, 2009 - 7:17 am UTC

again, I'll write:

SMK, you have read this before:

no creates
no inserts
no look



You have made a mistake somewhere, we won't know what mistake you made unless and until you give us a way to reproduce

query

sam, September 20, 2009 - 8:34 pm UTC

Tom:

How do you explain one query running with two different stats and elapsed times in TEST and PROD, which are configured the same with the same data? It is taking 10-20 seconds in TEST and 1 second in PROD.

Do you see a better way to rewrite this too? Would you write a subquery against the main table or use a table join for the MAX function?

SELECT a.bkno,a.prod,b.end_date,d.author,a.qty,a.Qty_recvd,a.sched_date,c.media
FROM dist a, prod@plink b, mbook@plink c, title@plink d
WHERE a.bkno = b.bkno
AND a.bkmedia = b.bkmedia
AND a.bkno = c.bkno
AND a.bkmedia = c.bkmedia
AND c.chartno = d.chartno
AND a.reship is null
AND (a.qty > a.qty_recvd or a.qty_recvd is null)
AND a.sched_date = (select max(sched_date) from dist where bkno=a.bkno and orgcd='CA11')
AND b.cntr_no in (Select cntr_no from contracts@plink where prdr=a.prod)
AND b.end_date between (sysdate-90) and (sysdate-60)
AND b.prodstage in ('XY','PR','ZC')
AND a.orgcd='CA11'
order by bkno



Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=104 Card=2 Bytes=406
)

1 0 SORT (ORDER BY) (Cost=98 Card=2 Bytes=406)
2 1 FILTER
3 2 FILTER
4 3 NESTED LOOPS (SEMI) (Cost=96 Card=2 Bytes=406)
5 4 NESTED LOOPS (Cost=92 Card=2 Bytes=388)
6 5 NESTED LOOPS (Cost=90 Card=2 Bytes=246)
7 6 NESTED LOOPS (Cost=74 Card=8 Bytes=584)
8 7 TABLE ACCESS (BY INDEX ROWID) OF 'DIST'
(Cost=8 Card=33 Bytes=1320)

9 8 INDEX (RANGE SCAN) OF 'DIST_IDX
' (NON-UNIQUE) (Cost=2 Card=14529)

10 7 REMOTE* PLINK

11 6 REMOTE* PLINK

12 5 REMOTE* PLINK

13 4 REMOTE* PLINK

14 2 SORT (AGGREGATE)
15 14 TABLE ACCESS (BY INDEX ROWID) OF 'DIST' (Cost=3
Card=1 Bytes=18)

16 15 INDEX (RANGE SCAN) OF 'DIST_IDX' (NON-U
NIQUE) (Cost=2 Card=1)



10 SERIAL_FROM_REMOTE SELECT "BKNO","BKMEDIA","PRODSTAGE","CNTR_NO","
END_DATE" FROM "PROD" "B" WHERE :1="B

11 SERIAL_FROM_REMOTE SELECT "BKNO","BKMEDIA","CHARTNO","MEDIA" LEN" FROM "MBOOKS" "C" WHERE :1="BKSE

12 SERIAL_FROM_REMOTE SELECT "CHARTNO","AUTH" FROM "TITLE
"D" WHERE :1="CHARTNO"

13 SERIAL_FROM_REMOTE SELECT "CNTR_NO","PRDR" FROM "CONTRACTS"
WHERE :1="CNTR_NO" AND "PRDR"=:2

Stats in TEST

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
18789 consistent gets
11933 physical reads
0 redo size
1331 bytes sent via SQL*Net to client
275 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed

Stats in PROD


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3530 consistent gets
0 physical reads
0 redo size
1727 bytes sent via SQL*Net to client
282 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
16 rows processed

Tom Kyte
September 28, 2009 - 12:48 pm UTC

it is obvious

the plans are different

because the data is different

because the stats are different.


they are totally different, I don't even see how you think you could compare them?

query

A reader, October 07, 2009 - 9:41 am UTC

Tom:

I have a query that derives some stats and displays results in a table as below.
Is it possible to use analytics or SQL to add another derived row that is based on the values in the previous two rows? I want to compute a "% rejected" row after the "R" (rejection) stats row, which is the rejected count for that stage divided by the total approved and rejected for that month.

For the first column below it would be (34 / (430 + 34)) * 100%.




SELECT Stage, Status
, count(*) total
, sum(case when to_char (a.compdt, 'YYYYMM') = '200801' then 1 else 0 end) "JAN 2008"
, sum(case when to_char (a.compdt, 'YYYYMM') = '200802' then 1 else 0 end) "FEB 2008"
, sum(case when to_char (a.compdt, 'YYYYMM') = '200803' then 1 else 0 end) "MAR 2008"
, sum(case when to_char (a.compdt, 'YYYYMM') = '200804' then 1 else 0 end) "APR 2008"
--- etc etc etc ---
from qprod a
join cntr b
on a.cntr_no = b.cntr_no
where a.vendor_id = 'ABC'
AND a.stage = 0
AND a.status = 'A'
AND trunc(a.compdt) >= '01-JAN-2008'
and trunc(a.compdt) <= '31-DEC-2009'
group by Stage, Status





Q Q TOTAL JUL 2008 AUG 2008 SEP 2008 OCT 2008 NOV 2008 DEC 2008 JAN 2009 FEB 2009
- - ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- --------
0 A 430 0 0 27 41 31 32 57 37
0 R 34 0 0 0 1 2 9 3 0
1 A 534 34 39 53 36 23 29 57 66
1 R 17 1 2 2 3 0 0 2 0
2 A 427 60 33 25 53 27 26 26 22
2 R 12 3 1 1 0 1 1 1 1
3 A 224 60 33 24 44 63 0 0 0
3 R 2 2 0 0 0 0 0 0 0
4 A 189 38 49 32 38 26 1 0 0
4 R 7 1 1 4 0 1 0 0 0
5 A 2117 2 5 4 1 0 1789 55 67

Q Q TOTAL JUL 2008 AUG 2008 SEP 2008 OCT 2008 NOV 2008 DEC 2008 JAN 2009 FEB 2009
- - ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- --------
5 R 9 0 0 0 0 0 0 1 1


CNTR
-----
cntr_no varchar2(10) PK,
vendor_id varchar2(10)

QPROD
-----
bkno number(10),
cntr_no varchar2(10),
stage Number(1), (0....6)
status varchar2(1), (A,R)
compdt date
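Tom did not answer this follow-up in the thread, but the arithmetic the poster describes -- rejected divided by (approved + rejected), per column -- is straightforward. A minimal Python sketch using the question's first-column numbers (430 approved, 34 rejected); the rounding to one decimal place and the None guard for empty columns are assumptions, not from the thread.

```python
def pct_rejected(approved, rejected):
    """Rejection rate for one column: R / (A + R) * 100.
    Returns None when there is no activity at all (avoids divide by zero)."""
    total = approved + rejected
    return round(100 * rejected / total, 1) if total else None

first_col = pct_rejected(430, 34)   # the stage-0 TOTAL column
```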

GROUP BY clause

reader, October 21, 2009 - 10:48 am UTC

Hi Tom,

I have a SQL query question for you. I noticed the following piece of code in our application that is time consuming:

desc x

Name Null? Type
------------------------ -------- -------
a NOT NULL NUMBER(15)
b NOT NULL VARCHAR2(40)
c NOT NULL VARCHAR2(50)
d NOT NULL NUMBER(3)
e VARCHAR2(120)
f NUMBER(11)
g NUMBER(25)

unique index on x(a,b,c,d)

insert into x(a,b,c,d,e,f)
select d1, d2, d3, d4, min(d5), max(d6)
from
(select d1, d2, d3, d4, min(d5), max(d6)
from a
where dt = :date_field
and id = :id
and d2 is not null
and d3 is not null
group by d1, d2, d3, d4
union
select d1, d2, d8, d9, min(d5), max(d6)
from a
where dt = :date_field
and id = :id
and d2 is not null
and d3 is not null
group by d1, d2, d8, d9
)
group by d1, d2, d3, d4

It has three GROUP BYs - I personally think the outer one is not needed, as the UNION takes care of it. But is there any clever way of getting the same output using analytic functions?

GROUP BY clause

reader, October 21, 2009 - 10:52 am UTC

Sorry, here's the query again ..corrected the typo

insert into x(a,b,c,d,e,f)
select d1, d2, d3, d4, min(d5), max(d6)
from
(select d1, d2, d3, d4, min(d5), max(d6)
from a
where dt = :date_field
and id = :id
and d2 is not null
and d3 is not null
group by d1, d2, d3, d4
union
select d1, d2, d8, d9, min(d5), max(d6)
from a
where dt = :date_field
and id = :id
and d8 is not null
and d9 is not null
group by d1, d2, d8, d9
)
group by d1, d2, d3, d4

Tom Kyte
October 23, 2009 - 12:23 pm UTC

well, it could be:

insert into x(a,b,c,d,e,f)
select d1, d2, decode(r,1,d3,d8) d3, decode(r,1,d4,d9) d4, min(d5), max(d6)
  from a, (select 1 r from dual union all select 2 r from dual) X
 where dt = :date_field
   and id = :id
   and ((d2 is not null and d3 is not null and r=1)
         or
         (d8 is not null and d9 is not null and r=2))
 group by d1, d2, decode(r,1,d3,d8), decode(r,1,d4,d9)
/


the cartesian join of AxX will return two rows for every row in A where dt=:d and id=:i, then it'll keep the first of the two if d2/d3 aren't null and it'll keep the second row if d8/d9 are not null.

Then it'll group by and output the result.
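Tom's rewrite works by cross-joining each row with the two-row set {1, 2}, keeping copy 1 when d3/d4 are present and copy 2 when d8/d9 are, then grouping. A minimal Python sketch of that expand-filter-group shape; the rows, column positions, and values here are made up for illustration.

```python
# Hypothetical sample rows: (d1, d2, d3, d4, d5, d6, d8, d9)
rows = [
    (1, 'x', 'a', 'b', 5, 9, None, None),
    (1, 'x', None, None, 3, 7, 'a', 'b'),
    (2, 'y', 'c', 'd', 4, 8, 'e', 'f'),
]

# Cartesian join with {1, 2} doubles every row ...
expanded = [(r, n) for r in rows for n in (1, 2)]
# ... then the OR predicate keeps copy 1 or copy 2 (or both, or neither).
kept = [(r, n) for r, n in expanded
        if (n == 1 and r[2] is not None and r[3] is not None)
        or (n == 2 and r[6] is not None and r[7] is not None)]

# group by d1, d2, decode(r,1,d3,d8), decode(r,1,d4,d9)
groups = {}
for r, n in kept:
    key = (r[0], r[1], r[2] if n == 1 else r[6], r[3] if n == 1 else r[7])
    d5s, d6s = groups.setdefault(key, ([], []))
    d5s.append(r[4])
    d6s.append(r[5])
result = {k: (min(d5s), max(d6s)) for k, (d5s, d6s) in groups.items()}
```

Note how the first two rows land in the same group, (1, 'x', 'a', 'b'), once via copy 1 and once via copy 2, so min(d5)/max(d6) aggregate across both branches of the original UNION.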

GROUP BY clause

reader, October 23, 2009 - 4:07 pm UTC

You know what - that is bloody ingenious! I would never have thought of that in a million years! Mind you, they do say ..good dba's are bad developers!

Thanks so much Tom! That piece of code has made my day! :-)

OK

Rama, January 02, 2010 - 12:39 pm UTC

Hi Tom,
I need your help in writing a query which should fetch the first non-null
value (amount) from a table after performing a lookup in the lookup table
based on disease sub code.
The disease sub code repeats for a disease code.

If I get the first non-null value for a particular disease sub code (dsesub_cd) for a
particular health network (say 1), I need to insert that record into another
table and repeat the same process for the other health networks, namely 2 and 3.

Dsesub_order must be followed in the same order while doing the lookups,
as mentioned in the order of the inserts.

So for each disease sub code I must get only 3 records to insert into another table.

CREATE TABLE statements and sample inserts are provided here.

Please let me know how to achieve this.


create table dse_lookup(dse_cd number,dse_desc varchar2(60),
dsesub_cd number,dsesub_order number)
/
insert into dse_lookup values(12,'Malignant tumour',1000,20)
/
insert into dse_lookup values(12,'Malignant tumour',1005,40)
/
insert into dse_lookup values(12,'Malignant tumour',1011,60)
/
insert into dse_lookup values(12,'Malignant tumour',1006,80)
/
insert into dse_lookup values(12,'Malignant tumour',1012,100)
/
commit
/
insert into dse_lookup values(30,'Arthritis',2574,20)
/
insert into dse_lookup values(30,'Arthritis',2509,40)
/
insert into dse_lookup values(30,'Arthritis',8546,60)
/
insert into dse_lookup values(30,'Arthritis',2500,80)
/
insert into dse_lookup values(30,'Arthritis',8530,100)
/

create table dse_amt(health_plan varchar2(10),health_network char(1),
dsesub_cd number,amt number)
/
insert into dse_amt values('102','1',1000,50)
/
insert into dse_amt values('102','3',1000,50)
/
insert into dse_amt values('102','3',1000,50)
/
insert into dse_amt values('102','2',1000,25)
/
insert into dse_amt values('102','3',1000,15)
/
insert into dse_amt values('102','3',1000,1250)
/
insert into dse_amt values('102','2',1000,50)
/
insert into dse_amt values('102','3',1000,50)
/
insert into dse_amt values('102','3',1000,50)
/
insert into dse_amt values('102','2',1000,50)
/
insert into dse_amt values('102','2',1000,50)
/
insert into dse_amt values('102','3',2509,50)
/
insert into dse_amt values('102','2',2509,50)
/
insert into dse_amt values('102','2',2509,50)
/
insert into dse_amt values('102','3',2509,50)
/
insert into dse_amt values('102','3',2509,50)
/
insert into dse_amt values('102','3',2509,1250)
/
insert into dse_amt values('102','2',2509,25)
/
insert into dse_amt values('102','3',2509,50)
/
insert into dse_amt values('102','3',2509,50)
/
insert into dse_amt values('102','2',2509,50)
/
insert into dse_amt values('102','1',2509,50)
/
commit
/
Tom Kyte
January 04, 2010 - 11:10 am UTC

funny, is it just me or are there no nulls in the data? If your PRIMARY goal was to get the "first not null" value - why wouldn't your test case be LITTERED with nulls??????? Where are the nulls?


the rest of your specification is not clear, it almost sounds like you want to use a lookup record at most three times? Be a lot more specific please.

you seem to be missing things like instructions on how to join (no primary/foreign keys??)

SQL required

Sven Ollaffson, January 26, 2010 - 5:27 am UTC

Given a set of ascending numbers in a string, for example "1,2,5", I have the task of splitting it into a left subset and a right subset with the following rules.

1. Every left subset (L) is a non-empty proper subset of the original set. Note that L must also keep its members in ascending order.
2. Every right subset (R) is just "original set MINUS L", where MINUS is the set operation.

So for example.
1,2,5 should give me rows as follows

LEFT  RIGHT
----- -----
1     2,5
2     1,5
5     1,2
1,2   5
1,5   2
2,5   1

Is it possible to do this in the single SQL? Maybe using a connect by recursive clause or the new WITH clause? Up for suggestions.

Thanks,
Sven

SQL Query

Sven Ollofson, January 28, 2010 - 4:34 am UTC

I was wondering if my previous query can be resolved with a single SQL or does it need a PL/SQL function to do this?

Thanks,
Sven
Tom Kyte
January 29, 2010 - 3:34 pm UTC

I didn't see an immediate way to do it.

SQL Query

Sven Ollafson, January 30, 2010 - 12:56 am UTC

Me neither :) But do you have any suggestions for the PL/SQL function that I could write for it?
I am wondering how I can get BOTH L and R subsets showing up in the same row returned from the SQL. I think using a connect by or the new WITH clause should help retrieve the combinations I am after, but then there has to be some sort of a set minus operation in Oracle that I should be able to perform. Let me know if you have any nice ideas... I am all ears.

Thanks,
Sven
Tom Kyte
February 01, 2010 - 9:39 am UTC

only with a three table almost cartesian join.

and it would be a four table join with four numbers

and so on.


connect by and with are about hierarchies/recursion - they are not the solution to every problem and I don't see a way here with a minute's thought. sorry.

SQL Query

Sven Ollafson, January 30, 2010 - 1:02 am UTC

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5487498576505#49540907042508

This was the post I found quite handy for generating the combinations I was looking for. Just wondering how I can also generate the RIGHT hand column for each of these combination rows. Will think of something.

@Sven re: SQL query

Stew Ashton, January 30, 2010 - 3:28 am UTC


Sven,

If you have "n" elements in your set, you want power(2, "n") - 2 lines. Use the line number as a bitmap to determine which elements go left and which go right. The following code does that but I don't string up the left and right elements because I don't have access to 11GR2. You can use LISTAGG to finish up. BITAND will not work with too big a number, so for large sets you may have to work around that.
with elements as (
select 1 one from dual
union all select 2 one from dual
union all select 5 one from dual
), bitmap as (
select level bitmap from dual
connect by level < power(2, (select count(*) from elements)) - 1
), bits as (
select one, bitmap, bitand (bitmap
, power(2, row_number() over(partition by bitmap order by one) - 1)) bit
from elements, bitmap
), left_right as (
select bitmap line, decode(bit, 0, null, one) left, decode(bit, 0, one, null) right
from bits
)
select * from left_right
/

LINE LEFT RIGHT
1 1 -
1 -  2
1 -  5
2 -  1
2 2 -
2 -  5
3 1 -
3 2 -
3 -  5
4 -  1
4 -  2
4 5 -
5 1 -
5 -  2
5 5 -
6 -  1
6 2 -
6 5 -

SQL Query

Sven Ollafson, January 30, 2010 - 7:54 am UTC

Thanks for that, but I am not sure I understand what is going on there. Can you maybe put in some explanation of what you are doing with the BITANDs etc.?

@Sven re: bitmaps

Stew Ashton, January 30, 2010 - 11:38 am UTC


Sven, I can tell you were never an assembler or C programmer; we ate bitmaps for lunch :) I will try to explain.

A bitmap is just a binary number. In our case, the first line is 1 and the last one is 6. In binary, those numbers are
0 0 1 (equivalent to 0 + 0 + 1)
1 1 0 (equivalent to 4 + 2 + 0)
Each column is a "bit" that contains either zero or one. The idea is if a bit is one, the element goes on your left; if the bit is zero, the element goes on your right. So for line 6:
5 2 1 list of elements
1 1 0 BITMAP (decimal 6)
5 2 - LEFT
- - 1 RIGHT
For each element, how do you know whether its "bit" is one or zero? That's where BITAND comes in. BITAND compares bits in two bitmaps and returns a new bitmap. If a bit is one in the new bitmap, that bit was one in both the old bitmaps.
BITAND(0,0) is 0
BITAND(0,1) is 0
BITAND(1,0) is 0
BITAND(1,1) is 1
Now, how do we create the two bitmaps we want to compare? One is the line number, 6 in our example. The other has one in the bit that corresponds to an element; all the other bits are zero. Let's look at our elements again:
5 2 1 ELEMENT
3 2 1 Position (using row_number())
1 1 0 BITMAP (decimal 6)
1 0 0 BIT for position 3 (decimal 4)
0 1 0 BIT for position 2 (decimal 2)
0 0 1 BIT for position 1 (decimal 1)
Let's zoom in on 5, which is your third element. Its "bit" is the third one from the right, which is decimal 4. You want to know if that bit is one, so do BITAND(6,4), which will return either 4 or 0.

Now check this out: 4 is POWER(2,2) or POWER(2,3-1), and 3 is the position of your element. So based on position, you calculate your second bitmap and compare to the first, which is the line number. Let's show this again for element 5 on line 6:
5 ELEMENT
3 POSITION (= row_number() over(partition by bitmap order by one))
6 BITMAP (line number)
4 BIT (= power(2, POSITION - 1))
4 RESULT
-> result is non-zero, so 5 goes on the left
Hope this helps.
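Stew's bitmap walkthrough above can also be sketched outside the database. The following is a minimal Python illustration (not from the thread) of the same technique: treat each line number from 1 to 2**n - 2 as a bitmap and use bitwise AND (playing the role of Oracle's BITAND) to route each element left or right.

```python
def split_left_right(elements):
    """Enumerate every (left, right) split of a set, Stew's bitmap way.

    Line numbers 1 .. 2**n - 2 act as bitmaps: if bit i of the line
    number is set, elements[i] goes left, otherwise right.  The Python
    & operator stands in for Oracle's BITAND.
    """
    n = len(elements)
    splits = []
    for bitmap in range(1, 2 ** n - 1):
        left = [e for i, e in enumerate(elements) if bitmap & (1 << i)]
        right = [e for i, e in enumerate(elements) if not bitmap & (1 << i)]
        splits.append((left, right))
    return splits

pairs = split_left_right([1, 2, 5])
# 2**3 - 2 = 6 splits; bitmap 6 (binary 110) sends 2 and 5 left, 1 right
```

Because the range stops before 2**n - 1 and starts at 1, the all-left and all-right bitmaps are excluded, so both sides are always non-empty, exactly as Sven's expected output shows.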

Query

A reader, February 10, 2010 - 6:19 am UTC

Hi Tom,

I have the below query and its output.
SELECT
country,
pegged_currency,
SUM(eq_delta) AS eq_delta,
SUM(eq_gamma) AS eq_gamma,
SUM(eq_pl) AS eq_pl,
SUM(eq) AS eq
FROM ( SELECT rpt.country,
c.currency_peg AS pegged_currency ,
SUM(DECODE(rpt.agg_code,'EQ_CASH_DELTA',pl_on+pl_off,0)) AS eq_delta,
SUM(DECODE(rpt.agg_code,'EQ_CASH_GAMMA',pl_on+pl_off,0)) AS eq_gamma,
SUM(DECODE(rpt.agg_code,'EQ_PRE_PL',pl_on+pl_off,0)) AS eq_pl,
SUM(CASE WHEN rpt.agg_code IN('EQ_CASH_GAMMA','EQ_CASH_DELTA','EQ_PRE_PL') THEN pl_on+pl_off ELSE 0 END) AS eq
FROM ers_det_rpt rpt,country c,
ers_cube_head ec
WHERE rpt.rep_id = ec.det_rep_id
AND ec.pkg_id = 8868899
AND ec.pkg_mkd_id = 5107499
AND rpt.country = c.rpl_name
AND TRUNC(ec.created_date) = '31-DEC-2009'
GROUP BY rpt.country,c.currency_peg)
GROUP BY country, pegged_currency

Output
------

COUNTRY PEGGED_CURRENCY EQ_DELTA EQ_GAMMA EQ_PL EQ
RUSSIA N -23746871.85 21953998.86 -6625082.882 -8417955.876
ALGERIA 0 0 0 0
NIGERIA N 260535.9546 0 0 260535.9546
BOTSWANA N 0 0 0 0
CZECH REPUBLIC -604376.304 -29645.72603 -47423.89626 -681445.9263
HONG KONG N 78251983.6 8089020.972 -3759389.637 82581614.94
INDIA -41516798.45 -15306643.66 -36957571.89 -93781013.99

The calculation for eq is EQ_CASH_DELTA + EQ_CASH_GAMMA + EQ_PRE_PL; the report requirement is to add all of these into a single eq value.
But these agg_codes are dynamic: after some time there are new agg_code values in the ers_det_rpt table, and we also have the pl_on and pl_off values for each newly entered agg_code.
So the output would look like this:

want like this
--------------

COUNTRY PEGGED_CURRENCY EQ_DELTA EQ_GAMMA EQ_PL EQ New_agg_code1 New_agg_code2
RUSSIA N -23746871.85 21953998.86 -6625082.882 -8417955.876 Values Values1
ALGERIA 0 0 0 0 Values Values1
NIGERIA N 260535.9546 0 0 260535.9546 Values Values1
BOTSWANA N 0 0 0 0 Values Values1
CZECH REPUBLIC -604376.304 -29645.72603 -47423.89626 -681445.9263 Values Values1
HONG KONG N 78251983.6 8089020.972 -3759389.637 82581614.94 Values Values1
INDIA -41516798.45 -15306643.66 -36957571.89 -93781013.99 Values Values1

For each new agg_code we have pl_on and pl_off values in the ers_det_rpt table,
so the calculation for a new agg_code will be:

rpt.agg_code IN(NEW_AGG_CODE) THEN pl_on+pl_off ELSE 0 END) AS NEW_AGG_CODE

How can we transform the above query so that when a new agg_code comes in, it is calculated automatically?



Tom Kyte
February 15, 2010 - 3:32 pm UTC

... how can we transform the above query so that when a new agg_code
comes in, it is calculated automatically?
...


I don't know because

a) I see no create tables
b) I see no insert intos
c) no output that is readable/formatted
d) no specification, I see a "picture" of some output data - I see something:

rpt.agg_code IN(NEW_AGG_CODE) THEN pl_on+pl_off ELSE 0 END) AS NEW_AGG_CODE

that means nothing to me, but not much else.
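For what the reader seems to be after, the usual workaround (a hedged sketch, not from Tom's answer, with invented table and column names, using Python with SQLite standing in for Oracle) is this: a query's column list is fixed at parse time, so a dynamic set of agg_codes means first asking the data which codes exist and then generating the pivot query text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rpt(country TEXT, agg_code TEXT, pl_on REAL, pl_off REAL);
INSERT INTO rpt VALUES
  ('RUSSIA', 'EQ_CASH_DELTA', 10, 5),
  ('RUSSIA', 'EQ_NEW_CODE',    1, 2),
  ('INDIA',  'EQ_CASH_DELTA',  7, 3);
""")

# Step 1: discover the agg_codes currently present.
codes = [r[0] for r in conn.execute(
    "SELECT DISTINCT agg_code FROM rpt ORDER BY agg_code")]

# Step 2: generate one SUM(CASE ...) pivot column per code.
# Real code should whitelist/quote the code values before interpolating.
cols = ", ".join(
    f"SUM(CASE WHEN agg_code = '{c}' THEN pl_on + pl_off ELSE 0 END) AS {c}"
    for c in codes)
sql = f"SELECT country, {cols} FROM rpt GROUP BY country ORDER BY country"
rows = conn.execute(sql).fetchall()
# e.g. RUSSIA gets EQ_CASH_DELTA = 15 and EQ_NEW_CODE = 3
```

The same two-step shape (query the distinct codes, build the statement, execute it) is what a dynamic-SQL solution in PL/SQL would do; a fixed-column query alone cannot grow new columns by itself.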


Alexander the ok, February 17, 2010 - 9:17 am UTC

Tom,

I need help with a large query. What's the limit for lines of code you'll look at? It's pretty big.
Tom Kyte
February 17, 2010 - 10:59 am UTC

I cannot really take a really long query - I don't like to reverse engineer others' SQL when the SQL is many times wrong or comes with lots of implied assumptions that are not 'true'.

if you can phrase it succinctly with a few tables and a specification - we can take a look maybe.

Alexander the ok, February 17, 2010 - 11:05 am UTC

I basically want to rewrite it to not use scalar subqueries and function calls. I just want a logical equivalent, but when I tried, the results came out totally different. The formatting is bad too, so it's stretched out over 400 lines (it would be a lot less if I had a tool to format it nicely).

If you'd be willing to just tweak those areas I am interested in, I'll dance at your wedding. I would of course pick up where you left off, have it tested, etc. I don't even care if your version compiles or not (I know you care... but I understand the risk).
Tom Kyte
February 17, 2010 - 11:48 am UTC

just give me an idea of it here... for example:

select a, b, c, 
       (select x from t2 where t2.key = Z1.key),
       (select y from t2 where t2.key = Z1.key),
       (select m from t3 where t3.something = Z2.something),
       ....
  from z1, z2, ... zn
 where (join conditions)


would become:

select a, b, c, t2.x, t2.y, t3.m
  from z1, z2, ... zn, t2, t3
 where (join conditions from above)
   and t2.key(+) = z1.key 
   and t3.something(+) = z2.something ...



just outer join to the tables (if the outer join is necessary - it might not be, you decide).
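Tom's rewrite above can be demonstrated end to end. Here is a small self-contained sketch (Python with SQLite standing in for Oracle; table and column names invented) showing that the scalar-subquery form and the outer-join form return identical rows, with the outer join keeping rows whose code has no match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lkp(table_name TEXT, field_name TEXT, code_value INT, code_desc TEXT);
INSERT INTO lkp VALUES ('CM3RM1', 'SEVERITY', 1, 'High'),
                       ('CM3RM1', 'SEVERITY', 2, 'Low');
CREATE TABLE chg(id INT, severity INT);
INSERT INTO chg VALUES (10, 1), (11, 2), (12, NULL);
""")

# Scalar-subquery form: one lookup in the SELECT list per row.
scalar = conn.execute("""
    SELECT id,
           (SELECT code_desc FROM lkp
             WHERE table_name = 'CM3RM1'
               AND field_name = 'SEVERITY'
               AND code_value = chg.severity) AS severity_desc
      FROM chg ORDER BY id
""").fetchall()

# Tom's rewrite: outer join to the lookup table instead.  The outer
# join preserves rows (like id 12) with no matching lookup entry.
joined = conn.execute("""
    SELECT chg.id, lkp.code_desc AS severity_desc
      FROM chg
      LEFT JOIN lkp
        ON lkp.table_name = 'CM3RM1'
       AND lkp.field_name = 'SEVERITY'
       AND lkp.code_value = chg.severity
     ORDER BY chg.id
""").fetchall()
# both produce [(10, 'High'), (11, 'Low'), (12, None)]
```

One caveat on the equivalence: it holds when the lookup key is unique. A scalar subquery errors out on duplicate matches, while a join would multiply rows, so the lookup table needs at most one row per key either way.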



Alexander the ok, February 17, 2010 - 12:07 pm UTC

Thanks, I did have one example I was using. I will continue to work on it. In the meantime I trimmed up the statement to reflect what I am trying to achieve. There were some huge case statements that I'm not concerned about.

What I was stumped on is the scalar subqueries that select from the same column/table, but with different criteria. I couldn't figure out a way to rewrite those (lines 56-66).

1  SELECT   NUMBERPRGN,
  2                  PLANNED_START,
  3                  PLANNED_END,
  4                  DOWN_START,
  5                  DOWN_END,
  6                  SUBCATEGORY,
  7                  CURRENT_PHASE,
  8                  GROUPPRGN,
  9                  DESCRIPTION,
 10                  LM_IMPACT_HISTORY,
 11                  LM_CI_TYPE,
 12                  LOCATION_CODE,
 13                  LOCATION_NAME,
 14                  CI_DOWN,
 15                  INITIAL_IMPACT,
 16                  SEVERITY,
 17                  EMERGENCY,
 18                  LM_ENVIRONMENT,
 19                  STATUS,
 20                  APPROVAL_STATUS,
 21                  RISK_ASSESSMENT,
 22                  LM_MGR_FULLNAME,
 23                  LM_COORD_FULLNAME,
 24                  LM_CI_SUBTYPE,
 25                  CURRENT_PENDING_GROUPS,
 26                  BRIEF_DESCRIPTION,
 27                  NAME,
 28                  CATEGORY,
 29                  COMPLETION_CODE,
 30                  ORIG_DATE_ENTERED,
 31                  ASSOCIATED_CI,
 32                  RELATED_CHANGE_RELEASE,
 33                  PRIORITY,
 34                  TEST_ENV,
 35                  REF_NUMBER,
 36             LM_CC_ARRAY
 37           FROM   (SELECT   AL1.NUMBERPRGN,
 38                            AL1.PLANNED_START,
 39                            AL1.PLANNED_END,
 40                            AL1.DOWN_START,
 41                            AL1.DOWN_END,
 42                            AL1.SUBCATEGORY,
 43                            AL1.CURRENT_PHASE,
 44                            AL1.GROUPPRGN,
 45                            AL1.DESCRIPTION,
 46                            CASE
 47                               WHEN AL1.CATEGORY = 'Release' THEN AL1.JUSTIFICATION
 48                               ELSE AL1.LM_IMPACT_HISTORY
 49                            END
 50                               AS LM_IMPACT_HISTORY,
 51                            AL1.LM_CI_TYPE,
 52                            AL1.LOCATION_CODE,
 53                            AL2.LOCATION_NAME,
 54                            CASE WHEN AL1.CI_DOWN = 't' THEN 'Yes' ELSE 'No' END
 55                               CI_DOWN,
 56                            (SELECT   CODE_DESC
 57                               FROM   SC_COMMON_CODE_LKP_T LKP
 58                              WHERE       TABLE_NAME = 'CM3RM1'
 59                                      AND FIELD_NAME = 'INITIAL_IMPACT'
 60                                      AND CODE_VALUE = AL1.INITIAL_IMPACT)
 61                               INITIAL_IMPACT,
 62                            (SELECT   CODE_DESC
 63                               FROM   SC_COMMON_CODE_LKP_T LKP
 64                              WHERE       TABLE_NAME = 'CM3RM1'
 65                                      AND FIELD_NAME = 'SEVERITY'
 66                                      AND CODE_VALUE = AL1.SEVERITY)
 67                               SEVERITY,
 68                            CASE
 69                               WHEN AL1.LM_EMERGENCY = 't' THEN 'Yes'
 70                               ELSE 'No'
 71                            END
 72                               EMERGENCY,
 73                            AL1.LM_ENVIRONMENT,
 74                            AL1.STATUS,
 75                            AL1.APPROVAL_STATUS,
 76                            (SELECT   DISTINCT CD.DESCRIPTION
 77                               FROM   SERVICE.LMCODETABLEM1 CD
 78                              WHERE       CD.TABLEPRGN = 'cm3r'
 79                                      AND CD.CODETYPE = 'risk.assessment'
 80                                      AND CD.CODE = AL1.RISK_ASSESSMENT)
 81                               RISK_ASSESSMENT,
 82                            AL1.LM_MGR_FULLNAME,
 83                            (AL6.LAST_NAME || ', ' || AL6.FIRST_NAME)
 84                               LM_COORD_FULLNAME,
 85                            AL1.LM_CI_SUBTYPE,
 86                            AL3.CURRENT_PENDING_GROUPS,
 87                            AL1.BRIEF_DESCRIPTION,
 88                            AL3.NAME,
 89                            AL1.CATEGORY,
 90                            (SELECT   CODE_DESC
 91                               FROM   SC_COMMON_CODE_LKP_T LKP
 92                              WHERE       TABLE_NAME = 'CM3RM1'
 93                                      AND FIELD_NAME = 'COMPLETION_CODE'
 94                                      AND CODE_VALUE = AL1.COMPLETION_CODE)
 95                               COMPLETION_CODE,
 96                            AL1.ORIG_DATE_ENTERED,
 97                            SC_GET_ASSOCIATED_CI_FUNC (AL1.NUMBERPRGN, 2)
 98                               ASSOCIATED_CI,
 99                            SC_GET_RELATED_TICKETS_FUNC (AL1.NUMBERPRGN, 'cm3r', 1)
100                               RELATED_CHANGE_RELEASE,
101                            (SELECT   CODE_DESC
102                               FROM   SC_COMMON_CODE_LKP_T LKP
103                              WHERE       TABLE_NAME = 'CM3RM1'
104                                      AND FIELD_NAME = 'PRIORITY_CODE'
105                                      AND CODE_VALUE = AL1.PRIORITY_CODE)
106                               PRIORITY,
107                            AL1.LM_TEST_ENV TEST_ENV,
108                            AL1.REF_NUMBER,
109                                               LM_CC_ARRAY,
110                            COUNT( * )
111                               OVER (
112                                  PARTITION BY AL1.NUMBERPRGN,
113                                               AL3.CURRENT_PENDING_GROUPS
114                               )
115                               cnt,
116                            ROW_NUMBER ()
117                               OVER (
118                                  PARTITION BY AL1.NUMBERPRGN,
119                                               AL3.CURRENT_PENDING_GROUPS
120                                  ORDER BY AL4.RECORD_NUMBER
121                               )
122                               seq
123                     FROM   SERVICE.CM3RM1 AL1,
124                            SERVICE.LOCATIONM1 AL2,
125                            SERVICE.APPROVALA2 AL3,
126                            SERVICE.CM3RA2 AL4,
127                            SERVICE.DEVICEM1 AL5,
128                            SERVICE.CONTACTSM1 AL6
129                    WHERE   (AL1.LOCATION_CODE = AL2.LOCATION(+)
130                             AND AL1.NUMBERPRGN = AL3.UNIQUE_KEY(+))
131                            AND AL1.NUMBERPRGN = AL4.NUMBERPRGN(+)
132                            AND AL4.ASSETS = AL5.LOGICAL_NAME(+)
133                            AND AL1.COORDINATOR = AL6.CONTACT_NAME)
134          WHERE   seq = cnt
135     START WITH   seq = 1
136     CONNECT BY   PRIOR seq + 1 = seq AND PRIOR NUMBERPRGN = NUMBERPRGN
137                  AND PRIOR NVL (CURRENT_PENDING_GROUPS, '1') =
138*                       NVL (CURRENT_PENDING_GROUPS, '1')
SQL>

Tom Kyte
February 18, 2010 - 7:51 am UTC

you would use two outer joins (outer if and only if NECESSARY). In your case, your developers have used that "awesome" (I use quotes for sarcasm) single end all, be all code lookup table that is so flexible.

In real life, it would have been two tables, so just pretend it is.

What's the problem?

JerryQ, February 18, 2010 - 4:11 am UTC

Hi Alexander
What do you think the problem is here? Looks like the SQL in lines 56-66 is accessing a lookup table for 2 types of values - this would be a fairly standard way of implementing it. You could also use an outer join (or an inner join if those columns are not null and the value is always in the lookup table), but you would still have to go to the lookup table twice - once for each type of value you need. Once your lookup table has the correct indexes, there shouldn't be an issue.

You could also implement this lookup as a package/function call - e.g.
select ...,
...,
LkUp_pkg.getVal('CM3RM1','INITIAL_IMPACT',AL1.INITIAL_IMPACT)
...

Then, in the LkUp_pkg package, you can cache the values in a persistent array - this may give you some performance improvement - but then again, the switching from SQL to pl/sql may use up the gain. Easy enough to try though.


Tom Kyte
February 18, 2010 - 9:42 am UTC

... this would be a fairly
standard way of implementing it. ..

standard but bad.



... Then, in the LkUp_pkg package, you can cache the values in a persistent array -
this may give you some performance improvement - but then again ...

we already do that with scalar subqueries for you - but NOT for function calls. Do not replace scalar subqueries with hand written plsql - go the other way (replace hand written plsql with scalar subqueries), but not this way.

Alexander the ok, February 18, 2010 - 8:55 am UTC

Thanks Tom & Jerry ;)

After a tkprof, I think the issue might be more related to the function calls as opposed to the scalar subqueries. Any thoughts Tom?

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        6      0.36       0.35          0          0          0           0
Execute      7      0.01       0.01          0        117         67          44
Fetch    53266    138.79     149.70     682163     417960     768412       53334
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53279    139.18     150.07     682163     418077     768479       53378

Misses in library cache during parse: 5
Misses in library cache during execute: 3


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       47      0.04       0.05          0          1         18           0
Execute 489919      4.76       4.18          6        162        117          44
Fetch   504433     18.39      17.74         41    6829875         40     3106783
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   994399     23.20      21.98         47    6830038        175     3106827

Misses in library cache during parse: 28
Misses in library cache during execute: 28

   17  user  SQL statements in session.
  156  internal SQL statements in session.
  173  SQL statements in session.
********************************************************************************
Trace file: ocp25t_ora_16729.trc
Trace file compatibility: 10.01.00
Sort options: default

       4  sessions in tracefile.
      41  user  SQL statements in trace file.
     312  internal SQL statements in trace file.
     173  SQL statements in trace file.
      37  unique SQL statements in trace file.
 1049135  lines in trace file.
     245  elapsed seconds in trace file.


Tom Kyte
February 18, 2010 - 9:51 am UTC

why do you say that?

Alexander the ok, February 18, 2010 - 10:48 am UTC

Look at all that overhead for recursive calls. Not to mention I already know calling pl/sql from sql is bad and to be avoided because of the context switching. It's mentioned in Connor's book in the first 10 pages or so....
Tom Kyte
February 18, 2010 - 7:33 pm UTC

I saw the recursive calls, it is only 14% of the total - so I said "eh, so"

Because moving it into a scalar subquery or a join isn't going to get rid of it entirely, it'll still be some (probably double digit) percentage of the runtime.

and not all of the recursive sql was from plsql functions - it could have other causes.


it wasn't the low hanging fruit to me, it was small potatoes.

Tell you what, replace the function calls with CAST(null as <function_return_type>) and that'll tell you the *best case* scenario

and then realize that the best case isn't the case you'll get, it'll still take some amount of that CPU to retrieve the missing data.

Alexander, February 18, 2010 - 9:20 pm UTC

Ok so where do you think I should focus then? I can post the functions, they're small. I can post the plan, I would provide create tables but they're huge, tons of columns, tons of LOBs.
Tom Kyte
February 25, 2010 - 12:08 am UTC

start here first, see if it would even make a difference:

... Tell you what, replace the function calls with CAST(null as <function_return_type>) and that'll tell you the *best case* scenario ...

also, you should probably always:

select column, column, (select f(x) from dual), column, ....

instead of

select column, column, f(x), column, ......


to take advantage of scalar subquery caching if you are not already.
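The effect Tom describes can be imitated outside Oracle. In this sketch (Python with SQLite; the function and data are invented, and SQLite has no scalar subquery cache of its own) a memo dictionary plays the role Oracle's scalar subquery cache plays for "(select f(x) from dual)": the naive form fires the function once per row, the cached form once per distinct input.

```python
import sqlite3

calls = {"n": 0}
def lookup(x):                 # stand-in for an expensive PL/SQL function
    calls["n"] += 1
    return x * 2

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (1,), (2,), (1,)])

# Naive form: like "select f(x) from t" -- one call per row.
conn.create_function("f", 1, lookup)
conn.execute("SELECT f(x) FROM t").fetchall()
naive_calls = calls["n"]                       # 5 rows -> 5 calls

# Cached form: the memo dict imitates Oracle's scalar subquery caching
# of "(select f(x) from dual)" -- a repeated input reuses the
# remembered result instead of calling the function again.
calls["n"] = 0
memo = {}
def cached(x):
    if x not in memo:
        memo[x] = lookup(x)
    return memo[x]
conn.create_function("f_cached", 1, cached)
conn.execute("SELECT f_cached(x) FROM t").fetchall()
cached_calls = calls["n"]                      # only 2 distinct inputs
```

This is also why the payoff depends on the data: many repeats of few distinct inputs cache well, while all-distinct inputs gain nothing.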

Alexander, February 25, 2010 - 9:38 am UTC

Tom,

The CAST helped a ton. I don't understand how those are equivalent at all, though. Can you explain that?

Just so you have some vague clue as to what is happening, the function:

CREATE OR REPLACE FUNCTION SC_GET_ASSOCIATED_CI_FUNC ( CHANGENUMBER IN VARCHAR2, STEP IN NUMBER)
RETURN VARCHAR2
IS
 ChangeString VARCHAR2(4000) := NULL;
BEGIN
 FOR cur_rec IN (SELECT AL4.LOGICAL_NAME || ' ' || AL4.NETWORK_NAME || '    DR:' ||          
                   CASE WHEN AL4.DEVICE_DR_DEPENDENT = 't' then 'Y'  else 'N' END || ' ' || '   IN_S:' || 
                   CASE WHEN AL4.DEVICE_ASSET_INSCOPE = 't' then 'Y' else 'N' END CI  
                   FROM SERVICE.CM3RA2 AL3, SERVICE.DEVICEM1 AL4  
                  WHERE AL3.NUMBERPRGN = CHANGENUMBER 
                    AND AL3.ASSETS = AL4.LOGICAL_NAME) LOOP

  IF LENGTH(ChangeString) < 3000 OR ChangeString IS NULL THEN
     ChangeString := ChangeString || ',  ' || cur_rec.CI;
  END IF;
 END LOOP;
 RETURN LTRIM(ChangeString, ',');
END;
/

Before and after your suggestion:

Elapsed: 00:17:44.03

Execution Plan
----------------------------------------------------------
Plan hash value: 349680963

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |   103K|  1097M|       | 29931   (1)| 00:07:00 |
|*  1 |  FILTER                       |            |       |       |       |            |          |
|*  2 |   CONNECT BY WITH FILTERING   |            |       |       |       |            |          |
|*  3 |    VIEW                       |            |   103K|  1107M|       | 33246   (1)| 00:07:46 |
|   4 |     WINDOW SORT               |            |   103K|   137M|   293M| 33246   (1)| 00:07:46 |
|   5 |      NESTED LOOPS OUTER       |            |   103K|   137M|       | 14381   (1)| 00:03:22 |
|*  6 |       HASH JOIN RIGHT OUTER   |            |   103K|   136M|       | 14367   (1)| 00:03:22 |
|   7 |        TABLE ACCESS FULL      | APPROVALA2 |  2034 | 79326 |       |     5   (0)| 00:00:01 |
|*  8 |        HASH JOIN RIGHT OUTER  |            |   103K|   132M|       | 14361   (1)| 00:03:22 |
|   9 |         TABLE ACCESS FULL     | LOCATIONM1 |  1247 | 31175 |       |    10   (0)| 00:00:01 |
|* 10 |         HASH JOIN RIGHT OUTER |            |   103K|   130M|  3232K| 14350   (1)| 00:03:21 |
|  11 |          TABLE ACCESS FULL    | CM3RA2     |   103K|  2018K|       |    66   (4)| 00:00:01 |
|* 12 |          HASH JOIN            |            | 62062 |    77M|    11M| 10824   (1)| 00:02:32 |
|  13 |           TABLE ACCESS FULL   | CONTACTSM1 |   250K|  9043K|       |  2271   (1)| 00:00:32 |
|  14 |           TABLE ACCESS FULL   | CM3RM1     | 62062 |    74M|       |  4821   (1)| 00:01:08 |
|* 15 |       INDEX UNIQUE SCAN       | DEVICEM1_P |     1 |     9 |       |     0   (0)| 00:00:01 |
|* 16 |    HASH JOIN                  |            |       |       |       |            |          |
|  17 |     CONNECT BY PUMP           |            |       |       |       |            |          |
|  18 |     VIEW                      |            |   103K|  1097M|       | 29931   (1)| 00:07:00 |
|  19 |      WINDOW SORT              |            |   103K|   119M|   248M| 29931   (1)| 00:07:00 |
|  20 |       NESTED LOOPS OUTER      |            |   103K|   119M|       | 13496   (1)| 00:03:09 |
|* 21 |        HASH JOIN RIGHT OUTER  |            |   103K|   118M|       | 13482   (1)| 00:03:09 |
|  22 |         TABLE ACCESS FULL     | APPROVALA2 |  2034 | 67122 |       |     5   (0)| 00:00:01 |
|* 23 |         HASH JOIN RIGHT OUTER |            |   103K|   115M|       | 13476   (1)| 00:03:09 |
|  24 |          TABLE ACCESS FULL    | LOCATIONM1 |  1247 | 31175 |       |    10   (0)| 00:00:01 |
|* 25 |          HASH JOIN RIGHT OUTER|            |   103K|   113M|  3232K| 13465   (1)| 00:03:09 |
|  26 |           TABLE ACCESS FULL   | CM3RA2     |   103K|  2018K|       |    66   (4)| 00:00:01 |
|* 27 |           HASH JOIN           |            | 62062 |    66M|    11M| 10382   (1)| 00:02:26 |
|  28 |            TABLE ACCESS FULL  | CONTACTSM1 |   250K|  9043K|       |  2271   (1)| 00:00:32 |
|  29 |            TABLE ACCESS FULL  | CM3RM1     | 62062 |    64M|       |  4821   (1)| 00:01:08 |
|* 30 |        INDEX UNIQUE SCAN      | DEVICEM1_P |     1 |     9 |       |     0   (0)| 00:00:01 |
|  31 |    VIEW                       |            |   103K|  1106M|       | 33246   (1)| 00:07:46 |
|  32 |     WINDOW SORT               |            |   103K|   137M|   293M| 33246   (1)| 00:07:46 |
|  33 |      NESTED LOOPS OUTER       |            |   103K|   137M|       | 14381   (1)| 00:03:22 |
|* 34 |       HASH JOIN RIGHT OUTER   |            |   103K|   136M|       | 14367   (1)| 00:03:22 |
|  35 |        TABLE ACCESS FULL      | APPROVALA2 |  2034 | 79326 |       |     5   (0)| 00:00:01 |
|* 36 |        HASH JOIN RIGHT OUTER  |            |   103K|   132M|       | 14361   (1)| 00:03:22 |
|  37 |         TABLE ACCESS FULL     | LOCATIONM1 |  1247 | 31175 |       |    10   (0)| 00:00:01 |
|* 38 |         HASH JOIN RIGHT OUTER |            |   103K|   130M|  3232K| 14350   (1)| 00:03:21 |
|  39 |          TABLE ACCESS FULL    | CM3RA2     |   103K|  2018K|       |    66   (4)| 00:00:01 |
|* 40 |          HASH JOIN            |            | 62062 |    77M|    11M| 10824   (1)| 00:02:32 |
|  41 |           TABLE ACCESS FULL   | CONTACTSM1 |   250K|  9043K|       |  2271   (1)| 00:00:32 |
|  42 |           TABLE ACCESS FULL   | CM3RM1     | 62062 |    74M|       |  4821   (1)| 00:01:08 |
|* 43 |       INDEX UNIQUE SCAN       | DEVICEM1_P |     1 |     9 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SEQ"="CNT")
   2 - access("NUMBERPRGN"=PRIOR "NUMBERPRGN" AND NVL("CURRENT_PENDING_GROUPS",'1')=PRIOR
              NVL("CURRENT_PENDING_GROUPS",'1'))
       filter("SEQ"=PRIOR "SEQ"+1)
   3 - filter("SEQ"=1)
   6 - access("AL1"."NUMBERPRGN"="AL3"."UNIQUE_KEY"(+))
   8 - access("AL1"."LOCATION_CODE"="AL2"."LOCATION"(+))
  10 - access("AL1"."NUMBERPRGN"="AL4"."NUMBERPRGN"(+))
  12 - access("AL1"."COORDINATOR"="AL6"."CONTACT_NAME")
  15 - access("AL4"."ASSETS"="AL5"."LOGICAL_NAME"(+))
  16 - access("NUMBERPRGN"=PRIOR "NUMBERPRGN" AND NVL("CURRENT_PENDING_GROUPS",'1')=PRIOR
              NVL("CURRENT_PENDING_GROUPS",'1'))
  21 - access("AL1"."NUMBERPRGN"="AL3"."UNIQUE_KEY"(+))
  23 - access("AL1"."LOCATION_CODE"="AL2"."LOCATION"(+))
  25 - access("AL1"."NUMBERPRGN"="AL4"."NUMBERPRGN"(+))
  27 - access("AL1"."COORDINATOR"="AL6"."CONTACT_NAME")
  30 - access("AL4"."ASSETS"="AL5"."LOGICAL_NAME"(+))
  34 - access("AL1"."NUMBERPRGN"="AL3"."UNIQUE_KEY"(+))
  36 - access("AL1"."LOCATION_CODE"="AL2"."LOCATION"(+))
  38 - access("AL1"."NUMBERPRGN"="AL4"."NUMBERPRGN"(+))
  40 - access("AL1"."COORDINATOR"="AL6"."CONTACT_NAME")
  43 - access("AL4"."ASSETS"="AL5"."LOGICAL_NAME"(+))


Statistics
----------------------------------------------------------
   21808662  recursive calls
     711131  db block gets
  107871432  consistent gets
    2346053  physical reads
          0  redo size
  144506365  bytes sent via SQL*Net to client
   65330581  bytes received via SQL*Net from client
     193840  SQL*Net roundtrips to/from client
          2  sorts (memory)
          4  sorts (disk)
      61106  rows processed

Elapsed: 00:02:21.45

Execution Plan
----------------------------------------------------------
Plan hash value: 349680963

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |   103K|   998M|       | 29931   (1)| 00:07:00 |
|*  1 |  FILTER                       |            |       |       |       |            |          |
|*  2 |   CONNECT BY WITH FILTERING   |            |       |       |       |            |          |
|*  3 |    VIEW                       |            |   103K|  1009M|       | 32908   (1)| 00:07:41 |
|   4 |     WINDOW SORT               |            |   103K|   135M|   293M| 32908   (1)| 00:07:41 |
|   5 |      NESTED LOOPS OUTER       |            |   103K|   135M|       | 14291   (1)| 00:03:21 |
|*  6 |       HASH JOIN RIGHT OUTER   |            |   103K|   134M|       | 14277   (1)| 00:03:20 |
|   7 |        TABLE ACCESS FULL      | APPROVALA2 |  2034 | 79326 |       |     5   (0)| 00:00:01 |
|*  8 |        HASH JOIN RIGHT OUTER  |            |   103K|   130M|       | 14271   (1)| 00:03:20 |
|   9 |         TABLE ACCESS FULL     | LOCATIONM1 |  1247 | 31175 |       |    10   (0)| 00:00:01 |
|* 10 |         HASH JOIN RIGHT OUTER |            |   103K|   128M|  3232K| 14259   (1)| 00:03:20 |
|  11 |          TABLE ACCESS FULL    | CM3RA2     |   103K|  2018K|       |    66   (4)| 00:00:01 |
|* 12 |          HASH JOIN            |            | 62062 |    75M|    11M| 10779   (1)| 00:02:31 |
|  13 |           TABLE ACCESS FULL   | CONTACTSM1 |   250K|  9043K|       |  2271   (1)| 00:00:32 |
|  14 |           TABLE ACCESS FULL   | CM3RM1     | 62062 |    73M|       |  4821   (1)| 00:01:08 |
|* 15 |       INDEX UNIQUE SCAN       | DEVICEM1_P |     1 |     9 |       |     0   (0)| 00:00:01 |
|* 16 |    HASH JOIN                  |            |       |       |       |            |          |
|  17 |     CONNECT BY PUMP           |            |       |       |       |            |          |
|  18 |     VIEW                      |            |   103K|   998M|       | 29931   (1)| 00:07:00 |
|  19 |      WINDOW SORT              |            |   103K|   119M|   248M| 29931   (1)| 00:07:00 |
|  20 |       NESTED LOOPS OUTER      |            |   103K|   119M|       | 13496   (1)| 00:03:09 |
|* 21 |        HASH JOIN RIGHT OUTER  |            |   103K|   118M|       | 13482   (1)| 00:03:09 |
|  22 |         TABLE ACCESS FULL     | APPROVALA2 |  2034 | 67122 |       |     5   (0)| 00:00:01 |
|* 23 |         HASH JOIN RIGHT OUTER |            |   103K|   115M|       | 13476   (1)| 00:03:09 |
|  24 |          TABLE ACCESS FULL    | LOCATIONM1 |  1247 | 31175 |       |    10   (0)| 00:00:01 |
|* 25 |          HASH JOIN RIGHT OUTER|            |   103K|   113M|  3232K| 13465   (1)| 00:03:09 |
|  26 |           TABLE ACCESS FULL   | CM3RA2     |   103K|  2018K|       |    66   (4)| 00:00:01 |
|* 27 |           HASH JOIN           |            | 62062 |    66M|    11M| 10382   (1)| 00:02:26 |
|  28 |            TABLE ACCESS FULL  | CONTACTSM1 |   250K|  9043K|       |  2271   (1)| 00:00:32 |
|  29 |            TABLE ACCESS FULL  | CM3RM1     | 62062 |    64M|       |  4821   (1)| 00:01:08 |
|* 30 |        INDEX UNIQUE SCAN      | DEVICEM1_P |     1 |     9 |       |     0   (0)| 00:00:01 |
|  31 |    VIEW                       |            |   103K|  1008M|       | 32908   (1)| 00:07:41 |
|  32 |     WINDOW SORT               |            |   103K|   135M|   293M| 32908   (1)| 00:07:41 |
|  33 |      NESTED LOOPS OUTER       |            |   103K|   135M|       | 14291   (1)| 00:03:21 |
|* 34 |       HASH JOIN RIGHT OUTER   |            |   103K|   134M|       | 14277   (1)| 00:03:20 |
|  35 |        TABLE ACCESS FULL      | APPROVALA2 |  2034 | 79326 |       |     5   (0)| 00:00:01 |
|* 36 |        HASH JOIN RIGHT OUTER  |            |   103K|   130M|       | 14271   (1)| 00:03:20 |
|  37 |         TABLE ACCESS FULL     | LOCATIONM1 |  1247 | 31175 |       |    10   (0)| 00:00:01 |
|* 38 |         HASH JOIN RIGHT OUTER |            |   103K|   128M|  3232K| 14259   (1)| 00:03:20 |
|  39 |          TABLE ACCESS FULL    | CM3RA2     |   103K|  2018K|       |    66   (4)| 00:00:01 |
|* 40 |          HASH JOIN            |            | 62062 |    75M|    11M| 10779   (1)| 00:02:31 |
|  41 |           TABLE ACCESS FULL   | CONTACTSM1 |   250K|  9043K|       |  2271   (1)| 00:00:32 |
|  42 |           TABLE ACCESS FULL   | CM3RM1     | 62062 |    73M|       |  4821   (1)| 00:01:08 |
|* 43 |       INDEX UNIQUE SCAN       | DEVICEM1_P |     1 |     9 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SEQ"="CNT")
   2 - access("NUMBERPRGN"=PRIOR "NUMBERPRGN" AND NVL("CURRENT_PENDING_GROUPS",'1')=PRIOR
              NVL("CURRENT_PENDING_GROUPS",'1'))
       filter("SEQ"=PRIOR "SEQ"+1)
   3 - filter("SEQ"=1)
   6 - access("AL1"."NUMBERPRGN"="AL3"."UNIQUE_KEY"(+))
   8 - access("AL1"."LOCATION_CODE"="AL2"."LOCATION"(+))
  10 - access("AL1"."NUMBERPRGN"="AL4"."NUMBERPRGN"(+))
  12 - access("AL1"."COORDINATOR"="AL6"."CONTACT_NAME")
  15 - access("AL4"."ASSETS"="AL5"."LOGICAL_NAME"(+))
  16 - access("NUMBERPRGN"=PRIOR "NUMBERPRGN" AND NVL("CURRENT_PENDING_GROUPS",'1')=PRIOR
              NVL("CURRENT_PENDING_GROUPS",'1'))
  21 - access("AL1"."NUMBERPRGN"="AL3"."UNIQUE_KEY"(+))
  23 - access("AL1"."LOCATION_CODE"="AL2"."LOCATION"(+))
  25 - access("AL1"."NUMBERPRGN"="AL4"."NUMBERPRGN"(+))
  27 - access("AL1"."COORDINATOR"="AL6"."CONTACT_NAME")
  30 - access("AL4"."ASSETS"="AL5"."LOGICAL_NAME"(+))
  34 - access("AL1"."NUMBERPRGN"="AL3"."UNIQUE_KEY"(+))
  36 - access("AL1"."LOCATION_CODE"="AL2"."LOCATION"(+))
  38 - access("AL1"."NUMBERPRGN"="AL4"."NUMBERPRGN"(+))
  40 - access("AL1"."COORDINATOR"="AL6"."CONTACT_NAME")
  43 - access("AL4"."ASSETS"="AL5"."LOGICAL_NAME"(+))


Statistics
----------------------------------------------------------
        714  recursive calls
     711131  db block gets
     371722  consistent gets
     772881  physical reads
          0  redo size
  140682927  bytes sent via SQL*Net to client
   65330642  bytes received via SQL*Net from client
     193840  SQL*Net roundtrips to/from client
          2  sorts (memory)
          4  sorts (disk)
      61106  rows processed


Tom Kyte
March 01, 2010 - 11:05 am UTC

... The CAST helped a ton. I don't understand how those are equivalent at all though? Can you explain that? ...

I think you missed my point. I wrote:

Tell you what, replace the function calls with CAST(null as <function_return_type>) and that'll tell you the *best case* scenario

I was simply trying to quantify what, if any, return on investment you would get. Using cast(null as function-return-type) removed the function call entirely - that is the best case.

Now, use

select ...., (select f(x) from dual), .....

instead of f(x) and see if scalar subquery caching (which will tend to MINIMIZE - but cannot remove - the number of times your function is executed) helps - since you've shown that 15 out of 17 minutes of runtime is spent in the functions.
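For readers following along, the two variants look like this (the table `t`, column `x`, and function `f` are placeholders, not objects from this thread):

```sql
-- Direct call: f() typically runs once per row returned
select x, f(x) from t;

-- Wrapped in a scalar subquery: Oracle can cache the result
-- per distinct input value, often cutting the call count sharply
select x, (select f(x) from dual) from t;
```

The fewer distinct inputs there are relative to the row count, the more the scalar subquery cache can save.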

Seeing the massive reduction in physical IO's though:

2346053 to 772881

I'm not sure you'll see that entire 15 minutes, or even most of it, since this appears to be a case of "data isn't in the cache", unless by reducing the number of function calls, you massively reduce the number of times we have to re-read a block.

Alexander the ok, March 01, 2010 - 11:42 am UTC

Yeah, while you were kicking it in Russia I also tried the select from dual trick. It helped a lot in our reporting database, not at all in the online system. I'm completely stumped as to why at the moment; I'd be rid of this issue otherwise.

The major difference between the two is the block size, reporting has 16k, online 8k. Here are the stats (the plans are the same):

Elapsed: 00:27:06.38

Statistics
----------------------------------------------------------
     373101  recursive calls
     988287  db block gets
    2877318  consistent gets
    6486927  physical reads
          0  redo size
  203613743  bytes sent via SQL*Net to client
   90001644  bytes received via SQL*Net from client
     196104  SQL*Net roundtrips to/from client
        123  sorts (memory)
          5  sorts (disk)
      61671  rows processed

Elapsed: 00:03:41.43

Statistics
----------------------------------------------------------
     370261  recursive calls
     715157  db block gets
    2307738  consistent gets
    2286972  physical reads
          0  redo size
  145873328  bytes sent via SQL*Net to client
   65992315  bytes received via SQL*Net from client
     195704  SQL*Net roundtrips to/from client
         41  sorts (memory)
          4  sorts (disk)
      61571  rows processed


What information would you need to explain the difference? v$parameter? tkprof?

Thanks in advance.
Tom Kyte
March 01, 2010 - 12:15 pm UTC

the online system does a lot more physical IO (even if you divide it in half, it does 140% of the physical IO of the other system).


I'd say "trace it", see what exactly it is waiting on in "online"




To Alexander

A reader, March 01, 2010 - 12:27 pm UTC

Hi Alexander,

Now set arraysize to something about 200 in your session and try.
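For anyone trying the suggestion above, in SQL*Plus it is just:

```sql
SQL> set arraysize 200
SQL> set autotrace traceonly statistics
SQL> -- re-run the query and watch "SQL*Net roundtrips to/from client" drop
```

A larger array fetch returns more rows per roundtrip, so the roundtrip count in the statistics should fall accordingly.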
Tom Kyte
March 02, 2010 - 6:37 am UTC

... Just doesn't sound right. ....

prove me wrong? And in doing so - you'll find out WHAT IT IS :)

trace with waits
get the tkprofs...

Alexander the ok, March 01, 2010 - 1:37 pm UTC

I did, I didn't really see any smoking guns. It just looks like the reporting database's bigger SGA is accounting for this, 10GB vs 6GB. However, the online system is RAC, so we have two 6GB instances. I just find it really hard to believe pio's are causing it to take an extra 23 minutes. Just doesn't sound right.

Good:

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      1.06       1.35          0          0          0           0
Execute      6      0.02       0.04          0        123         58          44
Fetch    61769    138.14     208.60    2288472     373812     717731       61837
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    61780    139.22     210.00    2288472     373935     717789       61881

Misses in library cache during parse: 4
Misses in library cache during execute: 3


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      217      0.22       0.29          0          1          2           0
Execute 370565      1.63       7.43         10        771         98          45
Fetch   372097      2.76      14.25        306    1944603         23      340016
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   742879      4.61      21.98        316    1945375        123      340061

Misses in library cache during parse: 59
Misses in library cache during execute: 60

   16  user  SQL statements in session.
 1065  internal SQL statements in session.
 1081  SQL statements in session.
********************************************************************************
Trace file: ocp21p_ora_8237270.trc
Trace file compatibility: 10.01.00
Sort options: default

       4  sessions in tracefile.
      37  user  SQL statements in trace file.
    2985  internal SQL statements in trace file.
    1081  SQL statements in trace file.
      66  unique SQL statements in trace file.
  812229  lines in trace file.
     259  elapsed seconds in trace file.

Not good:


********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.70       0.73          0          0          0           0
Execute      5      0.03       0.04          0        162         73          44
Fetch    61836    336.30    1359.91    6489072     841271     991171       61904
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    61846    337.04    1360.69    6489072     841433     991244       61948

Misses in library cache during parse: 4
Misses in library cache during execute: 2


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      350      0.09       0.10          0          1          6           0
Execute 371671      5.56       5.40         13        918        127          45
Fetch   373134     11.39      12.48        186    2449410         41      345714
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   745155     17.04      18.00        199    2450329        174      345759

Misses in library cache during parse: 69
Misses in library cache during execute: 70

   15  user  SQL statements in session.
 1738  internal SQL statements in session.
 1753  SQL statements in session.
********************************************************************************
Trace file: ocp25p1_ora_14178.trc
Trace file compatibility: 10.01.00
Sort options: default

       3  sessions in tracefile.
      33  user  SQL statements in trace file.
    4303  internal SQL statements in trace file.
    1753  SQL statements in trace file.
      66  unique SQL statements in trace file.
  818971  lines in trace file.
    1390  elapsed seconds in trace file.

Alexander the ok, March 02, 2010 - 8:17 am UTC

Did you reply to the wrong post? The trace is there?
Tom Kyte
March 02, 2010 - 12:38 pm UTC

I wrote:

... trace with waits
get the tkprofs... ...

where are the wait events - where is the information that'll tell us where 1,000 seconds were spent in the second one?

are these machines identical in other respects - eg: same exact, precisely the same exact, cpu type/speed?

Alexander the ok, March 02, 2010 - 2:07 pm UTC

Can you tell me what command to use so I'm giving you exactly what you want?

No the machines are not the same.

Reporting db:
AIX LPAR,
10.2.0.2 Ent Edition
10GB SGA,
4GB PGA,
16k block size,
4 CPU 1900 mhz

Online:
Linux Redhat 5.3
10.2.0.3 RAC 2 Node
6GB SGA
2GB PGA
8k block size
2 CPU 2000 mhz (each node)

Now I know there's an apples vs flying toaster oven analogy coming, but really, they're not hugely different in computing capacity especially considering online is RAC.
Tom Kyte
March 02, 2010 - 2:11 pm UTC

... Can you tell me what command to use so I'm giving you exactly what you want?...

use dbms_monitor session_trace_enable with waits=>true


ops$tkyte%ORA11GR2> exec dbms_monitor.session_trace_enable( waits => true );

PL/SQL procedure successfully completed




You cannot compare these two machines - especially CPU time, they are completely different architectures (RISC vs CISC). They are not even a little comparable at the detail level.
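A minimal sketch of the full trace cycle on 10g (query and file names are placeholders):

```sql
-- in the session to be traced:
exec dbms_monitor.session_trace_enable( waits => true );

-- ... run the slow query here ...

exec dbms_monitor.session_trace_disable;

-- on 10g the trace file lands in user_dump_dest:
select value from v$parameter where name = 'user_dump_dest';
```

Then format the trace with wait events included, for example `tkprof <trace>.trc report.txt sys=no waits=yes` (file names here are placeholders).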

Alexander the ok, March 02, 2010 - 3:38 pm UTC

Got the waits; looks like I should focus on "direct path read temp". Haven't found what that's indicative of yet, though.


********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.83       1.02         15        709          0           0
Execute      6      0.13       0.50         25       1057         73          45
Fetch    62368    560.79    2291.75    6712176    3322601    1010683       62436
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    62379    561.76    2293.28    6712216    3324367    1010756       62481

Misses in library cache during parse: 4
Misses in library cache during execute: 3

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                  198872        0.00          0.22
  SQL*Net message from client                198872       64.42         92.45
  SQL*Net more data from client                3919        0.00          0.02
  library cache lock                             25        0.00          0.00
  row cache lock                                 49        0.00          0.01
  db file sequential read                      2437        0.14          6.00
  gc cr grant 2-way                             661        0.00          0.14
  gc current block 2-way                        401        0.00          0.15
  library cache pin                               9        0.00          0.00
  rdbms ipc reply                                10        0.00          0.00
  SQL*Net more data to client                 18909        0.00          0.19
  gc cr block 2-way                               6        0.00          0.00
  gc cr multi block request                    3267        0.00          0.28
  direct path write temp                       2048        0.01          6.12
  direct path read temp                     6567057        0.37       1816.31
  latch free                                     27        0.00          0.01
  gc current grant busy                           1        0.00          0.00
  gc current grant 2-way                          1        0.00          0.00
  enq: CF - contention                          589        0.05          0.29
  control file sequential read                 6523        0.01          2.79
  KSV master wait                              1782        0.98          7.36
  db file single write                          593        0.00          0.17
  control file parallel write                  1779        0.01          1.04
  DFS lock handle                              1894        0.16         40.22
  local write wait                             2372        0.01          1.52
  enq: HW - contention                            1        0.00          0.00
  latch: ges resource hash list                  28        0.00          0.00
  KJC: Wait for msg sends to complete             1        0.00          0.00
  latch: enqueue hash chains                      1        0.00          0.00
  resmgr:cpu quantum                             39        0.01          0.04
  latch: cache buffers lru chain                  1        0.00          0.00
  log file sync                                   1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      364      0.09       0.12          0        288          6           0
Execute 374574      5.96       5.99         13       1144        125          45
Fetch   375940     12.77      14.08        177    2476516         43      348080
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   750878     18.84      20.21        190    2477948        174      348125

Misses in library cache during parse: 61
Misses in library cache during execute: 63

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                             50        0.00          0.01
  row cache lock                                427        0.00          0.08
  db file sequential read                       178        0.06          1.37
  gc cr grant 2-way                             104        0.00          0.02
  gc current block 2-way                        656        0.00          0.23
  library cache pin                              12        0.00          0.00
  rdbms ipc reply                                12        0.00          0.00
  gc cr block 2-way                               8        0.00          0.00
  gc cr multi block request                      20        0.00          0.00
  direct path write temp                          1        0.00          0.00
  direct path read temp                           3        0.00          0.00
  latch free                                      6        0.00          0.00
  enq: HW - contention                            5        0.00          0.00

   16  user  SQL statements in session.
 1461  internal SQL statements in session.
 1477  SQL statements in session.
********************************************************************************
Trace file: ocp25p2_ora_22155.trc
Trace file compatibility: 10.01.00
Sort options: default

       4  sessions in tracefile.
      37  user  SQL statements in trace file.
    3549  internal SQL statements in trace file.
    1477  SQL statements in trace file.
      67  unique SQL statements in trace file.
 7837263  lines in trace file.
    2340  elapsed seconds in trace file.

Tom Kyte
March 02, 2010 - 3:42 pm UTC

... "direct path read temp". ...


you are sorting/hashing to disk - that is what that is.


On the 'smaller' system you do not have the same PGA size, and you probably have more concurrent users, so the PGA workareas will be smaller on that machine.
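To see who is consuming temp while the query runs, something like this from a second session can help (standard 10g V$ views; column lists trimmed for readability):

```sql
-- active workareas: which operations are spilling, and how many passes
select sid, operation_type, actual_mem_used, number_passes, tempseg_size
  from v$sql_workarea_active
 order by tempseg_size desc nulls last;

-- temp segment usage per session
select session_num, segtype, tablespace, blocks
  from v$tempseg_usage;
```

A nonzero NUMBER_PASSES (or a large TEMPSEG_SIZE) on a sort or hash join workarea is the smoking gun for "direct path read temp".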

Alexander the ok, March 03, 2010 - 11:41 am UTC

This is aggravating. I QUADRUPLED the pga, and it's still nowhere near what I'm seeing in the reporting database.

What would you suggest, should I re-trace with waits with the extra pga, see if that temp usage is down?
Tom Kyte
March 03, 2010 - 11:44 am UTC

did you quadruple the ram available to the machine too???

do you really have that ram or did you overcommit the machine?

and look at the details - aggregates are always fuzzy. Is there anything in particular using temp? It would be of use to you to know; then you could look at that and perhaps ask yourself "why"


...What would you suggest, should I re-trace with waits with the extra pga, see if that temp usage is down? ..

sure, absolutely - that is the way to know if you did anything for the problem.

Alexander the ok, March 03, 2010 - 12:11 pm UTC

Yeah we have available memory. 16GB nodes, currently 6GB is being used for the sga. Nothing else on there.

This does change pga dynamically right?

alter system set pga_aggregate_target=8G scope=both SID='*';
Tom Kyte
March 03, 2010 - 12:48 pm UTC

yes, that would change it dynamically.

trace it again, and look at the details, not just the aggregate. You might notice something in the details that you don't see in the aggregate.

Alexander the ok, March 03, 2010 - 1:45 pm UTC

It's the meat of the query using it up. I really don't know what else to look at, the same query using the same schema is doing 2-3x more pios. Could that mean too small a buffer cache? If I run it repeatedly though I still see a high number of pios. This is the culprit from the trace:

<<HUGE QUERY>>

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.36       0.38          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    62545    516.08    1559.92    6717092    3331748    1010836       62544
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    62547    516.45    1560.30    6717092    3331748    1010836       62544

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74

Rows     Row Source Operation
-------  ---------------------------------------------------
  62544  FILTER  (cr=3331748 pr=6717092 pw=285340 time=21931449305 us)
 105481   CONNECT BY WITH FILTERING (cr=3331748 pr=6717092 pw=285340 time=1224386203 us)
  62544    VIEW  (cr=1110868 pr=58516 pw=53599 time=42218573 us)
 105481     WINDOW SORT (cr=284760 pr=58516 pw=53599 time=16189030 us)
 105481      NESTED LOOPS OUTER (cr=284760 pr=1 pw=0 time=2480489 us)
 105481       HASH JOIN RIGHT OUTER (cr=75114 pr=1 pw=0 time=1531143 us)
   2059        TABLE ACCESS FULL APPROVALA2 (cr=31 pr=1 pw=0 time=1087 us)
 104728        HASH JOIN RIGHT OUTER (cr=75083 pr=0 pw=0 time=1518569 us)
   1247         TABLE ACCESS FULL LOCATIONM1 (cr=46 pr=0 pw=0 time=40 us)
 104728         HASH JOIN RIGHT OUTER (cr=75037 pr=0 pw=0 time=1305759 us)
 106440          TABLE ACCESS FULL CM3RA2 (cr=500 pr=0 pw=0 time=27 us)
  62347          HASH JOIN  (cr=74537 pr=0 pw=0 time=1049111 us)
 250558           TABLE ACCESS FULL CONTACTSM1 (cr=15213 pr=0 pw=0 time=250616 us)
  63548           TABLE ACCESS FULL CM3RM1 (cr=59324 pr=0 pw=0 time=762931 us)
 104823       INDEX UNIQUE SCAN DEVICEM1_P (cr=209646 pr=0 pw=0 time=689003 us)(object id 93092)
 105481    HASH JOIN  (cr=1110439 pr=57761 pw=52853 time=31273580 us)
  62544     CONNECT BY PUMP  (cr=0 pr=0 pw=0 time=8 us)
 105481     VIEW  (cr=1110439 pr=57761 pw=52853 time=30781974 us)
 105481      WINDOW SORT (cr=284760 pr=57761 pw=52853 time=16013988 us)
 105481       NESTED LOOPS OUTER (cr=284760 pr=0 pw=0 time=2196706 us)
 105481        HASH JOIN RIGHT OUTER (cr=75114 pr=0 pw=0 time=1352839 us)
   2059         TABLE ACCESS FULL APPROVALA2 (cr=31 pr=0 pw=0 time=68 us)
 104728         HASH JOIN RIGHT OUTER (cr=75083 pr=0 pw=0 time=1344954 us)
   1247          TABLE ACCESS FULL LOCATIONM1 (cr=46 pr=0 pw=0 time=1278 us)
 104728          HASH JOIN RIGHT OUTER (cr=75037 pr=0 pw=0 time=1028163 us)
 106440           TABLE ACCESS FULL CM3RA2 (cr=500 pr=0 pw=0 time=28 us)
  62347           HASH JOIN  (cr=74537 pr=0 pw=0 time=925476 us)
 250558            TABLE ACCESS FULL CONTACTSM1 (cr=15213 pr=0 pw=0 time=63 us)
  63548            TABLE ACCESS FULL CM3RM1 (cr=59324 pr=0 pw=0 time=572294 us)
 104823        INDEX UNIQUE SCAN DEVICEM1_P (cr=209646 pr=0 pw=0 time=622336 us)(object id 93092)
 105481    VIEW  (cr=1110441 pr=58515 pw=53599 time=33487190 us)
 105481     WINDOW SORT (cr=284762 pr=58515 pw=53599 time=18402800 us)
 105481      NESTED LOOPS OUTER (cr=284762 pr=0 pw=0 time=2287395 us)
 105481       HASH JOIN RIGHT OUTER (cr=75116 pr=0 pw=0 time=1443537 us)
   2059        TABLE ACCESS FULL APPROVALA2 (cr=31 pr=0 pw=0 time=42 us)
 104728        HASH JOIN RIGHT OUTER (cr=75085 pr=0 pw=0 time=1435442 us)
   1247         TABLE ACCESS FULL LOCATIONM1 (cr=46 pr=0 pw=0 time=25 us)
 104728         HASH JOIN RIGHT OUTER (cr=75039 pr=0 pw=0 time=1118592 us)
 106440          TABLE ACCESS FULL CM3RA2 (cr=500 pr=0 pw=0 time=24 us)
  62347          HASH JOIN  (cr=74539 pr=0 pw=0 time=977350 us)
 250558           TABLE ACCESS FULL CONTACTSM1 (cr=15213 pr=0 pw=0 time=69 us)
  63548           TABLE ACCESS FULL CM3RM1 (cr=59326 pr=0 pw=0 time=699365 us)
 104823       INDEX UNIQUE SCAN DEVICEM1_P (cr=209646 pr=0 pw=0 time=666250 us)(object id 93092)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              6        0.00          0.00
  row cache lock                                 15        0.00          0.00
  library cache pin                               1        0.00          0.00
  rdbms ipc reply                                 2        0.00          0.00
  SQL*Net message to client                   62546        0.00          0.08
  SQL*Net more data to client                 14290        0.00          0.17
  SQL*Net message from client                 62546        0.00          4.03
  gc cr multi block request                      11        0.00          0.00
  db file sequential read                         1        0.00          0.00
  gc cr block 2-way                               9        0.00          0.00
  enq: TT - contention                            5        0.00          0.00
  direct path write temp                       2183        0.02          5.61
  gc current block 2-way                          1        0.00          0.00
  direct path read temp                     6570030        0.14       1181.49
  latch free                                      7        0.00          0.00
  SQL*Net more data from client                2912        0.00          0.01
  resmgr:cpu quantum                             49        0.00          0.00
********************************************************************************


Tom Kyte
March 04, 2010 - 7:05 am UTC

think I see it - with this plan.

Contact support, reference bug 5065418

to test - alter your session and set the workarea policy to manual, set sort area size/hash area size high - and rerun. If it goes fast without spilling to disk - then it is likely this, an issue with connect by filtering.

To: Alexander the ok

Narendra, March 04, 2010 - 3:20 am UTC

Alexander,

I am not sure if that is the case, but the following details, read together, suggest that "direct path read temp" is reading one block at a time, whereas in theory it is expected to read multiple blocks at a time - 6,717,092 disk blocks against 6,570,030 waits works out to roughly one block per read:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
direct path read temp                     6570030        0.14       1181.49

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Fetch    62545    516.08    1559.92    6717092    3331748    1010836       62544

I remember reading somewhere that this was a bug in the version of the Oracle database you are using. I believe Jonathan Lewis and Greg Rahn confirmed this bug, and Greg noted that it was fixed in 10.2.0.4.
Online:
Linux Redhat 5.3
10.2.0.3 RAC 2 Node

Not sure if I am shooting in the dark, but Tom will validate this.

To: Alexander the ok

Narendra, March 04, 2010 - 3:53 am UTC

Here is the link to the post I was talking about:
http://forums.oracle.com/forums/thread.jspa?messageID=4078469

Alexander, March 04, 2010 - 10:10 am UTC

Thanks, sure sounds like my issue but I can't prove it. I ran these:

alter session set WORKAREA_SIZE_POLICY=MANUAL;
alter session set HASH_AREA_SIZE=2048576000;
alter session set SORT_AREA_SIZE =2048576000;


No improvement. I'm having difficulty finding any information on this bug; I searched in Metalink, and it won't display it.

I noticed we don't have filesystemio_options set; asynchronous I/O seems to come up when I research this wait event.
Tom Kyte
March 04, 2010 - 11:20 am UTC

please open a service request with the information you have so far...

this wait event is simply due to reading from temp. Do you see the 6+ million IOs coming out of the connect by filter? I'm surprised it goes this fast with 6+ million IOs.

To - Alexander

Kaparwan, March 05, 2010 - 12:08 am UTC

Alexander


....set the workarea policy to manual, set sort area size/hash area size high - and rerun


make sure you have sort_area_retained_size at least equal to (or more than) sort_area_size

and then re-run... and see

(If sort_area_retained_size is left at its smaller default value, you can also see huge PIO)

Tom Kyte
March 05, 2010 - 5:38 am UTC

or, do not set it at all (as presumed), and it'll be the same.

Alexander, March 09, 2010 - 2:58 pm UTC

Tom, I'm just curious, why does coding the function to select from dual enable the use of caching when just calling the function does not?
Tom Kyte
March 09, 2010 - 4:18 pm UTC

A reader, March 10, 2010 - 11:34 am UTC

Hi Tom,

If we define a function as DETERMINISTIC, will it also cache the results and reuse them when the same input is given to the function?


Thanks
Tom Kyte
March 11, 2010 - 8:06 am UTC

in 10g and above *it might*, but typically not nearly as well as scalar subquery caching.

And the function would have to be deterministic - there is that as well. Is your function deterministic? (hint: if it includes any SQL in it, it is most definitely NOT deterministic, unless the sql is totally not relevant to the answer returned).

so, odds are your function isn't deterministic, and if it is - scalar subquery caching is still "better". Even if your function is RESULT CACHED, scalar subquery caching is superior


From "least work" to "most work"

a) scalar subquery caching of a result cache function
b) scalar subquery caching of any type of function
c) calling the function

ops$tkyte%ORA11GR2> create or replace function f( x in varchar2 ) return number
  2  as
  3  begin
  4          dbms_application_info.set_client_info(userenv('client_info')+1 );
  5          return length(x);
  6  end;
  7  /
Function created.

<b>function increments counter every time called...</b>

ops$tkyte%ORA11GR2> variable cpu number
ops$tkyte%ORA11GR2> clear screen
ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, f(owner) from stage;
71482 rows selected.
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;

 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        50 71482

<b>it was called once per row in this case, expected.... Using scalar subquery caching:</b>

ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, (select f(owner) from dual) f from stage;
71482 rows selected.
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;

 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        22 64

<b>it was only called 64 times, pretty large reduction in CPU time (and if the function did real work, did something larger than just increment a counter, you can imagine the cpu going down more)</b>

ops$tkyte%ORA11GR2> clear screen
ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, (select f(owner) from dual) f
  2    from (select owner, rownum r from stage order by owner);
71482 rows selected.
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;

 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        19 30

<b>here, we improved the cachability of the function return value by sorting the data first - by the inputs to the function.  If the function was really expensive to invoke, this might have been worth it - we cut in half the number of calls - if the function took a cpu second to execute - that would have paid off nicely - here, it wasn't better or worse really...</b>

ops$tkyte%ORA11GR2> create or replace function f( x in varchar2 ) return number
<b>  2  DETERMINISTIC</b>
  3  as
  4  begin
  5          dbms_application_info.set_client_info(userenv('client_info')+1 );
  6          return length(x);
  7  end;
  8  /
Function created.

ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, f(owner) from stage;
71482 rows selected.
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;

 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        32 8179

<b>deterministic helps, but not as much as scalar subquery caching did...</b>

ops$tkyte%ORA11GR2> create or replace function f( x in varchar2 ) return number
  2  RESULT_CACHE
  3  as
  4  begin
  5          dbms_application_info.set_client_info(userenv('client_info')+1 );
  6          return length(x);
  7  end;
  8  /
Function created.

ops$tkyte%ORA11GR2> clear screen
ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, f(owner) from stage;
71482 rows selected.
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;
 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        24 30

<b>that helped...</b>
  
ops$tkyte%ORA11GR2> clear screen
ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, f(owner) from stage;
71482 rows selected.
ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;

 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        24 0

<b>No change...</b>

ops$tkyte%ORA11GR2> exec :cpu := dbms_utility.get_cpu_time; dbms_application_info.set_client_info(0);

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select owner, (select f(owner) from dual) from stage;
71482 rows selected

ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> select dbms_utility.get_cpu_time-:cpu cpu_hsecs, userenv('client_info') from dual;

 CPU_HSECS USERENV('CLIENT_INFO')
---------- ----------------------------------------------------------------
        22 0

<b>a little bit better...</b>




A reader, March 11, 2010 - 10:55 am UTC

Excellent!!!

Alexander, March 15, 2010 - 10:41 am UTC

Tom,

I opened a case with support. It's not going so well at the moment, but I want to share something with you.

You know how you told me to change workarea_size_policy etc. to see if I hit the bug? I stumbled onto this somehow: if I set it after I run the query, it works. Look at this, do you have any idea what the deal is?

(x223kdc:oracle)> sqlplus /

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Mar 15 10:20:47 2010

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> alter session set WORKAREA_SIZE_POLICY=MANUAL;

Session altered.

SQL> alter session set HASH_AREA_SIZE=1048576000;

Session altered.

SQL> alter session set SORT_AREA_SIZE =1048576000;

Session altered.

SQL> set pagesize 9999
SQL> set linesize 160
SQL> set autotrace traceonly statistics
SQL> set timing on
SQL> @nasty_query.sql

63740 rows selected.

Elapsed: 00:31:49.81

Statistics
----------------------------------------------------------
     383094  recursive calls
    1074055  db block gets
    2565255  consistent gets
    8011841  physical reads
          0  redo size
  211856504  bytes sent via SQL*Net to client
   93844975  bytes received via SQL*Net from client
     204384  SQL*Net roundtrips to/from client
          0  sorts (memory)
          6  sorts (disk)
      63740  rows processed

SQL> @nasty_query.sql

63740 rows selected.

Elapsed: 00:32:38.34

Statistics
----------------------------------------------------------
     383089  recursive calls
    1074055  db block gets
    2565255  consistent gets
    8012222  physical reads
          0  redo size
  211856504  bytes sent via SQL*Net to client
   93844975  bytes received via SQL*Net from client
     204384  SQL*Net roundtrips to/from client
          0  sorts (memory)
          6  sorts (disk)
      63740  rows processed

SQL>  alter session set HASH_AREA_SIZE=1048576000;

Session altered.

Elapsed: 00:00:00.00
SQL> alter session set SORT_AREA_SIZE =1048576000;

Session altered.

Elapsed: 00:00:00.00
SQL> @nasty_query.sql

63740 rows selected.

Elapsed: 00:01:06.43

Statistics
----------------------------------------------------------
     381562  recursive calls
          0  db block gets
    2565072  consistent gets
          0  physical reads
          0  redo size
  211856504  bytes sent via SQL*Net to client
   93844975  bytes received via SQL*Net from client
     204384  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
      63740  rows processed




Tom Kyte
March 15, 2010 - 11:36 am UTC

look at the update here:
http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html

When I saw you did it twice, I was reminded of this issue

Alexander, March 15, 2010 - 11:48 am UTC

That says it's in 10.2.0.4 though? Could it be in 10.2.0.3 (apparently)?
Tom Kyte
March 15, 2010 - 11:51 am UTC

it was 10.2.0.3

Alexander, March 15, 2010 - 12:03 pm UTC

Alright, well JL's workaround (just run it twice) seems to work. God bless that man; if I lived to be 1000 I wouldn't be as good as him.

So I have views that are hitting this problem, would it be ok to throw these alter sessions in there?
Tom Kyte
March 15, 2010 - 12:09 pm UTC

as a workaround for a problem to get it "fixed right now" - yes, definitely.


as a "standard operating procedure for everything" - no, definitely not :)

Alexander, March 15, 2010 - 12:44 pm UTC

If you ever wonder why people come to you when they should go to support, this is a good example. You helped me resolve my issue infinitely faster just by taking a few stabs at it. They're still asking me questions I already answered in my SR when I filled it out. They don't even bother to read what you have written in the mandatory boxes.

(Oh and I asked them to escalate it, so yeah I did that.)

Alexander, March 15, 2010 - 1:57 pm UTC

Do I have to use a trigger to do this? I've never attempted to run dynamic SQL in a view before; it doesn't appear to recognize "execute immediate" though.
Tom Kyte
March 15, 2010 - 3:33 pm UTC

you'd have to have it done in the session BEFORE the query began somehow. You would not be doing this in a view, that is for sure.

Alexander, March 15, 2010 - 4:00 pm UTC

Yep, I whipped up a logon trigger

if user = 'blah'
then alter session....

Did a quick test with my id, seems to work.
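A minimal sketch of such a logon trigger (the username and workarea sizes here are placeholders, not the actual values used):

```sql
-- Hypothetical sketch of the logon trigger described above;
-- REPORT_USER and the sizes are assumptions.
create or replace trigger set_manual_workarea
after logon on database
begin
  if user = 'REPORT_USER' then
    execute immediate 'alter session set workarea_size_policy = manual';
    execute immediate 'alter session set hash_area_size = 1048576000';
    execute immediate 'alter session set sort_area_size = 1048576000';
  end if;
end;
/
```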

I should ask, since sort & hash area is before my time with Oracle, can I do any serious damage by over allocating across many sessions? The reporting users are pretty limited, but I'd still like to know.

Thanks again for all your help.
Tom Kyte
March 15, 2010 - 5:09 pm UTC

..can I do any serious damage by over allocating across many sessions? ..

that is why pga aggregate target was introduced, yes. Now - all concurrent sessions can allocate up to your sort/hash area size workareas - for each open query they have. Do the math - are you ok with

N sessions * M open queries * Hash_Area_size

being allocated at once?
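As a back-of-the-envelope check (the session and query counts below are made up for illustration):

```sql
-- 50 sessions, each with 2 open queries, each able to allocate
-- a ~1000 MB hash area: roughly 98 GB of PGA memory in the worst case.
select round(50 * 2 * 1048576000 / 1024 / 1024 / 1024, 1) as potential_gb
  from dual;
```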

Alexander, March 15, 2010 - 5:29 pm UTC

My question is more, what happens when it's over allocated. Will that user when they run out of memory get an error, will the server grind to a halt, etc.
Tom Kyte
March 15, 2010 - 7:12 pm UTC

if you over commit the machine, if n*m*hash area exceeds what the machine can allocate (swap and otherwise), you will get an ora-4030... unable to allocate xx bytes of process memory.

although, the machine will have ground to a virtual halt before then. As it'll tend to be swapping like mad.

SQL Query

Neo, March 20, 2010 - 12:53 pm UTC

Hi Tom,

I have a table with following a column varchar2(100)

I want to write a query to find out which columns have a new line character either at the start or the end.

Thanks in anticipation

Tom Kyte
March 22, 2010 - 8:23 am UTC

where column like chr(10)||'%' or column like '%'||chr(10);

be prepared for, expect, and don't be annoyed at a full scan; it'll be the most efficient way (unless you want to create a function-based index, that is... which you would only do if you are going to do this a lot)



create index foo
on t (
case when substr(column,1,1) = chr(10) or column like '%'||chr(10) then 1 end )
/

and then

where case when substr(column,1,1) = chr(10) or column like '%'||chr(10) then 1 end = 1;


Hmmm.. Why not...

A reader, March 22, 2010 - 8:31 pm UTC


I understand the logic behind the create index syntax.

create index foo
on t (
case when substr(column,1,1) = chr(10) or column like '%'||chr(10) then 1 end )
/

But why not something like the following:

create index foo
on t (
case when column like chr(10)||'%' or column like '%'||chr(10) then 1 end )

and the resulting where clause could be something like:

where case when column like chr(10)||'%' or column like '%'||chr(10) then 1 end = 1;

Also, there is another way to create the same index using a combination of substr & instr for the chr(10) at the end.


Tom Kyte
March 23, 2010 - 1:26 am UTC

why not?

why not indeed. There are possibly an infinite number of ways to code it.

I was trying to show there are many different ways of saying the same thing...

why not

Another reader, March 23, 2010 - 5:17 am UTC

create index foo on t(
( ascii(substr(column,1,1)) - 10) * ( ascii(substr(reverse(column),1,1)) - 10 )
)

select * from t where 0 =
( ascii(substr(column,1,1)) - 10) * ( ascii(substr(reverse(column),1,1)) - 10 )

sql

J.shiva rama krishna, March 26, 2010 - 10:22 am UTC

hi tom,
I want to find out how many a's in my name by using instr() function
Tom Kyte
March 26, 2010 - 3:37 pm UTC

ok, go ahead. It would not be efficient and would be something of a waste of time, but you can try.

I can do it, I can do it in plsql (that is trivial), I can do it in sql...

ops$tkyte%ORA10GR2> select sum( case when instr(upper(name), 'A',1,level)>0 then 1 end)
  2    from (select 'Thomas Kyte' name
  3            from dual)
  4  connect by level <= length(name)
  5  /

SUM(CASEWHENINSTR(UPPER(NAME),'A',1,LEVEL)>0THEN1END)
-----------------------------------------------------
                                                    1

ops$tkyte%ORA10GR2> select sum( case when instr(upper(name), 'A',1,level)>0 then 1 end)
  2    from (select 'Alexander Allen' name
  3            from dual)
  4  connect by level <= length(name)
  5  /

SUM(CASEWHENINSTR(UPPER(NAME),'A',1,LEVEL)>0THEN1END)
-----------------------------------------------------
                                                    3



I'm sure there are many many more, I know of at least a few more than that one sql...
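One of those other ways (shown here as a sketch, not part of the answer above) is to compare string lengths before and after removing the character:

```sql
-- Count 'A's by deleting them and measuring the difference in length;
-- nvl() covers the case where replace() removes every character.
select name,
       nvl(length(name) - length(replace(upper(name), 'A')), length(name)) a_count
  from (select 'Alexander Allen' name from dual);
```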

A reader, March 26, 2010 - 10:35 am UTC

create table tst
(val varchar2(10)
,id1 number
,id2 number);

insert into tst values ('A',1,0);
insert into tst values ('A',0,1);
insert into tst values ('A',0,0);
insert into tst values ('B',0,1);
insert into tst values ('B',0,1);
insert into tst values ('C',0,0);
insert into tst values ('C',0,0);


select * from tst order by val;

VAL ID1 ID2
---------- ---------- ----------
A 1 0
A 0 1
A 0 0
B 0 1
B 0 1
C 0 0
C 0 0

I need as below -

VAL ID1 ID2
---------- ---------- ----------
A 1 1
B 0 1
C 0 0

For example, for VAL = 'A'
Record 1:
ID1 = 1, ID2 = 0
Record 2:
ID1=0, ID2 = 1

if there is 1 of ID1, ID2 in any of the records, the end result should be A 1 1.

Can you please tell me how to get this?

Tom Kyte
March 26, 2010 - 3:39 pm UTC

you don't say what should happen if there is something OTHER than 0 or 1 in there, but assuming that 0 and 1 are the only values


select val, max(id1), max(id2) from table group by val;


pretty easy...
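Against the tst table and rows posted above, that aggregate would look like this, and it produces the result the poster asked for:

```sql
-- Using the table and rows from the post above:
select val, max(id1) id1, max(id2) id2
  from tst
 group by val
 order by val;

-- VAL  ID1  ID2
-- A      1    1
-- B      0    1
-- C      0    0
```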

Reader, March 26, 2010 - 12:50 pm UTC

For the above question, I wrote the query as -

select val,
max(id1),
max(id2)
from tst
group by val;

Is there any better way to do this?
Tom Kyte
March 26, 2010 - 3:44 pm UTC

that is about it, you need to aggregate to get those N rows turned into less than or equal to N rows.

Need help in my query

Aby, April 12, 2010 - 8:06 am UTC

Dear Mr Tom
I need your help. I need to link from this query to RA_CUSTOMER_TRX_ALL for trxnumber and to HZ_PARTIES for the party name. Please help me; I shall wait for your reply.
Regards,
Aby

select
item_code
,sum(in_qty) in_qty
,abs(sum(out_qty)) out_qty
,round(sum(correct_in_value),2) correct_in_value
,abs(round(sum(correct_out_value),2)) correct_out_value
,inventory_item_id
,product_code
,description
,concatenated_segments
,division
,group_of_suppliers
,suppliers
,group_of_franchisee
,franchise
,organization_id
,unit_of_measure
,transaction_date
,transaction_type_name
,transaction_source_id
,transaction_source_type_id
--,transaction_id
, rcv_transaction_id
,source
,transaction_reference
,transaction_type_id
,transfer_organization_id
,source_line_id
from
(
select msi.inventory_item_id
,msi.organization_id
,msi.segment1||'.'||msi.segment2||'.'||msi.segment3 item_code
,msi.attribute1 product_code
,msi.attribute2 description
,mc.concatenated_segments
,mc.category_id
,mc.structure_id
,mic.category_set_id
,mmt.primary_quantity in_qty
,0 out_qty
,nvl(mmt.primary_quantity*mmt.actual_cost - nvl(mmt.variance_amount,0),0) correct_in_value
,0 correct_out_value
,mc.segment1 division
,mc.segment2 group_of_suppliers
,mc.segment3 suppliers
,mc.segment4 group_of_franchisee
,mc.segment5 franchise
,uom.unit_of_measure
,mmt.transaction_date
,mtt.transaction_type_name
,mmt.transaction_source_id
,mmt.transaction_source_type_id
,(mmt.transaction_source_name) source
,(mmt.transaction_reference) transaction_reference
--,mmt.transaction_id
,mmt.rcv_transaction_id
,mmt.transaction_type_id
,transfer_organization_id
,source_line_id
from mtl_system_items_b msi
,mtl_category_sets mcs
,mtl_categories_kfv mc
,mtl_item_categories mic
,mtl_material_transactions mmt
,mtl_transaction_types mtt
,mtl_units_of_measure_vl uom
,mtl_secondary_inventories msub
-- where mmt.organization_id = msi.organization_id
where msi.organization_id = :p_organization_id
and upper(category_set_name) = 'PURCHASING'
and mc.structure_id = mcs.structure_id
and mic.category_set_id = mcs.category_set_id
and mic.category_id = mc.category_id
and mic.inventory_item_id = msi.inventory_item_id
and mic.organization_id = msi.organization_id
and mmt.inventory_item_id = msi.inventory_item_id
and mmt.organization_id = msi.organization_id
and mtt.transaction_type_id = mmt.transaction_type_id
and mmt.primary_quantity > 0
and mmt.transaction_action_id <> 24
and nvl(mmt.logical_transaction,-1) <> 1
and nvl(mmt.owning_tp_type,2) = 2
and msi.primary_uom_code = uom.uom_code
and msub.secondary_inventory_name = mmt.subinventory_code
and msub.organization_id = mmt.organization_id
and ( ( msub.asset_inventory = 1) or msub.asset_inventory = 2) and (msub.quantity_tracked=1)
and uom.language='US'
and msi.segment1='M06'
and msi.segment2='200839'
and msi.segment3 in('A','B')
and mtt.transaction_type_name in('RMA Receipt','Sales order issue')
-- &p_item_where
-- &lp_date
union all
select msi.inventory_item_id
,msi.organization_id
,msi.segment1||'.'||msi.segment2||'.'||msi.segment3 item_code
,msi.attribute1 product_code
,msi.attribute2 description
,mc.concatenated_segments
,mc.category_id
,mc.structure_id
,mic.category_set_id
,0 in_qty
,mmt.primary_quantity
,0 correct_in_value
,nvl(mmt.primary_quantity*mmt.actual_cost - nvl(mmt.variance_amount,0),0) correct_out_value
,mc.segment1 division
,mc.segment2 group_of_suppliers
,mc.segment3 suppliers
,mc.segment4 group_of_franchisee
,mc.segment5 franchise
,uom.unit_of_measure
,mmt.transaction_date
,mtt.transaction_type_name
,mmt.transaction_source_id
,mmt.transaction_source_type_id
,(mmt.transaction_source_name) source
,(mmt.transaction_reference) transaction_reference
-- ,mmt.transaction_id
,mmt.rcv_transaction_id
,mmt.transaction_type_id
,transfer_organization_id
,source_line_id
from mtl_system_items_b msi
,mtl_category_sets mcs
,mtl_categories_kfv mc
,mtl_item_categories mic
,mtl_material_transactions mmt
,mtl_transaction_types mtt
,mtl_units_of_measure_vl uom
,mtl_secondary_inventories msub
-- where mmt.organization_id = msi.organization_id
where msi.organization_id = :p_organization_id
and upper(category_set_name) = 'PURCHASING'
and mc.structure_id = mcs.structure_id
and mic.category_set_id = mcs.category_set_id
and mic.category_id = mc.category_id
and mic.inventory_item_id = msi.inventory_item_id
and mic.organization_id = msi.organization_id
and mmt.inventory_item_id = msi.inventory_item_id
and mmt.organization_id = msi.organization_id
and mtt.transaction_type_id = mmt.transaction_type_id
and mmt.primary_quantity < 0
and mmt.transaction_action_id <> 24
and msi.primary_uom_code = uom.uom_code
and nvl(mmt.logical_transaction,-1) <> 1
and nvl(mmt.owning_tp_type,2) = 2
and msub.secondary_inventory_name = mmt.subinventory_code
and msub.organization_id = mmt.organization_id
and ( ( msub.asset_inventory = 1) or msub.asset_inventory = 2) and (msub.quantity_tracked=1)
and uom.language='US'
and msi.segment1='M06'
and msi.segment2='200839'
and msi.segment3 in('A','B')
and mtt.transaction_type_name in('RMA Receipt','Sales order issue')
--&p_cat_where
--&p_item_where
--&lp_date
union all
select msi.inventory_item_id
,msi.organization_id
,msi.segment1||'.'||msi.segment2||'.'||msi.segment3 item_code
,msi.attribute1 product_code
,msi.attribute2 description
,mc.concatenated_segments
,mc.category_id
,mc.structure_id
,mic.category_set_id
,0 in_qty
,0 out_qty
,nvl(mmt.quantity_adjusted*(mmt.new_cost - mmt.prior_cost),0) correct_in_value
,0 correct_out_value
,mc.segment1 division
,mc.segment2 group_of_suppliers
,mc.segment3 suppliers
,mc.segment4 group_of_franchisee
,mc.segment5 franchise
,uom.unit_of_measure
,mmt.transaction_date
,mtt.transaction_type_name
,mmt.transaction_source_id
,mmt.transaction_source_type_id
,(mmt.transaction_source_name) source
,(mmt.transaction_reference) transaction_reference
-- ,mmt.transaction_id
,mmt.rcv_transaction_id
,mmt.transaction_type_id
,transfer_organization_id
,source_line_id
from mtl_system_items_b msi
,mtl_category_sets mcs
,mtl_categories_kfv mc
,mtl_item_categories mic
,mtl_material_transactions mmt
,mtl_transaction_types mtt
,mtl_units_of_measure_vl uom
,mtl_secondary_inventories msub
-- where mmt.organization_id = msi.organization_id
where msi.organization_id = :p_organization_id
and upper(category_set_name) = 'PURCHASING'
and mc.structure_id = mcs.structure_id
and mic.category_set_id = mcs.category_set_id
and mic.category_id = mc.category_id
and mic.inventory_item_id = msi.inventory_item_id
and mic.organization_id = msi.organization_id
and mmt.inventory_item_id = msi.inventory_item_id
and mmt.organization_id = msi.organization_id
and mtt.transaction_type_id = mmt.transaction_type_id
and mmt.transaction_action_id = 24
and msi.primary_uom_code = uom.uom_code
and uom.language='US'
and nvl(mmt.logical_transaction,-1) <> 1
and nvl(mmt.owning_tp_type,2) = 2
and msub.secondary_inventory_name = mmt.subinventory_code
and msub.organization_id = mmt.organization_id
and ( ( msub.asset_inventory = 1) or msub.asset_inventory = 2) and (msub.quantity_tracked=1)
and msi.segment1='M06'
and msi.segment2='200839'
and msi.segment3 in('A','B')
and mtt.transaction_type_name in('RMA Receipt','Sales order issue')
--&p_cat_where
--&p_item_where
--&lp_date
union all -- other items not transacted in the date range
select msi.inventory_item_id
,msi.organization_id
,msi.segment1||'.'||msi.segment2||'.'||msi.segment3 item_code
,msi.attribute1 product_code
,msi.attribute2 description
,mc.concatenated_segments
,mc.category_id
,mc.structure_id
,mic.category_set_id
,null in_qty
,null out_qty
,null correct_in_value
,null correct_out_value
,mc.segment1 division
,mc.segment2 group_of_suppliers
,mc.segment3 suppliers
,mc.segment4 group_of_franchisee
,mc.segment5 franchise
,uom.unit_of_measure
,null
,null
,null
,null
,null source
,null
-- ,null
,null
,null
,null
,null
from mtl_system_items_b msi
,mtl_category_sets mcs
,mtl_categories_kfv mc
,mtl_item_categories mic
,mtl_material_transactions mmt
,mtl_transaction_types mtt
,mtl_units_of_measure_vl uom
,mtl_secondary_inventories msub
-- where mmt.organization_id = msi.organization_id
where msi.organization_id = :p_organization_id
and upper(category_set_name) = 'PURCHASING'
and mc.structure_id = mcs.structure_id
and mic.category_set_id = mcs.category_set_id
and mic.category_id = mc.category_id
and mic.inventory_item_id = msi.inventory_item_id
and mic.organization_id = msi.organization_id
and mmt.inventory_item_id = msi.inventory_item_id
and mmt.organization_id = msi.organization_id
and mtt.transaction_type_id = mmt.transaction_type_id
and mmt.primary_quantity > 0
and mmt.transaction_action_id <> 24
and nvl(mmt.logical_transaction,-1) <> 1
and nvl(mmt.owning_tp_type,2) = 2
and msi.primary_uom_code = uom.uom_code
and msub.secondary_inventory_name = mmt.subinventory_code
and msub.organization_id = mmt.organization_id
and ( ( msub.asset_inventory = 1) or msub.asset_inventory = 2) and (msub.quantity_tracked=1)
and uom.language='US'
and msi.segment1='M06'
and msi.segment2='200839'
and msi.segment3 in('A','B')
and mtt.transaction_type_name in('RMA Receipt','Sales order issue')
--&p_cat_where
--&p_item_where
/*and not exists
( select 1 from mtl_material_transactions mmtt
where mmtt.inventory_item_id = msi.inventory_item_id
and mmtt.organization_id = msi.organization_id
&lp_date
)
and
exists (select 1 from mtl_material_transactions mmt_open where mmt_open.inventory_item_id = mmt.inventory_item_id
and mmt_open.organization_id = mmt.organization_id and trunc(mmt_open.transaction_date) < (trunc(:p_date_from)-1) having sum(mmt_open.primary_quantity) > 0
)*/
and
exists (select 1 from mtl_material_transactions mmt_open,mtl_secondary_inventories msub
where mmt_open.inventory_item_id = mmt.inventory_item_id
and mmt_open.organization_id = mmt.organization_id
and trunc(mmt_open.transaction_date) between '01-JAN-2010' and '31-JAN-2010'
--(trunc('01-JAN-2010')-1)
and mmt_open.transaction_action_id <> 24
and nvl(mmt_open.logical_transaction,-1) <> 1
and nvl(mmt_open.owning_tp_type,2) = 2
and msub.secondary_inventory_name = mmt.subinventory_code
and msub.organization_id = mmt.organization_id
and ( ( msub.asset_inventory = 1) or msub.asset_inventory = 2) and (msub.quantity_tracked=1)
having sum(mmt_open.primary_quantity) > 0
)
and not exists
( select 1 from mtl_material_transactions mmtt
where mmtt.inventory_item_id = msi.inventory_item_id
and mmtt.organization_id = msi.organization_id
--&lp_date
)
)
having nvl(sum(in_qty)+sum(out_qty),1) <> 0
group by
inventory_item_id
,item_code
,product_code
,description
,concatenated_segments
,concatenated_segments
,division
,group_of_suppliers
,suppliers
,group_of_franchisee
,franchise
,organization_id
,unit_of_measure
,transaction_date
,transaction_type_name
,transaction_source_id
,transaction_source_type_id
--,transaction_id
, rcv_transaction_id
,source
,transaction_reference
,transaction_type_id
,transfer_organization_id
,source_line_id
order by item_code asc

Query is giving "object no longer exists"

Karthi, April 23, 2010 - 1:26 am UTC

Hi Tom,

I queried a table in my DB and it is giving the error “object no longer exists”.

SQL> select * from CUSTOMER;
select * from CUSTOMER
              *
ERROR at line 1:
ORA-08103: object no longer exists


I am able to find the below points.

A. I am able to see the structure of the table using DESC command.
B. I tried checking the count; it comes back fine. I checked the explain plan and it is getting the data from the index.

SQL> select count(*) from CUSTOMER;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FAST FULL SCAN) OF 'CUSTOMER_PK' (UNIQUE) (Co
          st=5 Card=767865)

C. So I tried querying the table using only the indexed column. It returns the data. I checked the explain plan; it is doing an index scan.


SQL> select CUSTOMER_KEY from CUSTOMER;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=767865 Bytes=
          5375055)

   1    0   INDEX (FAST FULL SCAN) OF 'CUSTOMER_PK' (UNIQUE) (Cost
          =5 Card=767865 Bytes=5375055)

D.  I have checked the table in the all_objects table. It is available. 

SQL> select object_name,Object_type from all_objects
  2  where object_name = 'CUSTOMER'
  3  and object_type = 'TABLE';

OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------
CUSTOMER                   TABLE


My Question:

1. Could you please explain in which situations we face this kind of issue?
2. Has the table crashed, or have memory blocks been corrupted? Please explain how this happens and how to rectify it.

Could you guide me on this.

Regards,
Karthi 


Tom Kyte
April 23, 2010 - 9:06 am UTC

this should not happen, it is not "normal", please contact support. It looks like a failed drop - the thing is partially 'dropped'
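For context, one well-known way to hit ORA-08103 (a hypothetical scenario, not necessarily Karthi's confirmed cause) is a truncate in one session while another session is mid-scan:

```sql
-- Hypothetical reproduction, not necessarily what happened here:
-- Session 1: start a long-running full scan of a large table
select /*+ full(t) */ count(*) from big_table t;  -- still running...

-- Session 2: truncate the same table while session 1 is scanning
truncate table big_table;

-- Session 1 may then fail with ORA-08103: object no longer exists,
-- because truncate gives the segment a new data_object_id, so blocks
-- read later no longer match the object version the query opened.
```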

Retrieve Multiple data rows from the bills

Jabeer, April 26, 2010 - 11:38 am UTC

Tom,
I have the following tables and data structure

Table 1

ITEM ITEM_ID ORGNIZATION_ID sequence_id
===================================================
A 133 379 1
B 129 379 2

Table 2

ITEM ITEM_ID ORGNIZATION_ID sequence_id
===================================================
B 129 379 1
C 382 379 1
D 511 379 1
E 586 379 1
AA 383 379 2
AB 576 379 2

ITEMS

ITEM ITEM_ID ORGNIZATION_ID
=================================
A 133 379
B 129 379
C 382 379
D 511 379
E 576 379
AA 383 379
AB 576 379

Using the following query I get the results below:
SELECT it12.item TOPITEM,t1.item_id TOPITEMID,t2.sequence_id,
it11.item lowitem, t2.item_id lowitemid,
bbm.organization_id org_id
FROM table1 t1,
table1 t2,
items it1,
items it12
WHERE t1.bill_sequence_id = t2.bill_sequence_id
AND t2.component_item_id = it1.inventory_item_id
AND t1.organization_id = it1.organization_id
AND t1.assembly_item_id = it12.inventory_item_id
AND t1.organization_id = it12.organization_id
AND t1.organization_id = 379
and t1.assembly_item_id=133

TOPITEM TOPITEMID SEQUENCE_ID LOWITEM LOWERITEM_ID ORG_ID
---------------------------------------------------------
A 133 1 B 129 379
A 133 1 C 382 379
A 133 1 D 511 379
A 133 1 E 586 379

Using the following query I get the following results:
SELECT it12.item TOPITEM,t1.item_id TOPITEMID,t2.sequence_id,
it11.item lowitem, t2.item_id lowitemid,
bbm.organization_id org_id
FROM table1 t1,
table1 t2,
items it1,
items it12
WHERE t1.bill_sequence_id = t2.bill_sequence_id
AND t2.component_item_id = it1.inventory_item_id
AND t1.organization_id = it1.organization_id
AND t1.assembly_item_id = it12.inventory_item_id
AND t1.organization_id = it12.organization_id
AND t1.organization_id = 379
and t1.assembly_item_id=129

TOPITEM TOPITEMID SEQUENCE_ID LOWITEM LOWERITEM_ID ORG_ID
---------------------------------------------------------
B 129 2 AA 383 379
B 129 2 AB 576 379

But I should get all the results in a single query: when I send the top item as 133 it should recursively find all the items and sub/low items (each sub/low item can itself act as a top item with its own sub/low items).

TOPITEM TOPITEMID SEQUENCE_ID LOWITEM LOWERITEM_ID ORG_ID
---------------------------------------------------------
A 133 1 B 129 379
A 133 1 C 382 379
A 133 1 D 511 379
A 133 1 E 586 379
B 129 2 AA 383 379
B 129 2 AB 576 379


Tom Kyte
April 26, 2010 - 11:45 am UTC

hmm, when I run those queries, all I get is:

ops$tkyte%ORA11GR2> SELECT  it12.item TOPITEM,t1.item_id TOPITEMID,t2.sequence_id,
  2          it11.item lowitem, t2.item_id lowitemid,
  3          bbm.organization_id org_id
  4      FROM  table1 t1,
  5            table1 t2,
  6            items it1,
  7            items it12
  8      WHERE t1.bill_sequence_id = t2.bill_sequence_id
  9        AND t2.component_item_id = it1.inventory_item_id
 10        AND t1.organization_id = it1.organization_id
 11        AND t1.assembly_item_id = it12.inventory_item_id
 12        AND t1.organization_id = it12.organization_id
 13        AND t1.organization_id = 379
 14        and t1.assembly_item_id=133
 15  /
          items it12
          *
ERROR at line 7:
ORA-00942: table or view does not exist


is anyone else able to run it? I must be missing *something*
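Setting the unrunnable SQL aside, the recursive expansion Jabeer describes (each low item can itself be a top item) is easy to sketch outside the database. The `bom` mapping and `explode` helper below are hypothetical stand-ins for the real tables, just to illustrate the traversal:

```python
# Hypothetical sketch of the recursive bill-of-material expansion:
# given parent -> children rows, walk down from a top item and emit
# every parent/child pair at any depth.
def explode(bom, top):
    out, queue = [], [top]
    while queue:
        parent = queue.pop(0)
        for child in bom.get(parent, []):
            out.append((parent, child))
            queue.append(child)   # the child may itself be a parent
    return out

# Parent/child data from the post: A has B, C, D, E; B has AA, AB.
bom = {"A": ["B", "C", "D", "E"], "B": ["AA", "AB"]}
rows = explode(bom, "A")
```

In Oracle, this shape is what a hierarchical query computes over the parent/child rows, e.g. `CONNECT BY PRIOR component_item_id = assembly_item_id`, or a recursive `WITH` clause in 11gR2 and later.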

Table Structure

BillC, April 27, 2010 - 8:35 am UTC

I can't run it at all. The table structures don't indicate an inventory_item_id or an assembly_item_id. There is more to these tables than he lets on.


why changes insert the execution plan?

A reader, April 27, 2010 - 8:51 am UTC

Hi Tom,

I'm running on 11gR1. I've a complex query including some inline views and union all's. The query is running fine, the optimizer does a fast final hash join.

When I add a simple "insert into as [exactly the same query]" the execution plan is changed. The optimizer changes the order of the execution blocks and does a slow final filter operation.

It's an ordinary heap organized table with 3 non-unique indexes I'm inserting into. No constraints, no triggers.

Question: I can't understand why the optimizer changes the whole execution plan in such a case. The query is still the same. Can you shed some light on such things?

Thanks,
Michael

Extract from EBS

A reader, April 29, 2010 - 11:40 am UTC

I am doing a GL extract for which I have written a SQL query. My reconciliation of balance amounts shows correct values for one month, say 'APR-09', but when I verify other months the balance amounts do not match.
Any help?
My query:
------------
SELECT GCC.SEGMENT1 COMPANY
, GCC.SEGMENT2 LOCATION
, GCC.SEGMENT3 DEPARTMENT
, GCC.SEGMENT4 ACCOUNT
, GCC.SEGMENT5 PRODUCT
, GCC.SEGMENT6 FUTURE
, FFVT.DESCRIPTION ACCOUNT_NAME
, GB.CURRENCY_CODE CURRENCY
, (((GB.BEGIN_BALANCE_CR + GB.PERIOD_NET_CR) - (GB.BEGIN_BALANCE_DR + GB.PERIOD_NET_DR))*-1)
BALANCE
, GB.PERIOD_NUM PERIOD
, GB.PERIOD_YEAR YEAR
FROM GL_BALANCES GB
, GL_CODE_COMBINATIONS GCC
, GL_CODE_COMBINATIONS_KFV GCCK
, FND_ID_FLEXS FIF
, FND_ID_FLEX_SEGMENTS FIFS
, FND_FLEX_VALUE_SETS FFVS
, FND_FLEX_VALUES_TL FFVT
, FND_FLEX_VALUES FFV
WHERE GCC.CODE_COMBINATION_ID = GB.CODE_COMBINATION_ID
AND GB.CODE_COMBINATION_ID = GCCK.CODE_COMBINATION_ID
AND GCCK.CONCATENATED_SEGMENTS LIKE 'xxx.%.xxxx.%.%.%'
AND GB.PERIOD_NUM = 'XX' --'APR-09'
AND GB.PERIOD_YEAR = 'XXXX'
AND GCCK.SEGMENT4 = FFV.FLEX_VALUE
AND FIF.ID_FLEX_NAME = 'Accounting Flexfield'
AND FIFS.ID_FLEX_CODE = FIF.ID_FLEX_CODE
AND FIFS.APPLICATION_ID = FIF.APPLICATION_ID
AND FIFS.FLEX_VALUE_SET_ID = FFVS.FLEX_VALUE_SET_ID
AND FFVS.FLEX_VALUE_SET_ID = FFV.FLEX_VALUE_SET_ID
AND FFV.FLEX_VALUE_ID = FFVT.FLEX_VALUE_ID
AND GB.ACTUAL_FLAG = 'X'
AND GB.TEMPLATE_ID IS NULL
GROUP BY GCC.SEGMENT1, GCC.SEGMENT2, GCC.SEGMENT3, GCC.SEGMENT4, GCC.SEGMENT5, GCC.SEGMENT6, FFVT.DESCRIPTION, GB.Currency_code, (((GB.BEGIN_BALANCE_CR + GB.PERIOD_NET_CR) - (GB.BEGIN_BALANCE_DR + GB.PERIOD_NET_DR))*-1) , GB.PERIOD_NUM, GB.PERIOD_YEAR, GCCK.CONCATENATED_SEGMENTS
-- HAVING SUM( NVL(GB.PERIOD_NET_DR,0) - NVL(GB.PERIOD_NET_CR,0)) <> 0
HAVING COUNT(GCCK.CONCATENATED_SEGMENTS) > = 1
ORDER BY 1,2,3,4,5,6,7,8,9;



A reader, May 13, 2010 - 12:09 am UTC

My Question :-
===================================================
Table1

Date Inv No. Sr.No.

01/04/2009 1 1
01/04/2009 2 2
03/04/2009 3 1
04/04/2009 4 1
04/04/2009 5 2

table2
01/04/2009 1 1
01/04/2009 1 1
01/04/2009 1 1
01/04/2009 2 2
01/04/2009 2 2
01/04/2009 2 2
03/04/2009 3 1
04/04/2009 4 1
04/04/2009 4 1
04/04/2009 5 2


I want a SQL query or PL/SQL block to update the Ser_No field automatically as shown in the above tables. Ser_No is not entered manually; it should be updated for a selected date range, i.e. a from date and a to date.

...

The Script...
create table t1 (inv_date date,inv_no number,ser_no number)
/
create table t2 (inv_date date,inv_no number,ser_no number)
/
insert into t1 values (to_date('01/04/2009','DD/MM/YYYY'),1,1)
/
insert into t1 values (to_date('01/04/2009','DD/MM/YYYY'),2,2)
/
insert into t1 values (to_date('03/04/2009','DD/MM/YYYY'),3,1)
/
insert into t1 values (to_date('04/04/2009','DD/MM/YYYY'),4,1)
/
insert into t1 values (to_date('04/04/2009','DD/MM/YYYY'),5,2)
/
insert into t2 values (to_date('01/04/2009','DD/MM/YYYY'),1,1)
/
insert into t2 values (to_date('01/04/2009','DD/MM/YYYY'),1,1)
/
insert into t2 values (to_date('01/04/2009','DD/MM/YYYY'),1,1)
/
insert into t2 values (to_date('01/04/2009','DD/MM/YYYY'),2,2)
/
insert into t2 values (to_date('01/04/2009','DD/MM/YYYY'),2,2)
/
insert into t2 values (to_date('01/04/2009','DD/MM/YYYY'),2,2)
/
insert into t2 values (to_date('03/04/2009','DD/MM/YYYY'),3,1)
/
insert into t2 values (to_date('04/04/2009','DD/MM/YYYY'),4,1)
/
insert into t2 values (to_date('04/04/2009','DD/MM/YYYY'),4,1)
/
insert into t2 values (to_date('04/04/2009','DD/MM/YYYY'),5,2)
/
commit;
/

SQL> select * from t1;

INV_DATE      INV_NO     SER_NO
--------- ---------- ----------
01-APR-09          1          1
01-APR-09          2          2
03-APR-09          3          1
04-APR-09          4          1
04-APR-09          5          2

SQL> select * from t2;

INV_DATE      INV_NO     SER_NO
--------- ---------- ----------
01-APR-09          1          1
01-APR-09          1          1
01-APR-09          1          1
01-APR-09          2          2
01-APR-09          2          2
01-APR-09          2          2
03-APR-09          3          1
04-APR-09          4          1
04-APR-09          4          1
04-APR-09          5          2

...
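Judging from the sample data, Ser_No restarts at 1 for each inv_date, numbering the distinct invoices in inv_no order within that date. A small Python sketch of that rule (the function name and tuple layout are made up for illustration):

```python
from collections import defaultdict

# Hypothetical sketch: Ser_No restarts at 1 per inv_date, numbering the
# distinct invoice numbers of that date in ascending inv_no order.
def assign_ser_no(rows):
    """rows: iterable of (inv_date, inv_no) tuples; duplicates collapse."""
    by_date = defaultdict(set)
    for inv_date, inv_no in rows:
        by_date[inv_date].add(inv_no)
    ser = {}
    for inv_date, invs in by_date.items():
        for n, inv_no in enumerate(sorted(invs), start=1):
            ser[(inv_date, inv_no)] = n
    return ser

t1 = [("2009-04-01", 1), ("2009-04-01", 2),
      ("2009-04-03", 3), ("2009-04-04", 4), ("2009-04-04", 5)]
ser = assign_ser_no(t1)
```

In Oracle, the same numbering falls out of `ROW_NUMBER() OVER (PARTITION BY inv_date ORDER BY inv_no)` (or `DENSE_RANK` for t2's duplicated rows), which could drive an UPDATE or MERGE over the chosen date range.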

Query with NVL

Ashok, May 17, 2010 - 3:45 pm UTC

Tom
The following query causes a PL/SQL program to hang. The program runs daily, but this happens only once or twice in six months. The wait event during that time shows "kksfbs child completion".

In the following query, do you see any problem if p_f_name was null and has been changed to 'UNKNOWN' in the program?

I guess my questions are:
1. Is UNKNOWN a keyword in Oracle? I have gone through all the documentation but don't see it.
2. If I changed this query to (f_name = p_f_name or p_f_name is null), would that be a better resolution?



SELECT DISTINCT dur_patient_id
INTO v_dur_patient_id
FROM Table_name
WHERE c_key = p_ckey
AND member_id = p_memid
AND ((f_name = p_f_name AND l_name = p_l_name) OR date_of_birth = p_dob)
AND add_date = (SELECT MAX(add_date)
FROM table_name e
WHERE e.c_key = p_ckey
AND e.member_id = p_memid
AND ((e.f_name = p_f_name AND e.l_name = p_l_name)
OR e.date_of_birth = p_dob)
);
Tom Kyte
May 24, 2010 - 9:07 am UTC

why would it matter if p_f_name was assigned the value 'UNKNOWN' (or 'SELECT' or 'FROM' or anything at all)?????




this question has a subject "query with NVL", and yet - i don't see NVL in it anywhere?? so, I'm confused I guess....


but in any case, if you have a query stuck in that wait for a measurable period of time, you have an issue - one that you need to work with support on, it should not be happening.

You might find this query to be "better"
select dur_patient_id
  from (select dur_patient_id
          from table_name
         where c_key = :p_ckey
           and member_id = :p_memid
           and ((f_name = :p_f_name and l_name = :p_l_name) or date_of_birth=:p_dob)
          order by add_date DESC)
 where rownum = 1;


from a performance perspective.
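The ORDER BY add_date DESC / ROWNUM = 1 rewrite is a "sort once, keep the first" pattern, instead of re-scanning the table with a correlated MAX() subquery. A Python sketch of the same logic, with made-up data and names:

```python
# Illustrative sketch (hypothetical data): filter the candidate rows once,
# then keep the single row with the highest add_date.
def latest_patient(rows, f_name, l_name, dob):
    matches = [r for r in rows
               if (r["f_name"] == f_name and r["l_name"] == l_name)
               or r["date_of_birth"] == dob]
    if not matches:
        return None
    return max(matches, key=lambda r: r["add_date"])["dur_patient_id"]

patients = [
    {"f_name": "JO", "l_name": "SMITH", "date_of_birth": "1970-01-01",
     "add_date": 2, "dur_patient_id": 10},
    {"f_name": "JO", "l_name": "SMITH", "date_of_birth": "1970-01-01",
     "add_date": 5, "dur_patient_id": 11},
]
```

One pass over the candidates replaces the two passes the original SELECT DISTINCT plus MAX() subquery required.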


Create group for similar records

DC, May 28, 2010 - 8:46 am UTC

Hi Tom,
I have a table, say t1 (create table t1 (c1 number, c2 number)). The column C1 hold a set of persons who are similar to C2 but their names are
a bit different, say for example a person whose id is stored in column c1 is D.Chakraborty, and the id which is stored in C2 is for Dibyendu
Chakraborty. Those are similar. So I need to put them in the same group, then based on certain conditions we will take one.
So, I want to create groups who are similar.

The insert script for sample data are as below:

insert into t1 values(120, 110);
insert into t1 values(130, 110);
insert into t1 values(130, 170);
insert into t1 values(170, 180);
insert into t1 values(220, 210);
insert into t1 values(220, 200);
insert into t1 values(220, 190);
insert into t1 values(230, 220);
insert into t1 values(280, 270);
insert into t1 values(280, 260);
insert into t1 values(280, 250);

So, the records in the target table (create table t2(groupid number, c3 number)) will be like:

groupid c3

------------------------------------------
1 120
1 110
1 130
1 170
1 180
2 220
2 210
2 200
2 190
2 230
3 280
3 270
3 260
3 250


I wrote a pl/sql block, inside that I wrote a hierarchical query to get all the similar values, but that is taking long time to execute.

The original table number of records are around 50K.

Please let me know how can we do that.
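DC's grouping is the classic connected-components problem: each (c1, c2) row is an edge saying the two ids belong to the same group. A hypothetical union-find sketch in Python (not the hierarchical query from the post) shows the idea, and handles 50K rows in near-linear time:

```python
# Union-find (disjoint sets) over the (c1, c2) edges: ids connected
# through any chain of rows end up in one group.
def group_ids(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)           # union the two components

    groups = {}
    for x in list(parent):
        groups.setdefault(find(x), set()).add(x)
    return sorted(groups.values(), key=min)

# Sample data from the post.
pairs = [(120, 110), (130, 110), (130, 170), (170, 180),
         (220, 210), (220, 200), (220, 190), (230, 220),
         (280, 270), (280, 260), (280, 250)]
groups = group_ids(pairs)
```

Reading the 50K rows once into such a structure, then writing (groupid, c3) back out, avoids the repeated tree walks that make a hierarchical query slow here.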

What is wrong in creating table like this??

aditi jain, June 13, 2010 - 6:25 am UTC

create table emp3(
ename varchar2(20) NOTNULL, eno number(5), salary number(5,2) sat_c CHECK(salary>1000),
deptno number(4),
address varchar2(15) NOTNULL,
dname varchar2(10) NOTNULL,
PRIMARY KEY(eno),
FOREIGN KEY(deptno)REFERENCES dept(did));
Tom Kyte
June 22, 2010 - 7:55 am UTC

notnull should be NOT NULL


sat_c, not sure what sat_c is or why it is there, it is wrong.

ops$tkyte%ORA11GR2> create table emp3(
  2   ename varchar2(20) NOT NULL, eno number(5), salary number(5,2)
  3   CHECK(salary>1000),
  4    deptno number(4),
  5    address varchar2(15) NOT NULL,
  6    dname varchar2(10) NOT NULL,
  7    PRIMARY KEY(eno),
  8    FOREIGN KEY(deptno)REFERENCES dept(did));

Table created.


works.

Suppressing SQL

A Reader, July 08, 2010 - 7:00 am UTC

Tom,
Thanks for your time.

I can see the following SQL is being executed by application.

select * from T;


Is there any way this SQL can be intercepted in backend and a filter
is added to this query.
something like....


where 1=2


i.e. Actual sql processed would be


select * from T where 1=2;


So that no rows are returned to the program that issued the above SQL.
(Mainly I want to add a predicate which is FALSE, hence no result set
is returned.)

Tom basically here :

I am not able to find which piece of code is issuing
( sorry about this... I don't know my data/application .. :( )

select * from T;


This statement is bad as there is no predicate. and table has 1 million rows.

So I just want to suppress it somehow.


your comments.
regards

Tom Kyte
July 08, 2010 - 12:49 pm UTC

there would be fine grained access control - but that would affect EVERY query against table T, not just this one.


A more gut response from me is however:

are you out of your mind? If some application - that is part of some system - that was designed to do something by someone - 'needs' that data, cutting it off would by definition BREAK it. Something, somewhere thinks it needs this data - just not sending it would be a really bad idea.


Perhaps you could put a reasonable resource profile in place? Limit the number of logical reads by a session or something - if this session does above and beyond the norm?


that would cause it to FAIL with "noise", not silently fail with "no data"

A reader, July 23, 2010 - 10:54 am UTC

Hi Tom,


BEGIN

FOR X IN (
SELECT ORDERID,
A.PTT_ACCOUNTNO UAN,
A.REFCLI,
A.RESELLER_ACC_NO RESELLER_ACC_NO,
A.CREATED
FROM LR_PROV_ORDERHEADER A,
LR00_OPS_LRACCMASTER_CEASED B
WHERE A.PTT_ACCOUNTNO = B.PTT_ACCNO
AND A.REFCLI = B.REF_CLI
AND A.RESELLER_ACC_NO = B.RESELLER_ACC_NO
AND NVL(B.EXPIRATION_DATE,B.CREATIONDATE) < SYSDATE - 1095
AND A.CREATED < SYSDATE - 1095
)

LOOP
SELECT COUNT(*) INTO V_COUNT
FROM LR00_OPS_LINERENTALACCMASTER LACC
WHERE X.UAN = LACC.PTT_ACCNO
AND X.RESELLER_ACC_NO = LACC.RESELLER_ACC_NO
AND X.REFCLI = LACC.REF_CLI;

SELECT COUNT(*) INTO V_EXIST
FROM TEMP_CEASED_ARCHIVE TCA
WHERE TCA.ORDERID = X.ORDERID;

IF V_COUNT = 0 AND V_EXIST = 0 THEN
INSERT INTO TEMP_CEASED_ARCHIVE
VALUES (X.ORDERID,X.UAN,X.REFCLI,X.CREATED,X.RESELLER_ACC_NO);
END IF;
END LOOP;



Could you please advise alternative SQL for the above FOR loop ? Can we do it effieciently using MERGE ? thanks
Tom Kyte
July 23, 2010 - 12:22 pm UTC

why use merge? you only insert, you never update, never delete. insert as select is more than sufficient.



insert into temp_ceased_archive
( orderid, uan, refcli, created, reseller_acc_no )
SELECT ORDERID,
               A.PTT_ACCOUNTNO UAN,
               A.REFCLI,
               A.CREATED, 
               A.RESELLER_ACC_NO RESELLER_ACC_NO
        FROM   LR_PROV_ORDERHEADER A, 
               LR00_OPS_LRACCMASTER_CEASED B 
        WHERE  A.PTT_ACCOUNTNO = B.PTT_ACCNO 
        AND    A.REFCLI = B.REF_CLI 
        AND    A.RESELLER_ACC_NO = B.RESELLER_ACC_NO 
        AND    NVL(B.EXPIRATION_DATE,B.CREATIONDATE) < SYSDATE - 1095 
        AND    A.CREATED < SYSDATE - 1095
        and not exists (select null FROM LR00_OPS_LINERENTALACCMASTER LACC 
            WHERE a.UAN = LACC.PTT_ACCNO 
            AND a.RESELLER_ACC_NO = LACC.RESELLER_ACC_NO 
            AND a.REFCLI = LACC.REF_CLI)
        and not exists (select null  FROM TEMP_CEASED_ARCHIVE TCA 
            WHERE TCA.ORDERID = a.ORDERID );


Counting 2 hrs Interval data's

Rajeshwaran, Jeyabal, July 27, 2010 - 6:17 am UTC

Hi Tom:

I have some data like this.

create table t(x timestamp);

insert into T values (systimestamp );
insert into t values (systimestamp - interval '1' hour);
insert into t values (systimestamp - interval '2' hour);
insert into T values (systimestamp - interval '3' hour);
insert into t values (systimestamp - interval '4' hour);
insert into T values (systimestamp - interval '5' hour);
insert into t values (systimestamp - interval '6' hour);
insert into T values (systimestamp - interval '7' hour);
insert into T values (systimestamp - interval '8' hour);
insert into T values (systimestamp - interval '9' hour);
insert into T values (systimestamp - interval '10' hour);
insert into T values (systimestamp - interval '11' hour);
insert into T values (systimestamp - interval '12' hour);
insert into T values (systimestamp - interval '13' hour);
insert into T values (systimestamp - interval '14' hour);
insert into T values (systimestamp - interval '15' hour);
insert into T values (systimestamp - interval '16' hour);
insert into T values (systimestamp - interval '17' hour);
insert into T values (systimestamp - interval '18' hour);
insert into T values (systimestamp - interval '19' hour);
insert into T values (systimestamp - interval '20' hour);
insert into T values (systimestamp - interval '21' hour);
insert into T values (systimestamp - interval '22' hour);
insert into T values (systimestamp - interval '23' hour);
commit;

If I want to count the data for every hour, I can use the query below.

select TO_CHAR(X,'hh24') as hrs,COUNT(*)
from t
group by TO_CHAR(X,'hh24');

But if I need to group the data into two-hour intervals, how can that be done? Can you please help me?
Tom Kyte
July 27, 2010 - 12:25 pm UTC

a little math.


I would say however that your approach above is flawed, you lose the days, you should use trunc itself.


...
ops$tkyte%ORA10GR2> insert into t select * from t where rownum <= 10;

10 rows created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select trunc( x, 'hh' ), count(*)
  2    from t
  3   group by trunc( x, 'hh' )
  4   order by trunc( x, 'hh' )
  5  /

TRUNC(X,'HH')          COUNT(*)
-------------------- ----------
26-jul-2010 14:00:00          1
26-jul-2010 15:00:00          1
26-jul-2010 16:00:00          1
26-jul-2010 17:00:00          1
26-jul-2010 18:00:00          1
26-jul-2010 19:00:00          1
26-jul-2010 20:00:00          1
26-jul-2010 21:00:00          1
26-jul-2010 22:00:00          1
26-jul-2010 23:00:00          1
27-jul-2010 00:00:00          1
27-jul-2010 01:00:00          1
27-jul-2010 02:00:00          1
27-jul-2010 03:00:00          1
27-jul-2010 04:00:00          2
27-jul-2010 05:00:00          2
27-jul-2010 06:00:00          2
27-jul-2010 07:00:00          2
27-jul-2010 08:00:00          2
27-jul-2010 09:00:00          2
27-jul-2010 10:00:00          2
27-jul-2010 11:00:00          2
27-jul-2010 12:00:00          2
27-jul-2010 13:00:00          2

24 rows selected.

ops$tkyte%ORA10GR2> select trunc( x, 'hh' ) - (case when mod(to_number(to_char(x,'hh24')),2) = 0 then interval '1' hour else interval '0' hour end) , count(*)
  2    from t
  3   group by trunc( x, 'hh' ) - (case when mod(to_number(to_char(x,'hh24')),2) = 0 then interval '1' hour else interval '0' hour end)
  4   order by trunc( x, 'hh' ) - (case when mod(to_number(to_char(x,'hh24')),2) = 0 then interval '1' hour else interval '0' hour end)
  5  /

TRUNC(X,'HH')-(CASEW   COUNT(*)
-------------------- ----------
26-jul-2010 13:00:00          1
26-jul-2010 15:00:00          2
26-jul-2010 17:00:00          2
26-jul-2010 19:00:00          2
26-jul-2010 21:00:00          2
26-jul-2010 23:00:00          2
27-jul-2010 01:00:00          2
27-jul-2010 03:00:00          3
27-jul-2010 05:00:00          4
27-jul-2010 07:00:00          4
27-jul-2010 09:00:00          4
27-jul-2010 11:00:00          4
27-jul-2010 13:00:00          2

13 rows selected.
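The "little math" above can be sketched procedurally: truncate to the hour, then pull even hours back one hour so every bucket starts on an odd hour, matching the 13:00, 15:00, ... buckets in Tom's output. An illustrative Python version (the timestamps are made up):

```python
from collections import Counter
from datetime import datetime, timedelta

# Truncate to the hour; even hours join the preceding odd hour's bucket,
# so each bucket covers a two-hour window starting on an odd hour.
def two_hour_bucket(ts):
    t = ts.replace(minute=0, second=0, microsecond=0)
    if t.hour % 2 == 0:
        t -= timedelta(hours=1)
    return t

# 24 hourly timestamps, mimicking the systimestamp - interval inserts.
stamps = [datetime(2010, 7, 27, 13, 5) - timedelta(hours=h) for h in range(24)]
counts = Counter(two_hour_bucket(t) for t in stamps)
```

The two partial buckets at each end hold one row and the rest hold two, just as in the 13-row result above.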

Recursive SQL

Ramchandra Joshi, August 11, 2010 - 8:22 am UTC

Hi Tom,

I'm struggling with my approach :

Consider :

SQL> create table test_credit(code varchar2(10),amount integer );

Table created

SQL> insert into test_credit values ('CR',1);

1 row inserted

SQL> insert into test_credit values ('CR',2);

1 row inserted

SQL> insert into test_credit values ('CR',3);

1 row inserted

SQL> insert into test_credit values ('CR',4);

1 row inserted

SQL> select * from test_credit;

CODE                                        AMOUNT
---------- ---------------------------------------
CR                                               1
CR                                               2
CR                                               3
CR                                               4

Now what I need is some way in which I can get all the possible combinations of the amount .

So those would be :
1
1+2
1+3
1+4
2+3
2+4
3+4
1+2+3
1+2+4
1+3+4
2+3+4
1+2+3+4

Basically I'm looking for combinations without repeats (1+2 is the same as 2+1).

I know I have to use recursive SQL + analytics, but somehow I am not getting the right idea for it.

Can you please suggest some starting guidelines so that I can take it further?

I even wrote a PL/SQL routine which generates all possible combinations, but when there are (say) 200 records in the table the possible 5-element combinations alone number 200C5 = 2,535,650,040!

And that would blow up my memory :( , hence I am not going with the PL/SQL approach.

I am just wondering if it is possible in SQL at all?

Thanks in advance,
Ram.
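For illustration, the order-insensitive subsets Ram wants are exactly what `itertools.combinations` enumerates; 2+1 never appears because each subset is emitted only once:

```python
from itertools import combinations
from math import comb

# All non-empty subsets of the amounts, each emitted once regardless
# of element order - 2^n - 1 subsets in total.
amounts = [1, 2, 3, 4]
subsets = [c for r in range(1, len(amounts) + 1)
             for c in combinations(amounts, r)]
```

The count is the real obstacle: 2^n - 1 subsets overall, and even the 5-element subsets of 200 rows alone number C(200,5) = 2,535,650,040, so any approach (SQL or PL/SQL) has to cap the subset size or the row count. One common SQL sketch enumerates the numbers 1 to 2^n - 1 and picks the elements whose bit is set, but it blows up just as fast.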

if record exist update not exist insert new record

dinesh kakani, September 04, 2010 - 5:40 am UTC

Hi Tom,
Consider the following table:

srno sname status
1 xyz n
2 abc n
3 pqr n
4 def n
5 stu n

In my application I get three rows/values to be inserted in to table, i.e.

sname
xyz
abc
mno

I need to check if they already exist; if yes I need to update their status to 'Y', else if there is no such record I need to insert it into the table.

After running such a query I should get an output as below:

srno sname status
1 xyz Y
2 abc Y
3 pqr N
4 def N
5 stu N
6 mno N -- New Record


Thanks

Dinesh
Tom Kyte
September 09, 2010 - 7:33 pm UTC

I could have shown you how to use MERGE to do this, but unfortunately - I don't have your table with the original data... And I don't create them myself, I expect you to provide the create tables and inserts - just like I DO for you in my examples.....

so, read about merge, it does this.

good

kishore, September 04, 2010 - 10:49 pm UTC


DATE FORMAT - quarter

Nazmul Hoque, September 05, 2010 - 5:35 am UTC

Hi Tom,

I have some date-wise data; I can show it monthly by using to_char with 'yyyy-mm':

2010-01 2010-02
101 300
320 200

I need to show it like:

2010(01-15/01) 2010(16-30/01) 2010(01-15/02) 2010(16-28/02)
51 50 200 100
120 200 150 50

Please advise.
Tom Kyte
September 09, 2010 - 7:44 pm UTC

ops$tkyte%ORA11GR2> select dt,
  2         to_char(dt,'yyyy-mm'),
  3         case when to_char(dt,'dd')<='15'
  4                  then to_char(dt,'yyyy"(01-15/"mm")"')
  5                  else to_char(dt,'yyyy"(16-"') ||
  6                               to_char(last_day(dt),'dd"/"') ||
  7                                   to_char(dt,'mm")"')
  8                  end your_fmt
  9   from (select to_date( ceil(level/2), 'mm' ) + decode( mod(level,2), 0, 1, 20 ) dt
 10           from dual
 11                  connect by level <= 24)
 12   order by dt
 13  /

DT        TO_CHAR YOUR_FMT
--------- ------- --------------
02-JAN-10 2010-01 2010(01-15/01)
21-JAN-10 2010-01 2010(16-31/01)
02-FEB-10 2010-02 2010(01-15/02)
21-FEB-10 2010-02 2010(16-28/02)
02-MAR-10 2010-03 2010(01-15/03)
21-MAR-10 2010-03 2010(16-31/03)
02-APR-10 2010-04 2010(01-15/04)
21-APR-10 2010-04 2010(16-30/04)
02-MAY-10 2010-05 2010(01-15/05)
21-MAY-10 2010-05 2010(16-31/05)
02-JUN-10 2010-06 2010(01-15/06)
21-JUN-10 2010-06 2010(16-30/06)
02-JUL-10 2010-07 2010(01-15/07)
21-JUL-10 2010-07 2010(16-31/07)
02-AUG-10 2010-08 2010(01-15/08)
21-AUG-10 2010-08 2010(16-31/08)
02-SEP-10 2010-09 2010(01-15/09)
21-SEP-10 2010-09 2010(16-30/09)
02-OCT-10 2010-10 2010(01-15/10)
21-OCT-10 2010-10 2010(16-31/10)
02-NOV-10 2010-11 2010(01-15/11)
21-NOV-10 2010-11 2010(16-30/11)
02-DEC-10 2010-12 2010(01-15/12)
21-DEC-10 2010-12 2010(16-31/12)

24 rows selected.
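The half-month labelling Tom builds with TO_CHAR can also be sketched procedurally - days 1 to 15 go to the first bucket, the remainder to a bucket ending on the month's last day (the function name here is made up):

```python
import calendar
from datetime import date

# Label a date with its half-month bucket: 'yyyy(01-15/mm)' for the
# first half, 'yyyy(16-<last day>/mm)' for the second.
def half_month_label(d):
    last = calendar.monthrange(d.year, d.month)[1]
    if d.day <= 15:
        return f"{d.year}(01-15/{d.month:02d})"
    return f"{d.year}(16-{last}/{d.month:02d})"

labels = [half_month_label(date(2010, 2, d)) for d in (2, 21)]
```

The varying month end (28/30/31) is what forces the LAST_DAY() call in the SQL version above.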

Date Format

Nazmul Hoque, September 15, 2010 - 1:03 am UTC

Great, Tom,

Thanks a Lot


first last month

omsai, September 18, 2010 - 12:30 am UTC

Hi Tom,
thanks for your kind help always,
could u please help on getting understand dense_rank for monthly sale on first custome and last customer each month with value
Rgds
Omsai
Tom Kyte
September 20, 2010 - 1:36 pm UTC

"U" isn't available, "U" is dead as far as I know. Look it up, it is true.

http://en.wikipedia.org/wiki/U_of_Goryeo

is your keyboard broken? It is dropping letters that are important to form words - such as the Y and O keys.


Your question "does not compute" - I have no idea what the input data might look like (no create table, no inserts) and I have no idea what "help on getting understand dense_rank for monthly sale on first
customer and last customer each month with value " means really. what is a "first customer" (what makes a customer 'first' in your application). What does it mean to have "each month with value". This is all very unclear

variable date format

A reader, September 21, 2010 - 2:05 pm UTC

Hi Tom,

We get customer data from different customers and it is loaded into a table. One of the varchar2 columns holds a date (we don't have any control over this). Usually we get a particular date format, and our insert into <out_table> select <col1>,<col2>...to_date(<date_str>,'DD-MON-YYYY HH24:MI:SS') from <source_table> works fine. Of late we have been getting variable date formats in the source data and our insert statement keeps failing. Is there any way we can modify the INSERT statement to dynamically identify the date format and do a to_date accordingly?

Thanks,
Ravi

Tom Kyte
September 21, 2010 - 4:13 pm UTC

hah, tell you what, first you give us the deterministic procedural logic that is 100% failsafe - and then we can help you craft a function.

I'd love to see the code that could do this - because as soon as you give it to me, I'll run it with two different strings and you'll give me the WRONG date.


What date/time is this string:

01-02-03 04-05-06


whatever you answer you are wrong.
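The ambiguity is easy to demonstrate: the same string parses cleanly under several masks, each yielding a different date (the three masks below are just a sample of the possibilities):

```python
from datetime import datetime

# One string, three equally valid format masks, three different dates.
s = "01-02-03 04-05-06"
masks = ["%d-%m-%y %H-%M-%S",   # 1 Feb 2003
         "%m-%d-%y %H-%M-%S",   # 2 Jan 2003
         "%y-%m-%d %H-%M-%S"]   # 3 Feb 2001
parses = [datetime.strptime(s, m) for m in masks]
```

No deterministic procedure can pick among them, which is exactly Tom's point: the format has to travel with the data.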

variable date format

Ravi B, September 21, 2010 - 5:05 pm UTC

Tom,

I know what you mean. You are right, there is no way we can code this 100% accurately.

Thanks,
Ravi

Synchronised Data Extract

A reader, September 22, 2010 - 7:34 am UTC

Hi Tom,
In Oracle GL we need to get data from two tables at the same time. The tables are constantly being updated by batch processes. What is the best way to ensure we get data as of the same point in time? Can we put the database in quiesced mode and then extract the data? Any suggestion will be really helpful.
We are using 10gR2.
 
Regards
Tom Kyte
September 23, 2010 - 10:35 am UTC

you can

a) query them together in the same query, the data will always be read consistent in a single query.

b) use read only transactions - using that, all queries executed in a transaction will be AS OF the same point in time.

c) use serializable transactions - allows you to modify data and query data all AS OF the same point in time.

d) use flashback query - query the data "as of" a given point in time, a point in time you specify.


You don't need to do anything 'special', this is what Oracle does best. Do you know what read consistency is? You might want to glance at the concepts guide or if you have Expert Oracle Database Architecture - read up on it. It is the #1 feature of the database in my opinion.

Synchronised Data Set

A reader, September 23, 2010 - 4:51 pm UTC

Hi Tom,
Thanks.
Read consistency -- fully aware of this.....

The Oracle GL package has a trial balance report, which we want to leverage. On the other side we have queries against the Oracle GL base tables that create a materialized view, which is used to send data to downstream systems through an ETL process. We want to reconcile the GL trial balance report with the downstream systems, and wanted to avoid opening up the trial balance report query...
Regards,

SQL on 10 tables

Peter, September 25, 2010 - 1:53 pm UTC

Hi Tom,

We are on Oracle 9.2.0.8.0. I have a situation where I have to fetch data from 10 different tables. Four of these tables are partitioned by month; each has 15 monthly partitions and roughly 250 to 300 million rows, spread evenly across the partitions. The SQL I am using always retrieves data from the beginning of a partition to its end, i.e. from the start of a month to the end of a month, never from mid-month to mid-month. In the WHERE clause I constrain the partition date of one table between the beginning and end of the month, and I join that partition date to the other tables. My question is whether there will be any performance improvement if I also add the partition-date range predicates for the other big tables. Please see the code below.

What I am using now:
    AND T2.PartitioningDate = T1.PartitioningDate
    AND T3.PartitioningDate = T2.PartitioningDate
    AND T4.PartitioningDate = T3.PartitioningDate

    AND    T1.PartitioningDate    >= to_date('2009-07-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
    AND    T1.PartitioningDate    <  to_date('2009-09-01-00-00-00','YYYY-MM-DD-HH24-MI-SS')

What I want to know is whether this will improve performance:

    AND T2.PartitioningDate = T1.PartitioningDate
    AND T3.PartitioningDate = T2.PartitioningDate
    AND T4.PartitioningDate = T3.PartitioningDate

    AND    T1.PartitioningDate    >= to_date('2009-07-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
    AND    T1.PartitioningDate    <  to_date('2009-09-01-00-00-00','YYYY-MM-DD-HH24-MI-SS')

    AND T2.PartitioningDate >= to_date('2009-07-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
    AND T2.PartitioningDate < to_date('2009-09-01-00-00-00','YYYY-MM-DD-HH24-MI-SS')
    
    AND  T3.PartitioningDate >= to_date('2009-07-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
    AND  T3.PartitioningDate < to_date('2009-09-01-00-00-00','YYYY-MM-DD-HH24-MI-SS')
    
    AND T4.EndCreaDate >= to_date('2009-07-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
    AND T4.EndCreaDate <= to_date('2009-09-01-00-00-00','YYYY-MM-DD-HH24-MI-SS')



Also, can you please let me know which hints I should consider for this query? Right now I am using ORDERED, PARALLEL, FULL and USE_HASH on the big tables. These hints come from queries written back in 2008 by other developers, and I am looking to improve them. There is an index on the partitioning date of every table, but performance degraded with the INDEX hint.

Thanks.
Tom Kyte
September 27, 2010 - 11:40 am UTC

Also can you please let me know, what are the different hints I can consider while running this query.

easy - here is my list:

[this space left INTENTIONALLY blank]


Maybe the ONLY one to consider would be parallel, but that would be about it (if and only if the tables are not already parallel)

It is impossible to answer your question given the level of information here. You don't say how much data you are querying (rows means nothing, gigabytes means something). You don't say how long it takes and how long you THINK it should take (sometimes we have to reset expectations).

Via transitivity, the optimizer should know that the t1.partitioningdate >= and < predicates apply to the other tables' columns as well.


ops$tkyte%ORA10GR2> CREATE TABLE t1
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part_2009 VALUES LESS THAN (to_date('01-jan-2010','dd-mon-yyyy')) ,
 10    PARTITION part_2010 VALUES LESS THAN (to_date('01-jan-2011','dd-mon-yyyy')) ,
 11    PARTITION junk VALUES LESS THAN (MAXVALUE)
 12  )
 13  /

Table created.

ops$tkyte%ORA10GR2> CREATE TABLE t2
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part_2009 VALUES LESS THAN (to_date('01-jan-2010','dd-mon-yyyy')) ,
 10    PARTITION part_2010 VALUES LESS THAN (to_date('01-jan-2011','dd-mon-yyyy')) ,
 11    PARTITION junk VALUES LESS THAN (MAXVALUE)
 12  )
 13  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select * from t1, t2 where t1.dt >= to_date( '01-jan-2009' ) and t1.dt < to_date( '01-jan-2010' )
  2  and t1.dt = t2.dt;

Execution Plan
----------------------------------------------------------
Plan hash value: 2588213313

--------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes | Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |     1 |    78 |     4 |       |       |
|   1 |  NESTED LOOPS           |      |     1 |    78 |     4 |       |       |
|   2 |   PARTITION RANGE SINGLE|      |     1 |    39 |     2 |     1 |     1 |
|*  3 |    TABLE ACCESS FULL    | T1   |     1 |    39 |     2 |     1 |     1 |
|   4 |   PARTITION RANGE SINGLE|      |     1 |    39 |     2 |     1 |     1 |
|*  5 |    TABLE ACCESS FULL    | T2   |     1 |    39 |     2 |     1 |     1 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("T1"."DT">=TO_DATE(' 2009-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   5 - filter("T1"."DT"="T2"."DT" AND "T2"."DT">=TO_DATE(' 2009-01-01
              00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T2"."DT"<TO_DATE(' 2010-01-01
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - cpu costing is off (consider enabling it)



see how the pstart/pstop is 1:1 for T2 - it got the information from T1.
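If transitive predicate generation ever fails to kick in (an older release, or a more complex predicate shape), a defensive workaround is simply to repeat the range on the second table yourself. A sketch against the T1/T2 tables created above - the extra predicates cannot change the result (t1.dt = t2.dt already implies them), but they guarantee pruning on T2:

```sql
-- Same query as above, with the date range stated explicitly on t2 as well.
-- Redundant when transitivity works, harmless either way, and it makes the
-- Pstart/Pstop pruning on t2 independent of the optimizer deriving it.
select *
  from t1, t2
 where t1.dt >= to_date('01-jan-2009','dd-mon-yyyy')
   and t1.dt <  to_date('01-jan-2010','dd-mon-yyyy')
   and t2.dt >= to_date('01-jan-2009','dd-mon-yyyy')
   and t2.dt <  to_date('01-jan-2010','dd-mon-yyyy')
   and t1.dt = t2.dt;
```

Check the plan's Pstart/Pstop columns as above to confirm both tables prune to a single partition.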

SQL on 10 Tables

Peter, September 29, 2010 - 4:25 pm UTC

Hi Tom, 
Below are two SQLs and execution plans. The first one is with all the original hints and the second one is with the parallel hints only.
Both are run for a date range of 6 months (partitions). I have tried both queries; the one with all the hints along with the parallel hints came back in 14 hours. The one with only the parallel hints never came back and I killed it after 16 or so hours.

Size of the tables are as below. Tables 1,2,3,4 are partitioned.
T1 100GB
T2 60GB
T3 24GB
T4 2GB
T5 112KB
T6 112KB
T7 672KB
T8 1MB

All the conditions used in the SQL where clause return most of the data from the 4 big tables.

<code>

SELECT
    /*+
        ORDERED
        INDEX        (sub    XIE1T1)
        USE_HASH    (sd)
        USE_HASH    (sub)        
        USE_HASH    (mcm)
        USE_HASH    (st)
        USE_HASH    (ms)
        USE_HASH    (hol)
        parallel(sd 8)
        parallel(sub 8)
        parallel(mcm 8)
        parallel(st 8)
        parallel(ms 8)
        FULL        (sd)
        FULL        (mcm)
        FULL        (st)
        FULL        (ms)        
        USE_NL        (rou)
        USE_NL        (tvm)
        USE_NL        (sta)
    */
rou.rid         line_id--Line Id
,sta.stationid        station_id--Station Id
,rou.Description    line_name              --Line
,sta.Name          station_name                              --    Station
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '00', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_00
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '01', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_01
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '02', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_02
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '03', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_03
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '04', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_04
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '05', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_05
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '06', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_06
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '07', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_07
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '08', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_08
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '09', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_09
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '10', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_10
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '11', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_11
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '12', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_12
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '13', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_13
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '14', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_14
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '15', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_15
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '16', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_16
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '17', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_17
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '18', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_18
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '19', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_19
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '20', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_20
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '21', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_21
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '22', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_22
,sum(decode(hol.sdate, null, Decode(To_Char(sd.cdate,'HH24'), '23', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0),0))    wd_23
,sum(decode(hol.sdate, null, Decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0 )) wd_tot
-----------
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '00', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_00
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '01', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_01
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '02', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_02
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '03', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_03
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '04', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_04
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '05', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_05
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '06', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_06
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '07', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_07
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '08', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_08
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '09', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_09
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '10', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_10
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '11', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_11
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '12', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_12
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '13', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_13
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '14', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_14
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '15', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_15
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '16', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_16
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '17', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_17
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '18', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_18
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '19', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_19
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '20', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_20
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '21', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_21
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '22', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_22
,sum(decode(hol.stype, 2, Decode(To_Char(sd.cdate,'HH24'), '23', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesat_23
,sum(decode(hol.stype, 2, Decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0 )) wesat_tot
----------
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '00', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_00
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '01', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_01
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '02', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_02
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '03', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_03
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '04', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_04
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '05', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_05
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '06', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_06
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '07', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_07
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '08', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_08
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '09', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_09
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '10', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_10
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '11', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_11
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '12', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_12
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '13', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_13
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '14', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_14
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '15', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_15
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '16', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_16
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '17', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_17
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '18', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_18
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '19', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_19
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '20', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_20
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '21', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_21
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '22', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_22
,sum(decode(hol.stype, 3, Decode(To_Char(sd.cdate,'HH24'), '23', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0)) wesun_23
,sum(decode(hol.stype, 3, Decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1),0 )) wesun_tot
-----------
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '00', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))   hd_00
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '01', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_01
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '02', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_02
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '03', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))   hd_03
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '04', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))  hd_04
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '05', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_05
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '06', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_06
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '07', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_07
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '08', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_08
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '09', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_09
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '10', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_10
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '11', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_11
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '12', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_12
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '13', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_13
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '14', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_14
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '15', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_15
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '16', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_16
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '17', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_17
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '18', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_18
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '19', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_19
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '20', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_20
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '21', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_21
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '22', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_22
,sum(decode(hol.stype, 1,  Decode(To_Char(sd.cdate,'HH24'), '23', decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0), 0))    hd_23
,sum(decode(hol.stype, 1,  Decode(sd.mbooking||':'||sd.cancel,'1:1',1,'0:0',1,-1), 0))  hd_tot
FROM
T1               sd
,T1               sub
,T2   mcm
,T3       st
,T4                  ms
,T5                    rou
,T6                tvm
,T7               sta
,T8              hol
WHERE        1=1
--
--    Join conditions
--
AND    sub.dcid       (+)    = sd.dcid
AND    sub.did               (+)    = sd.did
AND    sub.umid             (+)    = sd.umid
AND    sub.T3No    (+)    = sd.T3No
AND    sub.T1EvSequNo    (+)    = sd.T1EvSequNo    +1
AND    sub.ccounter    (+)    = sd.ccounter
AND    sub.PartitioningDate    (+)    = sd.PartitioningDate
AND    mcm.dcid           = sd.dcid
AND    mcm.did                   = sd.did
AND    mcm.umid                 = sd.umid
AND    mcm.T3No        = sd.T3No
AND    mcm.SequenceNo                = Decode    (sub.T1EvSequNo
                                                    ,NULL    ,sd.T1EvSequNo
                                                ,sub.T1EvSequNo
                                                )
AND    mcm.ccounter        = sd.ccounter
AND    mcm.PartitioningDate        = sd.PartitioningDate
AND    mcm.TimeStamp                = sd.cdate
AND   st.did =  mcm.did         
AND   st.dcid =  mcm.dcid        
AND    st.umid = mcm.umid                
AND    st.T3No =  mcm.T3No     
AND    st.PartitioningDate = mcm.PartitioningDate     
AND    ms.dcid    =    st.dcid
AND    ms.did            =    st.did
AND    ms.umid        =    st.umid
AND    ms.Endcdate        =    st.PartitioningDate
AND    tvm.TVMID            =    ms.did
AND    tvm.dcid    =    ms.dcid
AND    sta.StationID     (+)    =    tvm.TVMTariffLocationID
AND    rou.rid         =    ms.RouteNo
AND    trunc(hol.sdate (+)) = trunc(sd.cdate)
AND    trunc(hol.sdate(+), 'hh24')  = trunc(sd.cdate, 'hh24')
--
--    Filter conditions
--
AND    mcm.mtype    IN     (7,20)
AND    sd.ano                >     100000
AND    sd.CorrectionFlag            =     0
AND    sd.RealStatisticArticle        =     0
AND    sd.tbokking                =     0
AND    sd.ano                <>     607900100
AND    sub.ano            (+)    =    607900100
AND    st.TestSaleFlag                =     0
AND    rou.rid in 
(
1400,    
1200,    
1000,    
1300,    
1100    
)  
-- Subway Stations Only 
--
--    Parameter conditions
--
AND    sd.cdate            >= to_date('2010-01-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
AND    sd.cdate            <= to_date('2010-07-01-02-59-59','YYYY-MM-DD-HH24-MI-SS')
AND    sd.PartitioningDate    >= to_date('2010-01-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')
AND    sd.PartitioningDate    <  to_date('2010-08-01-00-00-00','YYYY-MM-DD-HH24-MI-SS')
AND  hol.sdate(+) >= to_date('2010-01-01-03-00-00','YYYY-MM-DD-HH24-MI-SS')      --with out the below date condition it took 10 hrs for 1 month of data.
AND  hol.sdate(+) <= to_date('2010-07-01-02-59-59','YYYY-MM-DD-HH24-MI-SS')
GROUP BY rou.rid, sta.stationid, rou.Description, sta.Name;
















Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=329335 Card=1 Bytes=
          280)

   1    0   SORT* (GROUP BY) (Cost=329335 Card=1 Bytes=280)            :Q96008
   2    1     HASH JOIN* (OUTER) (Cost=329331 Card=1 Bytes=280)        :Q96007
   3    2       NESTED LOOPS* (OUTER) (Cost=329330 Card=1 Bytes=258)   :Q96007
   4    3         NESTED LOOPS* (Cost=329329 Card=1 Bytes=240)         :Q96007
   5    4           NESTED LOOPS* (Cost=329328 Card=1 Bytes=227)       :Q96007
   6    5             HASH JOIN* (Cost=329327 Card=1 Bytes=209)        :Q96007
   7    6               HASH JOIN* (Cost=328761 Card=1 Bytes=183)      :Q96006
   8    7                 HASH JOIN* (Cost=314937 Card=1 Bytes=153)    :Q96005
   9    8                   PARTITION RANGE* (ITERATOR)                :Q96005
  10    9                     TABLE ACCESS* (FULL) OF 'T2'             :Q96003
           (Cost=30194 Card=15914517 Bytes=747982299)

  11    8                   HASH JOIN* (OUTER) (Cost=274795 Card=36433 :Q96004
          791 Bytes=3861981846)

  12   11                     PARTITION RANGE* (ITERATOR)              :Q96004
  13   12                       TABLE ACCESS* (FULL) OF 'T1'  :Q96002
          (Cost=50038 Card=36433791 Bytes=2295328833)

  14   11                     PARTITION RANGE* (ITERATOR)              :Q96000
  15   14                       TABLE ACCESS (BY LOCAL INDEX ROWID) OF
           'T1' (Cost=219299 Card=2561328 Bytes=110137104)

  16   15                         INDEX (RANGE SCAN) OF 'XIE1T1' (NON-UNIQUE) (Cost=8177 Card=5538667)

  17    7                 PARTITION RANGE* (ITERATOR)                  :Q96006
  18   17                   TABLE ACCESS* (FULL) OF 'T3' :Q96006
           (Cost=13824 Card=170490821 Bytes=5114724630)

  19    6               PARTITION RANGE* (ITERATOR)                    :Q96007
  20   19                 TABLE ACCESS* (FULL) OF 'T4' (Cost=56 :Q96007
          6 Card=1671 Bytes=43446)

  21    5             TABLE ACCESS* (BY INDEX ROWID) OF 'T5' (Cost :Q96007
          =1 Card=1 Bytes=18)

  22   21               INDEX* (UNIQUE SCAN) OF 'XPKT5' (UNIQUE)   :Q96007
  23    4           INDEX* (RANGE SCAN) OF 'XIE2T6' (NON-UNIQUE) :Q96007
           (Cost=1 Card=1 Bytes=13)

  24    3         TABLE ACCESS* (BY INDEX ROWID) OF 'T7' (Cost :Q96007
          =1 Card=1 Bytes=18)

  25   24           INDEX* (UNIQUE SCAN) OF 'XPKT7' (UNIQUE)   :Q96007
  26    2       TABLE ACCESS* (BY INDEX ROWID) OF 'T8 :Q96001
          ' (Cost=2 Card=12 Bytes=264)

  27   26         INDEX (RANGE SCAN) OF 'PKT8'
           (UNIQUE) (Cost=2 Card=1)



   1 PARALLEL_TO_SERIAL            SELECT A1.C0,A1.C1,A1.C2,A1.C3,SUM(A1.C4),SU
                                   M(A1.C5),SUM(A1.C6),SUM(A1.C7),SUM(A

   2 PARALLEL_TO_PARALLEL          SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) */
                                    A1.C19 C0,A1.C26 C1,A1.C20 C2,A1.C2

   3 PARALLEL_COMBINED_WITH_PARENT
   4 PARALLEL_COMBINED_WITH_PARENT
   5 PARALLEL_COMBINED_WITH_PARENT
   6 PARALLEL_COMBINED_WITH_PARENT
   7 PARALLEL_TO_PARALLEL          SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) */
                                    A2.C0 C0,A2.C2 C1,A2.C1 C2,A2.C3 C3

   8 PARALLEL_TO_PARALLEL          SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) SW
                                   AP_JOIN_INPUTS(A2) */ A2.C0 C0,A2.C2

   9 PARALLEL_COMBINED_WITH_PARENT
  10 PARALLEL_TO_PARALLEL          SELECT /*+ NO_EXPAND ROWID(A1) */ A1."PARTIT
                                   IONINGDATE" C0,A1."dcid" C1

  11 PARALLEL_TO_PARALLEL          SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) */
                                    A1.C0 C0,A1.C1 C1,A1.C2 C2,A1.C3 C3

  12 PARALLEL_COMBINED_WITH_PARENT
  13 PARALLEL_TO_PARALLEL          SELECT /*+ NO_EXPAND ROWID(A1) */ A1."PARTIT
                                   IONINGDATE" C0,A1."dcid" C1

  14 PARALLEL_FROM_SERIAL
  17 PARALLEL_COMBINED_WITH_PARENT
  18 PARALLEL_COMBINED_WITH_PARENT
  19 PARALLEL_COMBINED_WITH_PARENT
  20 PARALLEL_COMBINED_WITH_PARENT
  21 PARALLEL_COMBINED_WITH_PARENT
  22 PARALLEL_COMBINED_WITH_PARENT
  23 PARALLEL_COMBINED_WITH_PARENT
  24 PARALLEL_COMBINED_WITH_PARENT
  25 PARALLEL_COMBINED_WITH_PARENT
  26 PARALLEL_FROM_SERIAL



Explain plan of the same query above with just the parallel hints below.
    /*+
        parallel(sd 8)
        parallel(sub 8)
        parallel(mcm 8)
        parallel(st 8)
        parallel(ms 8)
    */





Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=579 Card=1 Bytes=280
          )

   1    0   SORT* (GROUP BY) (Cost=579 Card=1 Bytes=280)               :Q102002
   2    1     NESTED LOOPS* (OUTER) (Cost=575 Card=1 Bytes=280)        :Q102001
   3    2       FILTER*                                                :Q102001
   4    3         NESTED LOOPS* (OUTER)                                :Q102001
   5    4           NESTED LOOPS* (Cost=573 Card=1 Bytes=215)          :Q102001
   6    5             NESTED LOOPS* (Cost=572 Card=1 Bytes=152)        :Q102001
   7    6               NESTED LOOPS* (Cost=571 Card=1 Bytes=105)      :Q102001
   8    7                 NESTED LOOPS* (OUTER) (Cost=570 Card=2 Bytes :Q102001
          =150)

   9    8                   NESTED LOOPS* (Cost=569 Card=2 Bytes=114)  :Q102001
  10    9                     HASH JOIN* (Cost=567 Card=43 Bytes=1892) :Q102001
  11   10                       INLIST ITERATOR*                       :Q102000
  12   11                         TABLE ACCESS (BY INDEX ROWID) OF 'RO
          UTES' (Cost=1 Card=5 Bytes=90)

  13   12                           INDEX (RANGE SCAN) OF 'XPKT5'
          (UNIQUE) (Cost=1 Card=5)

  14   10                       PARTITION RANGE* (ITERATOR)            :Q102001
  15   14                         TABLE ACCESS* (FULL) OF 'T4'  :Q102001
          (Cost=566 Card=1671 Bytes=43446)

  16    9                     INDEX* (RANGE SCAN) OF 'XIE2T6' (N :Q102001
          ON-UNIQUE) (Cost=1 Card=1 Bytes=13)

  17    8                   TABLE ACCESS* (BY INDEX ROWID) OF 'TVMSTAT :Q102001
          ION' (Cost=1 Card=1 Bytes=18)

  18   17                     INDEX* (UNIQUE SCAN) OF 'XPKT7'  :Q102001
          (UNIQUE)

  19    7                 PARTITION RANGE* (ITERATOR)                  :Q102001
  20   19                   TABLE ACCESS* (BY LOCAL INDEX ROWID) OF 'S :Q102001
          ALESTRANSACTION' (Cost=1 Card=1 Bytes=30)

  21   20                     INDEX* (RANGE SCAN) OF 'XIE1T3' :Q102001
           (NON-UNIQUE) (Cost=1 Card=5)

  22    6               PARTITION RANGE* (ITERATOR)                    :Q102001
  23   22                 INLIST ITERATOR*                             :Q102001
  24   23                   TABLE ACCESS* (BY LOCAL INDEX ROWID) OF 'M :Q102001
          ISCCARDMOVEMENT' (Cost=1 Card=1 Bytes=47)

  25   24                     INDEX* (RANGE SCAN) OF 'XIE4T2' :Q102001
           (NON-UNIQUE) (Cost=1 Card=1)

  26    5             TABLE ACCESS* (BY GLOBAL INDEX ROWID) OF 'T1' :Q102001
          (Cost=1 Card=1 Bytes=63)

  27   26               INDEX* (RANGE SCAN) OF 'XPKT1' (UNIQU :Q102001
          E) (Cost=1 Card=1)

  28    4           TABLE ACCESS* (BY GLOBAL INDEX ROWID) OF 'T1' :Q102001
           (Cost=1 Card=1 Bytes=43)

  29   28             INDEX* (UNIQUE SCAN) OF 'XPKT1' (UNIQUE :Q102001
          )

  30    2       TABLE ACCESS* (BY INDEX ROWID) OF 'T8' :Q102001 (Cost=1 Card=1 Bytes=22)

  31   30         INDEX* (RANGE SCAN) OF 'PKT8 :Q102001
          ' (UNIQUE) (Cost=1 Card=1)



   1 PARALLEL_TO_SERIAL            SELECT A1.C0,A1.C1,A1.C2,A1.C3,SUM(A1.C4),SU
                                   M(A1.C5),SUM(A1.C6),SUM(A1.C7),SUM(A

   2 PARALLEL_TO_PARALLEL          SELECT /*+ ORDERED NO_EXPAND USE_NL(A2) INDE
                                   X(A2 "PKT8")

   3 PARALLEL_COMBINED_WITH_CHILD
   4 PARALLEL_COMBINED_WITH_PARENT
   5 PARALLEL_COMBINED_WITH_PARENT
   6 PARALLEL_COMBINED_WITH_PARENT
   7 PARALLEL_COMBINED_WITH_PARENT
   8 PARALLEL_COMBINED_WITH_PARENT
   9 PARALLEL_COMBINED_WITH_PARENT
  10 PARALLEL_COMBINED_WITH_PARENT
  11 PARALLEL_FROM_SERIAL
  14 PARALLEL_COMBINED_WITH_PARENT
  15 PARALLEL_COMBINED_WITH_PARENT
  16 PARALLEL_COMBINED_WITH_PARENT
  17 PARALLEL_COMBINED_WITH_PARENT
  18 PARALLEL_COMBINED_WITH_PARENT
  19 PARALLEL_COMBINED_WITH_PARENT
  20 PARALLEL_COMBINED_WITH_PARENT
  21 PARALLEL_COMBINED_WITH_PARENT
  22 PARALLEL_COMBINED_WITH_PARENT
  23 PARALLEL_COMBINED_WITH_PARENT
  24 PARALLEL_COMBINED_WITH_PARENT
  25 PARALLEL_COMBINED_WITH_PARENT
  26 PARALLEL_COMBINED_WITH_PARENT
  27 PARALLEL_COMBINED_WITH_PARENT
  28 PARALLEL_COMBINED_WITH_PARENT
  29 PARALLEL_COMBINED_WITH_PARENT
  30 PARALLEL_COMBINED_WITH_PARENT
  31 PARALLEL_COMBINED_WITH_PARENT


Please suggest any possible ways to improve this.</code>
Tom Kyte
September 29, 2010 - 4:32 pm UTC

one cannot really do that.

first - one needs to truly know your schema. And I do not.
second - one needs to know your data. And I do not.
third - throwing a 10 table query at someone is something I don't do.
fourth, that query is full of ugly ugly stuff.

AND mcm.SequenceNo = Decode (sub.T1EvSequNo
,NULL ,sd.T1EvSequNo
,sub.T1EvSequNo
)

ouch, that hurts. A conditional join :(

here is what you should do first:

ensure that the estimated cardinalities in the plan without the hints are near to what you expect them to be. If they are not - we need to address that. So, you do that analysis and if something doesn't look right - tell us about it and we'll start from there.


I've a feeling there must be a big mismatch given the cost differentials above - if the more optimal plan costs so so so much more than the default plan, something regarding the estimated cardinality values must be WAY off.
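On 10g and later, one way to do the cardinality check described above is to run the statement with rowsource statistics collected and compare the estimated rows (E-Rows) with the actual rows (A-Rows) at each plan step. A sketch using a simplified query against the T1 table from the earlier demo (substitute the real statement):

```sql
-- 1) execute the query once with the gather_plan_statistics hint,
--    which collects actual row counts per plan step
select /*+ gather_plan_statistics */ count(*)
  from t1
 where dt >= to_date('01-jan-2009','dd-mon-yyyy')
   and dt <  to_date('01-jan-2010','dd-mon-yyyy');

-- 2) display the last cursor's plan with E-Rows and A-Rows side by side
select *
  from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Any step where A-Rows is orders of magnitude away from E-Rows is where the optimizer is being misled - that is where to focus the statistics work.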


SQL on 10 Tables

Peter, September 30, 2010 - 3:00 pm UTC

Hi Tom,

I am working on those details; meanwhile, is there a way to eliminate the conditional join that you mentioned? As mentioned in my previous question, I am outer joining the T1 table to itself and then joining one of the T1s to T2 (the condition in the decode). Can I eliminate this with any of Oracle's analytic functions?

three tables T1 sd, T1 sub, T2 mcm
AND sub.dcid             (+) = sd.dcid
AND sub.did              (+) = sd.did
AND sub.umid             (+) = sd.umid
AND sub.T3No             (+) = sd.T3No
<b>AND sub.T1EvSequNo    (+) = sd.T1EvSequNo+1</b>
AND sub.ccounter         (+) = sd.ccounter
AND sub.PartitioningDate (+) = sd.PartitioningDate

AND <u>sd.</u>ano           <> 607900100
AND <u>sub.</u>ano       (+) = 607900100

AND sd.ano                   > 100000
AND sd.CorrectionFlag        = 0
AND sd.RealStatisticArticle  = 0
AND sd.tbokking              = 0

AND mcm.SequenceNo = Decode(sub.T1EvSequNo,NULL ,sd.T1EvSequNo, sub.T1EvSequNo)


Thanks.
Tom Kyte
September 30, 2010 - 3:29 pm UTC

oh - wait - I DON'T WANT THEM - I'm not going to tune a 10 table query - that is not a good use of "me" and my time...


... mean while, is there a way to eliminate the conditional join what you have mentioned. ...

only if you don't really need it - which means going back to the data model and asking yourself "why, why did we do that". In my experience - it means you have a messy data model.


... I am outer joining the T1 table to itself and then joining one of the T1's to T2(condition in decode). Can I eliminate this with any of the analytical functions by Oracle. ...

quite likely - but only if you

a) understand what your query needs to retrieve
b) understand how the analytics actually work.

Why are you outer joining T1 to itself - give us a small example - bite sized - that mimics your use and we can tell you if you can use an analytic or not.

SQL on 10 Tables

Peter, September 30, 2010 - 5:12 pm UTC

Hi Tom,

Below are the sample table scripts and the output I am looking for.


create table t1 as
select * from
(
select 501 col1, 2249 col2, 3933 col3, 139348 col4, 1012844 col5, 0 col6, 607100100 col7 from dual union
select 501, 2249, 3933, 139348, 1012845, 0, 607900100 from dual union
select 502, 2250, 3934, 139349, 1012846, 0, 607100100 from dual union
select 502, 2250, 3934, 139349, 1012847, 0, 607100101 from dual
);

create table t2
as select col1, col2, col3, col4, col5, col6, 0+rownum col7 from t1;

(col1, col2, col3, col4, col5, col6) is a composite primary key on both t1 and t2.

a.COL1  a.COL2  a.COL3  a.COL4    a.COL5    b.COL5   a.COL6  a.COL7     b.COL7     c.COL7
501     2249    3933    139348    1012844   1012845  0       607100100  607900100  2
502     2250    3934    139349    1012846   NULL     0       607100100  NULL       3
502     2250    3934    139349    1012847   NULL     0       607100101  NULL       4

The key here: in T1, if the next value of col5 (col5+1 within the same composite key) has a col7 value of 607900100, I have to join that next record's col5 to T2; otherwise I join the current record's col5. That is what the query below does. Is there a better way to do this, so that performance can be improved by eliminating both the outer join and the DECODE in the where clause?

select a.col1, a.col2, a.col3, a.col4, a.col5, b.col5, a.col6, a.col7, b.col7, c.col7
from t1 a, t1 b, t2 c
where a.col1 = b.col1(+)
and a.col2 = b.col2(+)
and a.col3 = b.col3(+)
and a.col4 = b.col4(+)
and a.col5+1 = b.col5(+)
and a.col6 = b.col6(+)

and a.col7 <> 607900100
and b.col7(+) = 607900100

and a.col1 = c.col1
and a.col2 = c.col2
and a.col3 = c.col3
and a.col4 = c.col4
and decode(b.col5, null, a.col5, b.col5) = c.col5
and a.col6 = c.col6

As I mentioned in my previous post, I am working on getting more details to post so that I can get some help on tuning the whole query itself.

Thanks.
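As an aside, the conditional self-join above can typically be collapsed with the LEAD analytic. A sketch only, untested against the real data: LEAD() looks at the "next" T1 row in place of the T1-to-T1 outer join, and the CASE expressions reproduce the DECODE logic that decides which col5 joins to T2.

```sql
select a.col1, a.col2, a.col3, a.col4, a.col5,
       -- show the neighbour's col5/col7 only when it is the "+1 / 607900100" row,
       -- mirroring the outer-join columns b.col5 and b.col7 in the original
       case when a.next_col5 = a.col5 + 1 and a.next_col7 = 607900100
            then a.next_col5 end as b_col5,
       a.col6, a.col7,
       case when a.next_col5 = a.col5 + 1 and a.next_col7 = 607900100
            then a.next_col7 end as b_col7,
       c.col7 as c_col7
from  (select t1.*,
              lead(col5) over (partition by col1, col2, col3, col4, col6
                               order by col5) next_col5,
              lead(col7) over (partition by col1, col2, col3, col4, col6
                               order by col5) next_col7
       from t1) a,
      t2 c
where a.col7 <> 607900100
and   a.col1 = c.col1
and   a.col2 = c.col2
and   a.col3 = c.col3
and   a.col4 = c.col4
and   a.col6 = c.col6
-- join T2 to the neighbour's col5 when the condition holds, else our own
and   c.col5 = case when a.next_col5 = a.col5 + 1 and a.next_col7 = 607900100
                    then a.next_col5 else a.col5 end;
```

This reads T1 once instead of twice, which is the usual payoff of replacing a self-join with an analytic.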


SQL on 10 Tables

A reader, October 01, 2010 - 3:45 pm UTC

Hi Tom,

Below are the explain plans before and after hints and, as you suspected, there is a huge difference in the cardinalities of the two plans. The plan with all the hints returns cardinalities close to the exact rows returned by that particular step.

Statistics are gathered on the tables on a bimonthly basis.

What could be the reason for this?
What changes should be made to the tables so that this can be improved?

I am new to this database environment and I am trying to provide as much information as possible.

Please advise.

EXPLAIN PLAN WITH HINTS.

 /*+
        ORDERED
        INDEX        (sub    XIE1T1)
        USE_HASH    (sd)
        USE_HASH    (sub)        
        USE_HASH    (mcm)
        USE_HASH    (st)
        USE_HASH    (ms)
        USE_HASH    (hol)
        parallel(sd 8)
        parallel(sub 8)
        parallel(mcm 8)
        parallel(st 8)
        parallel(ms 8)
        FULL        (sd)
        FULL        (mcm)
        FULL        (st)
        FULL        (ms)        
        USE_NL        (rou)
        USE_NL        (tvm)
        USE_NL        (sta)
    */



PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
| Id  | Operation                                   |  Name| Rows  | Bytes |TempSpc| Cost  | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |      |     1 |   269 |       |   329K|       |       |
|   1 |  SORT GROUP BY                              |      |     1 |   269 |       |   329K|       |       |
|*  2 |   HASH JOIN OUTER                           |      |     1 |   269 |       |   329K|       |       |
|   3 |    NESTED LOOPS OUTER                       |      |     1 |   258 |       |   329K|       |       |
|   4 |     NESTED LOOPS                            |      |     1 |   240 |       |   329K|       |       |
|   5 |      NESTED LOOPS                           |      |     1 |   227 |       |   329K|       |       |
|*  6 |       HASH JOIN                             |      |     1 |   209 |       |   329K|       |       |
|*  7 |        HASH JOIN                            |      |     1 |   183 |       |   328K|       |       |
|*  8 |         HASH JOIN                           |      |     1 |   153 |   111M|   314K|       |       |
|   9 |          PARTITION RANGE ITERATOR           |      |       |       |       |       |     8 |    14 |
|* 10 |           TABLE ACCESS FULL                 | MCM  |    15M|   713M|       | 30194 |     8 |    14 |
|* 11 |          HASH JOIN OUTER                    |      |    36M|  3683M|   325M|   274K|       |       |
|  12 |           PARTITION RANGE ITERATOR          |      |       |       |       |       |     8 |    14 |
|* 13 |            TABLE ACCESS FULL                | SD   |    36M|  2188M|       | 50038 |     8 |    14 |
|  14 |           PARTITION RANGE ITERATOR          |      |       |       |       |       |     8 |    14 |
|* 15 |            TABLE ACCESS BY LOCAL INDEX ROWID| SD   |  2561K|   105M|       |   219K|     8 |    14
|* 16 |             INDEX RANGE SCAN                | XIESD|  5538K|       |       |  8177 |     8 |    14 |
|  17 |         PARTITION RANGE ITERATOR            |      |       |       |       |       |     8 |    14 |
|* 18 |          TABLE ACCESS FULL                  | ST   |   170M|  4877M|       | 13824 |     8 |    14 |
|  19 |        PARTITION RANGE ITERATOR             |      |      |       |        |       |     8 |    14 |
|* 20 |         TABLE ACCESS FULL                   | MS   |  1671 | 43446 |       |   566 |     8 |    14 |
|  21 |       TABLE ACCESS BY INDEX ROWID           | ROU  |     1 |    18 |       |     1 |       |       |
|* 22 |        INDEX UNIQUE SCAN                    |XPKROU|     1 |       |       |       |       |       |
|* 23 |      INDEX RANGE SCAN                       |XIETVM|     1 |    13 |       |     1 |       |       |
|  24 |     TABLE ACCESS BY INDEX ROWID             | STA  |     1 |    18 |       |     1 |       |       |
|* 25 |      INDEX UNIQUE SCAN                      |XPKSTA|     1 |       |       |       |       |       |
|* 26 |    TABLE ACCESS FULL                        | HOL  |  1352 | 14872 |       |    11 |       |       |
----------------------------------------------------------------------------------------------------

EXPLAIN PLAN WITH ONLY PARALLEL HINTS.

       /*+
        parallel(sd 8)
        parallel(sub 8)
        parallel(mcm 8)
        parallel(st 8)
        parallel(ms 8)
       */

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
| Id  | Operation                                 |  Name  | Rows  | Bytes | Cost  | Pstart| Pstop |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |        |     1 |   269 |   579 |       |       |
|   1 |  SORT GROUP BY                            |        |     1 |   269 |   579 |       |       |
|   2 |   NESTED LOOPS OUTER                      |        |     1 |   269 |   575 |       |       |
|*  3 |    FILTER                                 |        |       |       |       |       |       |
|   4 |     NESTED LOOPS OUTER                    |        |       |       |       |       |       |
|   5 |      NESTED LOOPS                         |        |     1 |   215 |   573 |       |       |
|   6 |       NESTED LOOPS                        |        |     1 |   152 |   572 |       |       |
|   7 |        NESTED LOOPS                       |        |     1 |   105 |   571 |       |       |
|   8 |         NESTED LOOPS OUTER                |        |     2 |   150 |   570 |       |       |
|   9 |          NESTED LOOPS                     |        |     2 |   114 |   569 |       |       |
|* 10 |           HASH JOIN                       |        |    43 |  1892 |   567 |       |       |
|  11 |            INLIST ITERATOR                |        |       |       |       |       |       |
|  12 |             TABLE ACCESS BY INDEX ROWID   | ROU    |     5 |    90 |     1 |       |       |
|* 13 |              INDEX RANGE SCAN             | XPKROU |     5 |       |     1 |       |       |
|  14 |            PARTITION RANGE ITERATOR       |        |       |       |       |     8 |    14 |
|* 15 |             TABLE ACCESS FULL             | MS     |  1671 | 43446 |   566 |     8 |    14 |
|* 16 |           INDEX RANGE SCAN                |XIE2TVM |     1 |    13 |     1 |       |       |
|  17 |          TABLE ACCESS BY INDEX ROWID      | STA    |     1 |    18 |     1 |       |       |
|* 18 |           INDEX UNIQUE SCAN               | XPKSTA |     1 |       |       |       |       |
|  19 |         PARTITION RANGE ITERATOR          |        |       |       |       |   KEY |   KEY |
|* 20 |          TABLE ACCESS BY LOCAL INDEX ROWID| ST     |     1 |    30 |     1 |   KEY |   KEY |
|* 21 |           INDEX RANGE SCAN                | XIE1ST |     5 |       |     1 |   KEY |   KEY |
|  22 |        PARTITION RANGE ITERATOR           |        |       |       |       |   KEY |   KEY |
|  23 |         INLIST ITERATOR                   |        |       |       |       |       |       |
|* 24 |          TABLE ACCESS BY LOCAL INDEX ROWID| MCM    |     1 |    47 |     1 |   KEY |   KEY |
|* 25 |           INDEX RANGE SCAN                | XIE4MCM|     1 |       |     1 |   KEY |   KEY |
|* 26 |       TABLE ACCESS BY GLOBAL INDEX ROWID  | SD     |     1 |    63 |     1 | ROWID | ROW L 
|* 27 |        INDEX RANGE SCAN                   | XPKSD  |     1 |       |     1 |       |       |
|* 28 |      TABLE ACCESS BY GLOBAL INDEX ROWID   | SD     |     1 |    43 |     1 | ROWID | ROW L 
|* 29 |       INDEX UNIQUE SCAN                   | XPKSD  |     1 |       |       |       |       |
|  30 |    TABLE ACCESS BY INDEX ROWID            | HOL    |     1 |    11 |     1 |       |       |
|* 31 |     INDEX RANGE SCAN                      | PK_HOL |     1 |       |     1 |       |       |
----------------------------------------------------------------------------------------------------


Thanks.
Tom Kyte
October 04, 2010 - 1:36 am UTC

Are statistics up to date?

If you use dynamic sampling set to level 3 or more, does the plan change (without the non-parallel hints)? Alter your session and explain the query with dynamic sampling at 3 or more.
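One way to run that experiment, as a sketch: optimizer_dynamic_sampling is a session-settable parameter, the level is your choice, and the small t1 from earlier in the thread stands in for the real query.

```sql
-- Raise dynamic sampling for this session only, then re-explain.
alter session set optimizer_dynamic_sampling = 3;

explain plan for
select count(*) from t1;   -- substitute the real query (parallel hints only) here

-- Compare the Rows column of this plan with the hinted one.
select * from table(dbms_xplan.display);
```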

lost question

A reader, October 04, 2010 - 6:11 am UTC

Hi Tom,
Could you please help me on question asked above:


The Oracle GL package has a trial balance report, which we want to leverage. On the other side, we have queries against the Oracle GL base tables to create a Materialized View that is used to send data to downstream systems through an ETL process. We want to reconcile the GL trial balance report with the downstream systems, and wanted to avoid opening up the trial balance report query...
Regards,
Tom Kyte
October 04, 2010 - 7:44 am UTC

I do not do apps, I haven't touched them in years.


I gave you a list of technical approaches - serializable, flashback query, whatever. You are free to try them - but if you are looking for me to write the process, I cannot - I don't do GL.

I don't know how to "reconcile", I don't know what is involved, I don't know the requirements.


SQL on 10 Tables

Peter, October 04, 2010 - 9:01 am UTC

Hi Tom,

Thanks for the response. Below are the dates when each table was last analyzed.

sd 09/23/2010
mcm 09/20/2010
st 09/26/2010
ms 09/25/2010
rou 09/28/2010
tvm 09/28/2010
sta 09/28/2010
hol 09/29/2010

I have altered the session with levels 6 and 8, and both gave the same explain plan below without any hints. Yes, the plan changed, but cardinality-wise even this plan is not accurate; there is a huge difference between reality and the CBO's prediction.

Below is the explain plan output with the level set to 6:

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------
| Id  | Operation                                  |  Name                         | Rows  | Bytes | Cost  | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                               |     1 |   269 |  4544 |       |       |
|   1 |  SORT GROUP BY                             |                               |     1 |   269 |  4544 |       |       |
|   2 |   NESTED LOOPS OUTER                       |                               |     1 |   269 |  4539 |       |       |
|*  3 |    FILTER                                  |                               |       |       |       |       |       |
|   4 |     NESTED LOOPS OUTER                     |                               |       |       |       |       |       |
|   5 |      NESTED LOOPS                          |                               |     1 |   215 |  4537 |       |       |
|   6 |       NESTED LOOPS                         |                               |     1 |   152 |  4536 |       |       |
|   7 |        NESTED LOOPS OUTER                  |                               |     1 |   105 |  4535 |       |       |
|   8 |         NESTED LOOPS                       |                               |     1 |    87 |  4534 |       |       |
|*  9 |          HASH JOIN                         |                               |     7 |   399 |  4531 |       |       |
|* 10 |           HASH JOIN                        |                               |   164 |  7216 |  4527 |       |       |
|  11 |            INLIST ITERATOR                 |                               |       |       |       |       |       |
|  12 |             TABLE ACCESS BY INDEX ROWID    | ROU                           |     5 |    90 |     1 |       |       |
|* 13 |              INDEX RANGE SCAN              | XPKROU                        |     5 |       |     1 |       |       |
|  14 |            PARTITION RANGE ITERATOR        |                               |       |       |       |     7 |    13 |
|* 15 |             TABLE ACCESS FULL              | MS                            |  6369 |   161K|  4525 |     7 |    13 |
|  16 |           INDEX FULL SCAN                  | XIE1TVM                       |  3744 | 48672 |    10 |       |       |
|  17 |          PARTITION RANGE ITERATOR          |                               |       |       |       |   KEY |   KEY |
|* 18 |           TABLE ACCESS BY LOCAL INDEX ROWID| ST                            |     1 |    30 |     1 |   KEY |   KEY |
|* 19 |            INDEX RANGE SCAN                | XIE1ST                        |     5 |       |     1 |   KEY |   KEY |
|  20 |         TABLE ACCESS BY INDEX ROWID        | STA                           |     1 |    18 |     1 |       |       |
|* 21 |          INDEX UNIQUE SCAN                 | XPKSTA                        |     1 |       |       |       |       |
|  22 |        PARTITION RANGE ITERATOR            |                               |       |       |       |   KEY |   KEY |
|  23 |         INLIST ITERATOR                    |                               |       |       |       |       |       |
|* 24 |          TABLE ACCESS BY LOCAL INDEX ROWID | MCM                           |     1 |    47 |     1 |   KEY |   KEY |
|* 25 |           INDEX RANGE SCAN                 | XIE4MCM                       |     1 |       |     1 |   KEY |   KEY |
|* 26 |       TABLE ACCESS BY GLOBAL INDEX ROWID   | SD                            |     1 |    63 |     1 | ROWID | ROW L 
|* 27 |        INDEX RANGE SCAN                    | XPKSD                         |     1 |       |     1 |       |       |
|* 28 |      TABLE ACCESS BY GLOBAL INDEX ROWID    | SD                            |     1 |    43 |     1 | ROWID | ROW L 
|* 29 |       INDEX UNIQUE SCAN                    | XPKSD                         |     1 |       |       |       |       |
|  30 |    TABLE ACCESS BY INDEX ROWID             | HOL                           |     1 |    11 |     1 |       |       |
|* 31 |     INDEX RANGE SCAN                       | PK_HOL                        |     1 |       |     1 |       |       |
----------------------------------------------------------------------------------------------------


I came to know from my DBA that statistics are gathered on the big tables using a 3% sample of the data. Do you think this might be the reason for the difference between the explain plans with and without hints? My DBA says we cannot use a higher sample percentage because of time constraints.

Please suggest.

to insert datafile in database

ankur, October 07, 2010 - 5:04 am UTC

hi tom
Suppose we have a datafile on our desktop which has the same format as Oracle uses, and we want to use this file in Oracle. How can we plug this datafile into the database? Is this possible?
Tom Kyte
October 07, 2010 - 7:26 am UTC

a datafile from a random database - by itself - is useless. You cannot use it anywhere other than the database it came from.


If you TRANSPORT all of the datafiles for a given tablespace (or set of tablespaces) using export or datapump export - you can plug that tablespace (and its datafiles) into another database.

but you have to transport it - that will give you the datafiles AND a file of metadata that describes the contents of the datafile so it can be plugged into another database.


http://www.oracle.com/pls/db112/search?remark=quick_search&word=transportable+tablespace
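A minimal sketch of the transport flow described above; the tablespace, directory, and file names here are made up for illustration.

```sql
-- On the source database: make the tablespace read only, then export
-- its metadata with Data Pump (the $ lines run at the OS shell):
alter tablespace my_ts read only;
--   $ expdp system directory=dp_dir transport_tablespaces=my_ts dumpfile=my_ts.dmp
--
-- Copy my_ts.dmp AND the tablespace's datafiles to the target host,
-- then plug them into the target database:
--   $ impdp system directory=dp_dir dumpfile=my_ts.dmp \
--       transport_datafiles='/u01/oradata/trgt/my_ts01.dbf'
alter tablespace my_ts read write;
```

The dump file is the "file of metadata" Tom mentions; without it the copied datafiles cannot be plugged in.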

How to sum data from all 3 unions

anan, October 07, 2010 - 12:12 pm UTC

Tom:
I am running the following query on 11i database.
My result should be to sum volume from all 3 unions data.

The following query sum by unions.
Help me on how to consolidate volume(sum) from all 3 union data.

SELECT as_of_date,
entity_au_id,
cas_au_status,
cas_prod,
volume
FROM (SELECT cvr.as_of_date,
cvr.entity_au_id,
NVL(cust.status, NULL) AS cas_au_status,
cas.product_code AS cas_prod,
ROUND(SUM(cvr.total_cost)) AS volume
FROM cvr_ptl_cost_it cvr,
cvr_cas_prod cas,
cvr_cust_data cust,
cvr_as_of_date aod
WHERE cvr.as_of_date = aod.as_of_date
AND cvr.bill_flag = 'B'
AND cvr.cas_product_id = cas.product_code
AND cvr.fpa_account_id = cas.fpa_line
AND ROUND(cvr.total_cost) <> 0
AND (cvr.pass_flag = 1 AND cvr.serv_prov_flag IS NULL)
AND cas.status = 'A'
AND cvr.as_of_date = cust.as_of_date(+)
AND SUBSTR(cvr.entity_au_id, -7) = cust.au_id(+)
and cvr.entity_au_id = 'AU000000000848'
and cas.product_code = '417W00015'
GROUP BY cvr.as_of_date,
cvr.entity_au_id,
cust.status,
cas.product_code,
cas.fpa_line,
cas.provider_id,
cas_version,
cas.cas_description)
UNION
(SELECT cvr.as_of_date,
cvr.entity_au_id,
NVL(cust.status, NULL) AS cas_au_status,
cas.product_code AS cas_prod,
ROUND(SUM(cvr.total_cost)) AS volume
FROM cvr_ptl_cost_it cvr,
cvr_cas_prod cas,
cvr_cust_data cust,
cvr_as_of_date aod
WHERE cvr.as_of_date = aod.as_of_date
AND cvr.bill_flag = 'B'
AND cvr.cas_product_id = cas.product_code
AND cvr.fpa_account_id = cas.fpa_line
AND ROUND(cvr.total_cost) <> 0
AND (cvr.pass_flag = 2 AND debit_credit != 'C')
AND cas.status = 'A'
AND cvr.as_of_date = cust.as_of_date(+)
AND SUBSTR(cvr.entity_au_id, -7) = cust.au_id(+)
and cvr.entity_au_id = 'AU000000000848'
and cas.product_code = '417W00015'
GROUP BY cvr.as_of_date,
cvr.entity_au_id,
cust.status,
cas.product_code,
cas.fpa_line,
cas.provider_id,
cas_version,
cas.cas_description)
UNION
(SELECT cvr.as_of_date,
cvr.entity_au_id,
NVL(cust.status, NULL) AS cas_au_status,
cas.product_code AS cas_prod,
ROUND(SUM(cvr.total_cost)) AS volume
FROM cvr_ptl_cost_it cvr,
cvr_cas_prod cas,
cvr_cust_data cust,
cvr_as_of_date aod
WHERE cvr.as_of_date = aod.as_of_date
AND cvr.bill_flag = 'B'
AND cvr.cas_product_id = cas.product_code
AND cvr.fpa_account_id = cas.fpa_line
AND ROUND(cvr.total_cost) <> 0
AND (cvr.pass_flag = 3 AND debit_credit != 'C')
AND cas.status = 'A'
AND cvr.as_of_date = cust.as_of_date(+)
AND SUBSTR(cvr.entity_au_id, -7) = cust.au_id(+)
and cvr.entity_au_id = 'AU000000000848'
and cas.product_code = '417W00015'
GROUP BY cvr.as_of_date,
cvr.entity_au_id,
cust.status,
cas.product_code,
cas.fpa_line,
cas.provider_id,
cas_version,
cas.cas_description);

no data from union 1,
volume 6066 is from union 2
volume 124272 is from union 2
(I want to sum volume from all 3 union data)
Result from the query:
------------------------------------------------------
AS_OF_DATE ENTITY_AU_ID CAS_AU_STATUS CAS_PROD VOLUME
10/31/2010 AU000000000848 N 417W00015 6066
10/31/2010 AU000000000848 N 417W00015 124272
-------------------------------------------------------
Expected result: (6066+124272)
------------------------------------------------------
AS_OF_DATE ENTITY_AU_ID CAS_AU_STATUS CAS_PROD VOLUME
10/31/2010 AU000000000848 N 417W00015 130338
------------------------------------------------------

Thanks,
Anan


Tom Kyte
October 11, 2010 - 10:58 am UTC

select as_of_date, entity_au_id, ..., sum( volume )
from (YOUR_QUERY_HERE)
group by as_of_date, entity_au_id, ....
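An alternative sketch, not from the answer above and untested: the three UNION branches differ only in their pass_flag predicate, so the whole statement can be collapsed into one query whose single GROUP BY yields the consolidated sum directly. Two assumptions: the extra grouping columns (fpa_line, provider_id, cas_version, cas_description) can be dropped since they are never selected, and NVL(cust.status, NULL) is a no-op so cust.status is used as-is. Note also that SUM happens before ROUND here, which can differ slightly from adding the per-branch rounded sums.

```sql
SELECT cvr.as_of_date,
       cvr.entity_au_id,
       cust.status                AS cas_au_status,
       cas.product_code           AS cas_prod,
       ROUND(SUM(cvr.total_cost)) AS volume
FROM   cvr_ptl_cost_it cvr,
       cvr_cas_prod    cas,
       cvr_cust_data   cust,
       cvr_as_of_date  aod
WHERE  cvr.as_of_date     = aod.as_of_date
AND    cvr.bill_flag      = 'B'
AND    cvr.cas_product_id = cas.product_code
AND    cvr.fpa_account_id = cas.fpa_line
AND    ROUND(cvr.total_cost) <> 0
       -- the three branch predicates, OR'ed (pass_flag values are disjoint)
AND    (   (cvr.pass_flag = 1 AND cvr.serv_prov_flag IS NULL)
        OR (cvr.pass_flag IN (2, 3) AND debit_credit != 'C') )
AND    cas.status         = 'A'
AND    cvr.as_of_date     = cust.as_of_date(+)
AND    SUBSTR(cvr.entity_au_id, -7) = cust.au_id(+)
AND    cvr.entity_au_id   = 'AU000000000848'
AND    cas.product_code   = '417W00015'
GROUP  BY cvr.as_of_date, cvr.entity_au_id, cust.status, cas.product_code;
```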

How to sum data from all 3 unions

anan, October 07, 2010 - 12:20 pm UTC

Tom,
Correction:

no data from union 1,
volume 6066 is from union 2
volume 124272 is from union 3
(I want to sum volume from all 3 union data)

How to sum data ...

Leo Mannhart, October 08, 2010 - 8:00 am UTC

anan, you already know how to do it ;-)
Just do another "select ... from ( select ..." around your existing query.

SQL> select v1.as_of_date, v1.entity_au_id, v1.cas_au_status, v1.cas_prod, sum(v1.volume) volume
  2  from   (
  3          select to_date('10/31/2010', 'mm/dd/yyyy') as_of_date, 'AU000000000848' entity_au_id,
  4                 'N' cas_au_status, '417W00015' cas_prod, 6066 volume from dual
  5          union all
  6          select to_date('10/31/2010', 'mm/dd/yyyy'), 'AU000000000848',
  7                 'N', '417W00015',  124272 from dual
  8         ) v1
  9  group by v1.as_of_date, v1.entity_au_id, v1.cas_au_status, v1.cas_prod
 10  ;

AS_OF_DAT ENTITY_AU_ID   C CAS_PROD      VOLUME
--------- -------------- - --------- ----------
31-OCT-10 AU000000000848 N 417W00015     130338

SQL>

A reader, October 15, 2010 - 11:56 am UTC

Hello Sir,

I know this is a bit off-topic to post here...

Recently someone posted on OTN about the strange behaviour of a query that, when run on 9i and 10g, produces different output:

http://forums.oracle.com/forums/thread.jspa?threadID=1630653&start=0&tstart=75

could you please explain the reason for this? Also is it a bug?

Thanks









Tom Kyte
October 15, 2010 - 2:01 pm UTC

I cannot run your query

ops$tkyte%ORA11GR2> desc user_departments
ERROR:
ORA-04043: object user_departments does not exist

A reader, October 15, 2010 - 2:25 pm UTC

Hello Sir,

I am not the OP, it was someone else, but on page 2 Divya demonstrated it using the emp table.

Thanks
Tom Kyte
October 15, 2010 - 3:00 pm UTC

well, if you want to bring it over here to talk about it - that is one thing. But please don't expect me to go somewhere else and piece it all together.

give an example - from start to finish - so we can reproduce the findings and then we can talk about it.

Adding clarity to above post

Rajaram Subramanian, October 18, 2010 - 8:35 am UTC

Tom,

I am not the OP, and I am not the above poster "A Reader dated October 14 2010". However, I am quite intrigued to find out why the optimizer decides to perform a merge join cartesian in 10gR2 and a hash join semi in 11gR2.

Also, I do agree that this query doesn't make much sense. Still, I would like to know what makes the optimizer choose a different path between Oracle versions.

Please find my observation.

Works correctly in Version 11.2.0.1

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> explain plan for
  2  SELECT AC_CODE, SUM(AMT)
  3      FROM
  4      (
  5      SELECT '006' AC_CODE, 100 AMT
  6      FROM DUAL
  7      UNION ALL
  8      SELECT '006', 200
  9      FROM DUAL
 10      )
 11     WHERE AC_CODE IN
 12     (
 13     SELECT DECODE(2,1,ename,'006')
 14     FROM scott.emp
 15     )
 16     GROUP BY AC_CODE
 17  /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 767040572

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    14 |     9  (23)| 00:00:01 |
|   1 |  HASH GROUP BY      |      |     1 |    14 |     9  (23)| 00:00:01 |
|*  2 |   HASH JOIN SEMI    |      |     1 |    14 |     8  (13)| 00:00:01 |
|   3 |    VIEW             |      |     2 |    16 |     4   (0)| 00:00:01 |
|   4 |     UNION-ALL       |      |       |       |            |          |
|   5 |      FAST DUAL      |      |     1 |       |     2   (0)| 00:00:01 |
|   6 |      FAST DUAL      |      |     1 |       |     2   (0)| 00:00:01 |
|   7 |    TABLE ACCESS FULL| EMP  |    14 |    84 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("AC_CODE"=DECODE(2,1,"ENAME",'006'))

19 rows selected.

SQL> select count(*) from scott.emp;

  COUNT(*)
----------
        14

SQL> SELECT AC_CODE, SUM(AMT)
  2      FROM
  3      (
  4      SELECT '006' AC_CODE, 100 AMT
  5      FROM DUAL
  6      UNION ALL
  7      SELECT '006', 200
  8      FROM DUAL
  9      )
 10     WHERE AC_CODE IN
 11     (
 12     SELECT DECODE(2,1,ename,'006')
 13     FROM scott.emp
 14     )
 15     GROUP BY AC_CODE
 16  /

AC_   SUM(AMT)
--- ----------
006        300

SQL> select * from table(dbms_Xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0fswp2p4d1u9g, child number 0
-------------------------------------
SELECT AC_CODE, SUM(AMT)     FROM     (     SELECT '006' AC_CODE, 100
AMT     FROM DUAL     UNION ALL     SELECT '006', 200     FROM DUAL
)    WHERE AC_CODE IN    (    SELECT DECODE(2,1,ename,'006')    FROM
scott.emp    )    GROUP BY AC_CODE

Plan hash value: 767040572

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |     9 (100)|          |
|   1 |  HASH GROUP BY      |      |     1 |    14 |     9  (23)| 00:00:01 |
|*  2 |   HASH JOIN SEMI    |      |     1 |    14 |     8  (13)| 00:00:01 |
|   3 |    VIEW             |      |     2 |    16 |     4   (0)| 00:00:01 |
|   4 |     UNION-ALL       |      |       |       |            |          |
|   5 |      FAST DUAL      |      |     1 |       |     2   (0)| 00:00:01 |
|   6 |      FAST DUAL      |      |     1 |       |     2   (0)| 00:00:01 |
|   7 |    TABLE ACCESS FULL| EMP  |    14 |    84 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("AC_CODE"=DECODE(2,1,"ENAME",'006'))


27 rows selected.


Yields incorrect result in Version 10.2.0.4
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> explain plan for
  2  SELECT AC_CODE, SUM(AMT)
  3          FROM
  4          (
  5          SELECT '006' AC_CODE, 100 AMT
  6          FROM DUAL
  7          UNION ALL
  8          SELECT '006', 200
  9          FROM DUAL
 10          )
 11        WHERE AC_CODE IN
 12        (
 13        SELECT DECODE(2,1,ename,'006')
 14        FROM scott.emp
 15        )
 16        GROUP BY AC_CODE
 17  /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1381516717

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     2 |    28 |     8  (13)| 00:00:01 |
|   1 |  HASH GROUP BY        |      |     2 |    28 |     8  (13)| 00:00:01 |
|   2 |   MERGE JOIN CARTESIAN|      |     2 |    28 |     8  (13)| 00:00:01 |
|   3 |    TABLE ACCESS FULL  | EMP  |    14 |    84 |     3   (0)| 00:00:01 |
|   4 |    BUFFER SORT        |      |     2 |    16 |     5  (20)| 00:00:01 |
|*  5 |     VIEW              |      |     2 |    16 |     4   (0)| 00:00:01 |
|   6 |      UNION-ALL        |      |       |       |            |          |
|   7 |       FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
|   8 |       FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("AC_CODE"='006')

20 rows selected.

SQL> select count(*) from scott.emp;

  COUNT(*)
----------
        14

SQL> SELECT AC_CODE, SUM(AMT)
  2          FROM
  3          (
  4          SELECT '006' AC_CODE, 100 AMT
  5          FROM DUAL
  6          UNION ALL
  7          SELECT '006', 200
  8          FROM DUAL
  9          )
 10        WHERE AC_CODE IN
 11        (
 12        SELECT DECODE(2,1,ename,'006')
 13        FROM scott.emp
 14        )
 15        GROUP BY AC_CODE
 16  /

AC_   SUM(AMT)
--- ----------
006       4200

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  1ztud9349dmvq, child number 0
-------------------------------------
SELECT AC_CODE, SUM(AMT)         FROM         (         SELECT '006'
AC_CODE, 100 AMT         FROM DUAL         UNION ALL         SELECT
'006', 200         FROM DUAL         )       WHERE AC_CODE IN       (
    SELECT DECODE(2,1,ename,'006')       FROM scott.emp       )
GROUP BY AC_CODE

Plan hash value: 1381516717

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |       |       |     8 (100)|          |
|   1 |  HASH GROUP BY        |      |     2 |    28 |     8  (13)| 00:00:01 |
|   2 |   MERGE JOIN CARTESIAN|      |     2 |    28 |     8  (13)| 00:00:01 |
|   3 |    TABLE ACCESS FULL  | EMP  |    14 |    84 |     3   (0)| 00:00:01 |
|   4 |    BUFFER SORT        |      |     2 |    16 |     5  (20)| 00:00:01 |
|*  5 |     VIEW              |      |     2 |    16 |     4   (0)| 00:00:01 |
|   6 |      UNION-ALL        |      |       |       |            |          |
|   7 |       FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
|   8 |       FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("AC_CODE"='006')


29 rows selected.


As is quite evident in the plan, version 10.2.0.4 performs a merge join cartesian, whereas in version 11.2.0.1 the CBO does a hash join semi.

To me it looks like a bug. I searched Metalink for any known bug with respect to this issue but had no luck. If you could share your thoughts on this, it would be much appreciated.

Thanks for looking into this.

Regards

Rajaram
Tom Kyte
October 25, 2010 - 8:53 am UTC

Still, I would like to know what makes the optimizer choose a different path between Oracle versions.

that is entirely easy to answer. This will be the easiest question I'll take all day today!


The software is very different, that is why. There are thousands (tens, hundreds of thousands) of incremental nudges, fixes, adjustments to the optimizer's algorithms with every patch release, every upgrade.

I honestly did not read your posting beyond that - the reason is really quite simple - it is different software processing the same or similar inputs - and we EXPECT the optimizer to optimize differently.

Likely bug

William, October 24, 2010 - 5:09 pm UTC

What you can try doing is the following to disable the cartesian join:

alter session set "_optimizer_mjc_enabled"=false ;

Re-running the query will yield the correct result.

William, October 24, 2010 - 10:20 pm UTC

Disabling the MERGE JOIN CARTESIAN does not work either. The execution plan changes to NESTED LOOPS but still gives the wrong result.

Interesting also that using DISTINCT still gives the wrong result:
SELECT AC_CODE, SUM(AMT)
FROM ( SELECT '006' AC_CODE, 100 AMT
FROM DUAL
UNION ALL
SELECT '006', 200
FROM DUAL )
WHERE AC_CODE IN ( SELECT DISTINCT DECODE(2, 1, ENAME, '006')
FROM SCOTT.EMP )
GROUP BY AC_CODE;

whereas if we use a GROUP BY in the subquery it gets the right result:

SELECT AC_CODE, SUM(AMT)
FROM ( SELECT '006' AC_CODE, 100 AMT
FROM DUAL
UNION ALL
SELECT '006', 200
FROM DUAL )
WHERE AC_CODE IN ( SELECT DECODE(2, 1, ENAME, '006')
FROM SCOTT.EMP
GROUP BY DECODE(2, 1, ENAME, '006') )
GROUP BY AC_CODE;

This also gives the correct result:

SELECT AC_CODE, SUM(AMT)
FROM ( SELECT '006' AC_CODE, 100 AMT
FROM DUAL
UNION ALL
SELECT '006', 200
FROM DUAL )
WHERE AC_CODE IN ( SELECT DECODE(2, 1, '007', '006')
FROM SCOTT.EMP )
GROUP BY AC_CODE;
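For reference, William's examples all hinge on the defined semantics of IN (subquery): it acts as a semi-join, so each outer row qualifies at most once no matter how many times the subquery produces a matching value. The correct sum is therefore 300; the 4200 seen on 10.2.0.4 is the outer rows multiplied across the 14 EMP rows. A minimal sketch of that semantics, with Python standing in for the SQL engine (data made up to mirror the test case):

```python
# Outer inline view: the two rows being summed for account '006'
rows = [("006", 100), ("006", 200)]

# The subquery produces '006' once per emp row (14 times); IN semantics
# are set membership, so the duplicates must not multiply the outer rows.
subquery_values = {"006" for _ in range(14)}  # collapses to a single value

total = sum(amt for code, amt in rows if code in subquery_values)
print(total)  # 300, the semantically correct result (not 4200)
```

Any plan (cartesian, nested loops, hash join semi) must still honor these semantics, which is why the 4200 result is a wrong-results bug rather than just a plan difference.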

If you don't mind could you re-read my post end to end

Rajaram Subramanian, October 26, 2010 - 4:12 am UTC

Tom,

I do agree that with every major/minor release quite a lot of changes are made to the optimizer. But I would expect the same query to come up with the same result when run against the same data, irrespective of the Oracle version in which it is executed. I feel this is a fair assumption; correct me if I am wrong.

Probably if you re-read my post again (completely) you will understand what I mean.

Regards

Raj
Tom Kyte
October 26, 2010 - 7:58 pm UTC

the answer stands - different versions, different software.

It would appear there is a bug here that was corrected. Have you filed one? Do you have support - if not, I can file one if one hasn't been already.

But I would prefer that you file the bug if possible; that is the preferred route.

Retrieving duplicate rows from a table

A reader, November 29, 2010 - 2:24 am UTC

Hi Tom,

I want to retrieve duplicate rows from a table with lakhs (hundreds of thousands) of rows in it.
Suppose table desc: my_table(id number(10), name varchar2(10), location varchar2(15), nickname varchar2(5))
I have one query to do that:
select * from my_table group by id,name,location,nickname having count(*) > 1;

It is working, but how could I retrieve duplicate rows from a table which has many columns? Wouldn't it be tedious to list so many columns in the GROUP BY clause? Or is there a better query?
Tom Kyte
November 29, 2010 - 4:09 am UTC

tedious is in the eye of the beholder.

you have to group by everything you want to be considered part of the key.

duplicate rows

A reader, November 29, 2010 - 4:24 am UTC

Hi Tom,

So do you recommend the above query as best in this case, or is there a better query as far as performance is concerned?
Tom Kyte
November 29, 2010 - 4:39 am UTC

that is about it. you could use analytics but it wouldn't be any faster - likely slower in fact.

Duplicate Rows

Reader, November 29, 2010 - 5:31 am UTC

Hi Tom,

What about this way...
SELECT * FROM tab a
WHERE rowid >(SELECT MIN(rowid) FROM tab b
WHERE a.c1=b.c1
AND a.c2=b.c2
....
the whole key
)
;
Tom Kyte
November 29, 2010 - 6:50 am UTC

how would that be easier to code than group by a,b,c,d... (it would be harder)

and would require at least two passes on the table - if not a lot more than that.

would you really want to have to look up for every row to see if it was the "min(rowid)" for that key? That could get quite expensive.

A reader, November 30, 2010 - 8:24 am UTC

"
What about this way...
SELECT * FROM tab a
WHERE rowid >(SELECT MIN(rowid) FROM tab b
WHERE a.c1=b.c1
AND a.c2=b.c2
....
the whole key
)
;
"

If you are looking for duplicate rows, then the above query would technically output duplicate rows more than once,
since you are selecting all rows greater than the min rowid for the given set of keys?

or am i missing something?

Thanks

Tom Kyte
November 30, 2010 - 12:02 pm UTC

it would do that, but no big deal, you are looking for duplicate rows - it satisfies the original intent - of finding them.

but it would not be very efficient at doing so.
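The group-by-every-column approach Tom endorses can be sketched outside the database; here collections.Counter plays the role of GROUP BY ... HAVING COUNT(*) > 1 (the table rows are made up for illustration):

```python
from collections import Counter

# Hypothetical rows of my_table: every column participates in the "key"
rows = [
    (1, "amy", "london", "am"),
    (2, "bob", "paris",  "bb"),
    (1, "amy", "london", "am"),   # exact duplicate of the first row
]

# Equivalent of: GROUP BY all columns HAVING COUNT(*) > 1
counts = Counter(rows)
duplicates = [row for row, n in counts.items() if n > 1]
print(duplicates)  # [(1, 'amy', 'london', 'am')]
```

Like the SQL version, this makes one pass over the data and reports each duplicated key once, whereas the min(rowid) correlated subquery needs a per-row lookup.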

Removing Duplicate ' characters' in string.

Rajeshwaran Jeyabal, December 02, 2010 - 11:23 am UTC

scott@11GR2> with t as
  2  (
  3  select 'CHARACTER' str  from dual
  4  )
  5  select replace(sys_connect_by_path(str,','),',') str
  6    from (
  7           select rownum rno, str
  8             from (
  9                    select level l, substr(str,level,1) str, row_number() over(partition by substr(str,level,1) order by 1) my_rno
 10                      from t
 11                    connect by level <= length(str)
 12                    order by l
 13                  )
 14            where my_rno = 1
 15                  )
 16   where connect_by_isleaf = 1
 17   start with rno = 1
 18  connect by rno = prior rno+1
 19  /

STR
---------------------------
CHARTE


Tom:
Can this be achieved using a regex?

I tried this to split out each character, but I don't know how to remove duplicates. Please help.

scott@11GR2> select regexp_replace('character','(.)','\1 ') from dual;

REGEXP_REPLACE('CH
------------------
c h a r a c t e r
scott@11GR2>
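For comparison, the effect the hierarchical query above achieves (keep the first occurrence of each character, preserving position order) is straightforward to express procedurally. This Python sketch reproduces the CHARTE result:

```python
def dedupe_chars(s: str) -> str:
    """Keep the first occurrence of each character, in original order --
    the same result the ROW_NUMBER()/SYS_CONNECT_BY_PATH query produces."""
    seen = set()
    out = []
    for ch in s:
        if ch not in seen:
            seen.add(ch)
            out.append(ch)
    return "".join(out)

print(dedupe_chars("CHARACTER"))  # CHARTE
```

A plain REGEXP_REPLACE cannot do this directly, because "remove every character seen earlier anywhere in the string" requires remembered state rather than a fixed pattern.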


Problem with Contains keyword in query

Bharath, December 03, 2010 - 12:30 am UTC

I am getting an error if I use a numerical value (e.g. 66) or a numerical value with a hyphen (%6-6%) as the search keyword. If I remove the %, the query works, but I don't wana remove % coz if I want to search for a word like BGS6-67A34 I need %.

Query:
select * from line_item_price where status = 'A' and (contains(partnumberkeyword, '6{-}6|%6{-}6%|66|%66%',100)>0 or contains(keyword, '6{-}6%',20)>0 or contains(keyword,'6{-}6',10)>0)


Error:
java.sql.SQLException: ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: Oracle Text error:
DRG-51030: wildcard query expansion resulted in too many terms

Thank you.
Tom Kyte
December 07, 2010 - 9:16 am UTC

wana? coz?

did you read the error message? It isn't due to numerical values. It is due to "wildcard query expansion resulted in too many terms". Your wildcard returns so many hits that it isn't possible to use Text to query it.

search with instr and substr

keke, December 06, 2010 - 12:10 pm UTC

Tom,

We need to do a complicated search on a clob column, and here is the query being used:

select createdtime,to_char(substr(message,instr(message, 'CLASS')+6,instr(substr(message, instr(message,'CLASS')+5),chr(2))-2)) messagetype
from currentmessage
where createdtime > to_date('12/2/2010 11:00', 'MM/DD/YYYY HH24:MI')
and createdtime < to_date('12/2/2010 14:00', 'MM/DD/YYYY HH24:MI');

My questions are:
1. Does Oracle cache the result of the first instr(message, 'CLASS') so the second one won't need to calculate it again? If it does, where does it cache it?

2. This query took about 5 minutes to finish. I wonder if there is a better or faster way to do it, like indexing...
Currently there is no index on this column.

Thanks much!!


Tom Kyte
December 07, 2010 - 9:51 am UTC

1) not defined - it might, it might not, it doesn't have to.


2) How big are the rows? How many rows does it return? How many rows are in the table? Is there an index on createdtime?
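The nested INSTR/SUBSTR expression in the question slices out the token between 'CLASS' (plus one separator character) and the next chr(2) delimiter. Its effect can be sketched procedurally; the message layout here is assumed from the expression, not given in the question:

```python
STX = "\x02"  # chr(2), the delimiter the SQL looks for

def message_type(message: str, marker: str = "CLASS") -> str:
    """Extract the token following '<marker>' plus one separator char,
    up to the chr(2) delimiter -- the same slice the nested
    INSTR/SUBSTR arithmetic in the query computes."""
    start = message.index(marker) + len(marker) + 1  # skip marker + separator
    end = message.index(STX, start)
    return message[start:end]

msg = "hdr|CLASS:ORDER\x02rest-of-payload"   # hypothetical message layout
print(message_type(msg))  # ORDER
```

Since the database cannot index the result of such an expression on a CLOB directly, precomputing this token into its own indexed column (e.g. via a virtual column or at load time) is usually the practical fix for the 5-minute runtime.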

Using source_type, source_ID generic columns in a table.

Vijay, December 06, 2010 - 3:47 pm UTC

In our product we have several tables that have source_type, source_id generic fields. This provides general flexibility in configuring our tool for various products. However, I want to know your opinion about using option A vs. option B.

Option a) create table inventory(source_type varchar2(10),source_id number, trxdate date, qty number);

Source Type = 'TICKET', Source ID = Ticket_ID
Source Type = 'RECEIPT', Source ID = Receipt_ID

Pros: gives flexibility such that the table does not require modification for a new product type

Cons: Queries become cumbersome to write

Option b) create table inventory(ticket_id number,receipt_id number,trx_date date,qty number);

pros: i can have real foreign keys instead of database triggers
i can have indexes making use of the fields
cons: for each product type launch, table definition need to be modified and managed.

My question to you is: from a performance point of view, which one do you suggest is the better choice when designing the tables?

Are there any other serious cons in using option A ?

Tks
Vijay
Tom Kyte
December 07, 2010 - 10:07 am UTC

... Cons: Queries become cumbersome to write
...

not only that, but they do not perform - and doing constraints is virtually impossible, getting an optimal query plan is out of the question. You'll have dirty data (please don't tell me "the application validates it"; if you say that then I know with 100% confidence that the data is dirty). You'll have slow performance.

But gee - the developers can take the rest of the week off!


.... i can have real foreign keys instead of database triggers ...

Ok, now I know 110% that you have bad data - not even a question anymore. If you are using triggers to enforce constraints - not only are you running about as slow as you can - but unless they include a LOCK TABLE command - they are probably done wrong!!

... cons: for each product type launch, table definition need to be modified and
managed. ...

oh my, you mean the developers might have to work and define things after all?



I have written a ton on these generic models and what I think of them - you can probably get my gist from this short answer :)


Option a:

assures you poor performance
assures you complex queries
assures you dirty data

what's not to like about it?



A reader, January 05, 2011 - 1:20 am UTC


Correlated Query - Help

Rajeshwaran Jeyabal, January 09, 2011 - 6:00 am UTC

rajesh@10GR2>
rajesh@10GR2> drop table emp purge;

Table dropped.

Elapsed: 00:00:00.61
rajesh@10GR2>
rajesh@10GR2> create table emp as select * from scott.emp;

Table created.

Elapsed: 00:00:00.26
rajesh@10GR2>
rajesh@10GR2> insert into emp
  2  select * from scott.emp
  3  where deptno = 20
  4  and rownum <=2;

2 rows created.

Elapsed: 00:00:00.01
rajesh@10GR2>
rajesh@10GR2> commit;

Commit complete.

Elapsed: 00:00:00.01
rajesh@10GR2>
rajesh@10GR2> select e1.*
  2  from emp e1
  3  where 2> ( select count(*) from emp e2
  4                     where e1.ename = e2.ename
  5                     and e1.deptno = e2.deptno
  6                     and sysdate > e2.hiredate
  7                     and e2.deptno in (10,20))
  8  order by deptno,ename
  9  /

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30

12 rows selected.

Elapsed: 00:00:00.09
rajesh@10GR2>


Tom:

I have some queries from the application similar to this. Can this be done in a better way?
Tom Kyte
January 10, 2011 - 8:15 am UTC

why do people post a query and NOT the specification? That means the person receiving the query has to:

a) reverse engineer it
b) assume it is correct
c) assume that everything it does is necessary.


That query seems to be saying:

find all employee records such that there is only one record with the same name in the same deptno whose hiredate is less than right now and whose deptno is either 10 or 20


This looks like a nonsense question to me - so - how about this. Phrase your question in "text", in the form of a clear specification, something the "business" would have given you to generate code from.

Then we can talk.

When I run your query - I get 14 rows

Obvious "optimization" might be "where deptno NOT IN (10,20) or 2>(...)" - you know you want every employee in deptnos that are not 10, 20 - assumes deptno is NOT NULL of course...

other rewrites


ops$tkyte%ORA11GR2> select *
  2  from(
  3  select empno, ename, deptno,
  4         count( case when deptno in (10,20) and sysdate > hiredate then 1 end)
  5        over (partition by ename, deptno) cnt_ename_deptno
  6    from emp emp
  7  )
  8  where cnt_ename_deptno < 2
  9   order by deptno, ename
 10  /
     EMPNO ENAME          DEPTNO CNT_ENAME_DEPTNO
---------- ---------- ---------- ----------------
      7782 CLARK              10                1
      7839 KING               10                1
      7934 MILLER             10                1
      7876 ADAMS              20                1
      7902 FORD               20                1
      7788 SCOTT              20                1
      7499 ALLEN              30                0
      7698 BLAKE              30                0
      7900 JAMES              30                0
      7654 MARTIN             30                0
      7844 TURNER             30                0
      7521 WARD               30                0

12 rows selected.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select empno, ename, deptno
  2    from emp e1
  3   where (ename,deptno) NOT IN
  4                 ( select ename, deptno
  5                     from emp e2
  6                                    where sysdate > hiredate
  7                                      and deptno in (10,20)
  8                                          and ename is not null
  9                                          and deptno is not null
 10                                    group by ename, deptno
 11                                   having count(*) > 1 )
 12   order by deptno,ename
 13  /
     EMPNO ENAME          DEPTNO
---------- ---------- ----------
      7782 CLARK              10
      7839 KING               10
      7934 MILLER             10
      7876 ADAMS              20
      7902 FORD               20
      7788 SCOTT              20
      7499 ALLEN              30
      7698 BLAKE              30
      7900 JAMES              30
      7654 MARTIN             30
      7844 TURNER             30
      7521 WARD               30

12 rows selected.




might need minor tweaking if ename or deptno is "nullable"
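The analytic rewrite above counts, within each (ename, deptno) group, the rows satisfying the original subquery's conditions, then filters on that count. A procedural sketch of the same logic (the emp rows here are made up and abbreviated; SCOTT stands in for whichever row was duplicated):

```python
from collections import Counter
from datetime import date

# (empno, ename, deptno, hiredate) -- a few illustrative rows
emps = [
    (7782, "CLARK", 10, date(1981, 6, 9)),
    (7788, "SCOTT", 20, date(1982, 12, 9)),
    (7788, "SCOTT", 20, date(1982, 12, 9)),   # duplicated row, as in the test case
    (7499, "ALLEN", 30, date(1981, 2, 20)),
]

today = date(2011, 1, 10)  # stand-in for SYSDATE

# count(case when deptno in (10,20) and sysdate > hiredate then 1 end)
#   over (partition by ename, deptno)
cnt = Counter(
    (ename, deptno)
    for _, ename, deptno, hired in emps
    if deptno in (10, 20) and today > hired
)

# where cnt_ename_deptno < 2
result = [(empno, ename, deptno) for empno, ename, deptno, _ in emps
          if cnt[(ename, deptno)] < 2]
print(result)  # CLARK and ALLEN survive; the duplicated SCOTT rows are filtered
```

Note how ALLEN passes with a count of zero: rows outside deptno 10/20 never satisfy the CASE, which is exactly why Tom's "deptno NOT IN (10,20) OR ..." shortcut is equivalent when deptno is NOT NULL.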

phone_question

A reader, January 31, 2011 - 2:18 pm UTC

Hi, I have a question; I haven't worked on this kind of problem for a long time, please help me. I have a table which has tel_no, date, status; status can be just open or close.
You can talk on the phone and then you must close it. I want to know
how much money I spend on a call after the talk,
but you can talk many different times with one tel_no number.
I want (close time - open time) multiplied by $10
for every minute. Please help.

phone_question_again

A reader, February 01, 2011 - 5:09 am UTC

create table tel_phone (
tel_no varchar(11),
status varchar(4),
date1 date)

I have these values:

te_no status date1
12345678932 A 10-FEB-2010 00:00:00
12345678932 K 11-FEB-2010 00:00:00
12345932544 A 11-JAN-2010 00:00:00
12345932544 K 15-JAN-2010 00:00:00
12345932559 A 15-OCT-2010 00:00:00
12345932559 K 20-OCT-2010 00:00:00
12345932545 A 20-OCT-1990 00:00:00
12345932545 K 28-OCT-1990 00:00:00
12345932025 A 11-JAN-2011 00:00:00
12345932025 K 17-JAN-2011 00:00:00
12345678932 A 10-FEB-2010 10:30:30
12345678932 K 10-FEB-2010 11:30:30
12345678932 A 10-FEB-2010 07:30:30
12345678932 K 10-FEB-2010 08:15:30
12345932544 A 11-FEB-2010 08:30:30
12345932544 K 11-FEB-2010 08:35:30
12345678932 K 10-FEB-2010 01:01:01
12345678932 A 10-FEB-2010 11:55:01


Here I have a tel number; I can open the phone and close it at any time in a day.
For example tel_no=05312473496 status=open date1=10-FEB-2010 07:45:10 am,
then I close the phone: status=close date1=10-FEB-2010 08:10:20 am.
Then I open the same phone number again at date1=10-FEB-2010 09:40:10 am, status=open,
and the close time is date1=10-FEB-2010 10:10:20 am, status=close.
Then I want, for every minute of (close date - open date), $10; this is the cost for every talk.


I have many phone numbers like this; how can I solve this problem?
Tom Kyte
February 01, 2011 - 4:51 pm UTC

no inserts?

why do you use 05312473496 in your example, but in your data you never do?

why do you use status=open in your text but in your data you have only A and K?

It is not at all clear to me what you mean.

pho_question

omer, February 02, 2011 - 1:47 am UTC


hi tom, sorry I wasn't clear, u are right


create table tel_phone (
tel_no varchar(11),
status varchar2(5),
date1 date
)

tel_no status date1

12345678932 open 10-FEB-2010 00:00:00
12345678932 close 11-FEB-2010 00:00:00
12345932544 pen 11-JAN-2010 00:00:00
12345932544 close 15-JAN-2010 00:00:00
12345932559 pen 15-OCT-2010 00:00:00
12345932559 close 20-OCT-2010 00:00:00
12345932545 pen 20-OCT-1990 00:00:00
12345932545 close 28-OCT-1990 00:00:00
12345932025 pen 11-JAN-2011 00:00:00
12345932025 close 17-JAN-2011 00:00:00
12345678932 pen 10-FEB-2010 10:30:30
12345678932 close 10-FEB-2010 11:30:30
12345678932 pen 10-FEB-2010 07:30:30
12345678932 close 10-FEB-2010 08:15:30
12345932544 open 11-FEB-2010 08:30:30
12345932544 close 11-FEB-2010 08:35:30
12345678932 close 10-FEB-2010 01:01:01
12345678932 open 10-FEB-2010 11:55:01

I have this table and these columns. For the same tel_no there is an open then a close, maybe then again an open and a close.
For every open-then-close I want how many minutes of talk; then for every minute multiply by $10 for the cost.
For example tel_no=12345678932 status=open, date1=10-FEB-2010 00:00:00 and
tel_no=12345678932 status=close, date1=10-FEB-2010 01:01:01; then ((10-FEB-2010 01:01:01) - (10-FEB-2010 00:00:00))*24*60*10 is the cost for that call.
but,the close date must be minimum date after open time plz i think u understant

Tom Kyte
February 02, 2011 - 7:41 am UTC

who is U?

and there still aren't any inserts in sight??

plz? german postal codes? What does that have to do with anything???


play around with this query:

select tel_no, status, next_status, date1, next_date, 
       (next_date-date1)*24*60 minutes
  from (
select tel_no, status, 
       lead(status) over (partition by tel_no order by date1) next_status, 
       date1, 
       lead(date1) over (partition by tel_no order by date1) next_date
  from tel_phone
       )
 where status = 'open' and next_status = 'close';




I couldn't test it since I don't have any data (pen <> open - don't expect anyone to FIX your example AND format it into inserts - do it like I do - give the full code)
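Tom's LEAD() query pairs each 'open' row with the next event for the same tel_no and keeps only open-then-close pairs. A small Python simulation of that windowing, using a few rows from the question's data and the $10-per-minute rate (the two-decimal rounding is my assumption):

```python
from datetime import datetime
from itertools import groupby

# (tel_no, status, date1) -- sample rows from the question
calls = [
    ("12345678932", "open",  datetime(2010, 2, 10, 0, 0, 0)),
    ("12345678932", "close", datetime(2010, 2, 10, 1, 1, 1)),
    ("12345678932", "open",  datetime(2010, 2, 10, 7, 30, 30)),
    ("12345678932", "close", datetime(2010, 2, 10, 8, 15, 30)),
]

costs = []
# "partition by tel_no order by date1"; LEAD() = the next row in that order
calls.sort(key=lambda r: (r[0], r[2]))
for tel_no, grp in groupby(calls, key=lambda r: r[0]):
    grp = list(grp)
    for cur, nxt in zip(grp, grp[1:]):          # (row, lead(row))
        if cur[1] == "open" and nxt[1] == "close":
            minutes = (nxt[2] - cur[2]).total_seconds() / 60
            costs.append((tel_no, cur[2], round(minutes * 10, 2)))

print(costs)
```

The first call (midnight to 01:01:01) is 61.02 minutes, costing about $610.17; the second (07:30:30 to 08:15:30) is exactly 45 minutes, costing $450.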

phone_question continue

A reader, February 02, 2011 - 9:16 am UTC

Open and close; sorry, there is no "pen", just open and close.
Tom Kyte
February 02, 2011 - 10:11 am UTC

I've gone as far as I can go with the information supplied. I kept asking for stuff - never got it. I've done as much as I can with the above query.

I was just pointing out that the presentation of the question was quite sloppy. The data was incorrect - multiple times. We asked for the inserts - multiple times. Make it easy for us - do the extra work to make sure everything makes sense. To make sure that the text you write matches the examples you give. And so on...

phone_question

omer, February 02, 2011 - 9:30 am UTC

hi tom, I'm omer. Thank you very much, it works. Sorry;
this was the first time I read the web site. Another time I will be more careful when I ask a question. Thanks very much.

Update statement

olerag, February 02, 2011 - 7:35 pm UTC

In a pl/sql block, where "x" is the passed value, if a statement looks something like this...

UPDATE myTable
SET amount = 5
WHERE recId = getRecId(x);

IF sql%NOTFOUND THEN
RAISE....
END IF;
/

The function is causing the update to fail and the error handler to fire.

However, on the other hand, if the statement is written as such...

vKey = getRecId(x);

UPDATE myTable
SET amount = 5
WHERE recId = vKey;
IF sql%NOTFOUND THEN
RAISE....
END IF;
/

The update is performed as the function now (mysteriously) finds the key. For simplicity, the function "getRecId" is a public function contained in a package and the call is using the schema.packageName.functionName.

I have no idea why the function works when its result is first placed into a predefined bind variable, but fails when injected directly as part of the where clause. We have many other statements that utilize the latter and they work just fine. This is only happening with this particular function.

Have I provided enough info or do you need the complete source??
Tom Kyte
February 03, 2011 - 3:13 pm UTC


Your function getRecId must be raising an unhandled "no data found". In general the code would work:

ops$tkyte%ORA11GR2> create table t ( x int );

Table created.

ops$tkyte%ORA11GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace function getid( p_data in number ) return number
  2  as
  3          l_data number;
  4  begin
  5          select p_data into l_data from dual;
  6          return l_data;
  7  end;
  8  /

Function created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          l_data number := 1;
  3  begin
  4          update t set x = x+1 where x = getid(l_data);
  5          if sql%notfound then
  6                  raise program_error;
  7          end if;
  8  end;
  9  /

PL/SQL procedure successfully completed.



but if we modify getId to raise a no data found:

ops$tkyte%ORA11GR2> create table t ( x int );

Table created.

ops$tkyte%ORA11GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace function getid( p_data in number ) return number
  2  as
  3          l_data number;
  4  begin
  5          select p_data into l_data from dual WHERE 1=0;
  6          return l_data;
  7  end;
  8  /

Function created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          l_data number := 1;
  3  begin
  4          update t set x = x+1 where x = getid(l_data);
  5          if sql%notfound then
  6                  raise program_error;
  7          end if;
  8  end;
  9  /
declare
*
ERROR at line 1:
ORA-06501: PL/SQL: program error
ORA-06512: at line 6



it fails.
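Tom's two PL/SQL runs show that an exception escaping a function invoked per row by the WHERE clause aborts the whole statement. The same failure mode can be sketched in any language; here is an illustrative Python analogue (the names are made up, not Oracle API):

```python
class NoDataFound(Exception):
    """Stand-in for PL/SQL's NO_DATA_FOUND."""
    pass

def get_id(x, lookup):
    # Stand-in for a function doing SELECT ... INTO: an empty
    # result raises an exception rather than returning NULL.
    if x not in lookup:
        raise NoDataFound(x)
    return lookup[x]

rows = [1, 2, 3]
lookup = {1: 1, 2: 2, 3: 3}

# Works: every per-row call succeeds, so the "statement" completes.
updated = [r for r in rows if r == get_id(r, lookup)]
assert updated == [1, 2, 3]

# Fails: one per-row call raises, aborting the whole "statement".
try:
    [r for r in rows if r == get_id(r, {})]
except NoDataFound:
    print("statement aborted by unhandled exception")
```

Precomputing the function result into a variable first (as the original poster did) moves the call outside the statement, which is why the failure seemed to "mysteriously" disappear.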

Same topic as previous

olerag, February 03, 2011 - 2:46 pm UTC

To expand, it would appear functions that are placed into "where clauses" of updates/deletes that actually perform "select" statements are not permitted.

Well, sometimes and sometimes not. I guess it is this when/when not that I cannot find much/any documentation on.

By the way, if the function doesn't involve any SQL and simply computes a mathematical result, Oracle doesn't seem to have a problem with it, and it works just fine in "where clauses".

I suppose this gets back to the fact that, from time to time, Oracle PL/SQL doesn't trust developers and implements its own restrictive rules. Of course, trying to determine when this happens, how to handle such events, and, most importantly, WHY Oracle would do such a thing, escapes me.

Now that I've spewed my frustration, please let me know how you feel and I really "want you to give it to me" if you believe I deserve it. I'm probably missing something and I typically understand it better when I'm abused.

Thanx Tom
Tom Kyte
February 03, 2011 - 4:09 pm UTC

... To expand, it would appear functions that are placed into "where clauses" of
updates/deletes that actually perform "select" statements are not permitted. ...

how so? My example right above demonstrates that is not correct.

Your function is raising a NO_DATA_FOUND; it is an unhandled exception, and that is getting returned by the update as "no data found".

olerag

olerag, February 03, 2011 - 4:56 pm UTC

Perhaps, but as I initially mentioned, when I put the function result into a bind variable instead of the "where clause" the key is "mysteriously" found.

And, to repeat, we are doing this in many places (using bind variables and in the "where" clause) and we get no problems.

Finally, the "sql%notfound" that is used works well to trap the problem. However (and, using your "lingo") the "exception" really isn't an exception as the phrase...

exception
when no_data_found OR others

does not handle the exception. Hence Oracle doesn't really believe an exception has occurred, but merely that the function didn't return (find) a value.

In actuality, the function returns the key value when initially implemented in a bind variable but not when placed directly into the "where clause".

This is very strange and I cannot understand why this one particular function fails. And you confirmed that it should work (unless the sql fails).

If you like, I can forward you a test case that involves one silly table, one silly package (with several public functions), and one silly implementation pl/sql block. You can then confirm the function doesn't work in the where clause but does when used with a bind variable.

Would you like the source???? I can send it tomorrow morning when I get back to the office.

But I really love the fact you respond so fast. I really do!
Tom Kyte
February 04, 2011 - 8:28 am UTC

You have a bug in your code - somewhere - it is raising an unhandled no data found when invoked from the where clause.

If you post a bit of code, perhaps we can help you identify where the issue is.


If you like, I can forward you a test case that involves one silly table, one
silly package (with several public functions), and one silly implementation
pl/sql block. You can then confirm the function doesn't work in the where
clause but does when used with a bind variable.


definitely do that and we'll post the reason why you are seeing what you are seeing and try to make it clear (so you don't think it is "magic" or "arbitrary" - it isn't, you have a select into that isn't returning data *probably*)

Function and Update statement

olerag, February 04, 2011 - 5:14 am UTC

This ranting of mine can probably be ended. My guess is the function I'm trying to call is actually performing a "select" from the table I'm trying to update. Hence, Oracle doesn't like this.

By placing the function call in a preceding bind variable before the update statement, all works well.

I suppose I'm still lost as to why an "exception" doesn't catch this or, perhaps, I'm expecting "others" to catch this when maybe some other exception should be used. But it really doesn't matter as "sql%notfound" traps the failure.

So...I suppose the rule is "do not utilize a function that performs a select on a table within an update/delete on that same table."
Tom Kyte
February 04, 2011 - 10:04 am UTC

ops$tkyte%ORA11GR2> create or replace function foo( p_x in number ) return number
  2  as
  3          l_cnt number;
  4  begin
  5          select count(*) into l_cnt
  6            from t
  7           where x >= p_x;
  8  
  9          return l_cnt;
 10  end;
 11  /

Function created.

ops$tkyte%ORA11GR2> update t set z = 42 where y = foo(x);
update t set z = 42 where y = foo(x)
                              *
ERROR at line 1:
ORA-04091: table OPS$TKYTE.T is mutating, trigger/function may not see it
ORA-06512: at "OPS$TKYTE.FOO", line 5



You'd get a mutating table error - not a no data found. My guess is you have the ever EVIL AND HORRIFIC "when others then <don't really do anything, pretend everything is OK>" in there somewhere.

Post an example and we'll explain why

a) it does what it does
b) it should be doing what it does
c) the approach was flawed from the get go.


Think about it - if you try to read a table you are modifying and you are actually IN THE MIDDLE of modifying it - what kind of "junk" would you expect to see?


I can tell you how to avoid the mutating table error - however I won't. It would lead to even worse things happening - the code would "run" but it would be flawed in a huge way.

Same thing

olerag, February 04, 2011 - 3:00 pm UTC

>Think about it - if you try to read a table you are modifying and you are actually IN THE MIDDLE of modifying it - what kind of "junk" would you expect to see? <

Considering I'm selecting a column from the table that isn't changing in order to acquire the PK value to change another column's value, I'm not expecting any junk - just the key.

But, yes, this is precisely what is happening and, when I run the function from SQL I get the same error. From plsql, again, the "sql%notfound" traps the error.

Yes, I was wondering what kind of mysterious conjuration was happening and now I know why and how to deal with it.

Thanx for the confirmation.


Tom Kyte
February 06, 2011 - 11:53 am UTC

... Considering I'm selecting a column from the table that isn't changing in order
to acquire the PK value to change another column's value, I'm not expecting any
junk - just the key. ...


How do we know that? Your update could be updating many rows - there could be triggers firing, lots of stuff *IN GENERAL* going on. You would (could, have the ability to) get junk.


Why does your update need to read a different row? (if it doesn't need a different row - you don't need your function, you have access to all of the other columns)

Please think about what happens in a multi-user database here - what happens if the row you are reading and copying data from is being changed by some other user at the same time. You won't see their modifications - they won't see yours. I think you have a flaw in your data model here - one that will lead to data inconsistencies over time. I say this because I see it happen all of the time.

Getting execution time

A reader, February 14, 2011 - 8:02 am UTC

Hi Tom,

How can we get the exact execution time when we run the query repeatedly?

When we run the query for the first time, it undergoes all the parsing and execution plan preparation. On subsequent runs, it takes the plan from the cache. Now, I think we can clear the system cache on 10g to get a fresh execution plan every time.

But I wanted to know if we can achieve that in some other way, apart from restarting the server.


Tom Kyte
February 14, 2011 - 9:07 am UTC

... Now I think, we can clear the system cache on 10G to get the fresh execution
plan every time. ...

why the heck would you even consider doing that.

Having an empty cache is more artificial than anything - you would NEVER have an empty cache.

do not clear caches before benchmarking, just run a representative workload to benchmark. You know, do something that would actually happen in real life.

question

omer, February 14, 2011 - 2:02 pm UTC

hi tom ı'm omer from Turkey 1 month i read u in this
website u solve sql proplem very prefessional
ı wanna ask u do u know any e-book when i finish then write
like u good sql plz help me
ı want to write like u but ı work with sql in 2 months
thanks tom
Tom Kyte
February 14, 2011 - 2:19 pm UTC

... ı want to write like u ...

start with using real words and not letters as a representation of words. PLZ is shorthand for German Postal codes. U is a letter, not a word. I know you are not a native speaker of English - but it takes more work for you to use these "icons" rather than real words - since you were taught the real words and making the mapping from real words to these fake "icons" is extra work for you.

There are many books on SQL - some available as e-books, some not. Joe Celko's material is highly regarded.

abaut sql

omer, February 17, 2011 - 7:02 am UTC

Thank you very much. I will use English like you said. Sorry.

Query help !

Rajeshwaran, Jeyabal, February 18, 2011 - 4:59 am UTC

drop table t purge;
create table t(x number,y date,z number);

insert into t values(1,to_date('01-jan-2011','dd-mon-yyyy'),1);
insert into t values(2,to_date('01-jan-2010','dd-mon-yyyy'),1);
insert into t values(3,to_date('01-jan-2009','dd-mon-yyyy'),1);

insert into t values(4,to_date('01-jan-2009','dd-mon-yyyy'),2);
insert into t values(5,to_date('01-jan-2000','dd-mon-yyyy'),2);

commit;

rajesh@10GR2> select x,z from t where z in (
  2  select z
  3  from t
  4  where y between to_date('01-jan-2010','dd-mon-yyyy')
  5  and to_date('31-dec-2010','dd-mon-yyyy') );

         X          Z
---------- ----------
         3          1
         2          1
         1          1

Elapsed: 00:00:00.79
rajesh@10GR2>
rajesh@10GR2> select t2.x,t2.z
  2  from t t1, t t2
  3  where t1.y  between to_date('01-jan-2010','dd-mon-yyyy')
  4  and to_date('31-dec-2010','dd-mon-yyyy')
  5  and t1.z = t2.z;

         X          Z
---------- ----------
         1          1
         2          1
         3          1

Elapsed: 00:00:00.81
rajesh@10GR2>


Tom:

If the value of Y is between 01-jan-2010 and 31-dec-2010, then get the value of Z and all the corresponding values of X.

1) Is there any better approach than these, Tom?

2) (Note: we have a similar table in production with rowlen=80 and rowcount=1300M.) Table 'T' is non-partitioned - will creating a global range-partitioned index on column 'Y' be helpful?


Tom Kyte
February 18, 2011 - 9:08 am UTC

these are "other" approaches - the best approach "depends" on your data.

Suggestion: benchmark them against your own data

ops$tkyte%ORA11GR2> select *
  2    from (
  3  select x, z, count( case
  4                      when y between to_date('01-jan-2010','dd-mon-yyyy')
  5                             and to_date('31-dec-2010','dd-mon-yyyy')
  6                      then 1
  7                      end ) over (partition by z) cnt
  8    from t
  9         )
 10   where cnt > 0
 11  /

         X          Z        CNT
---------- ---------- ----------
         1          1          1
         2          1          1
         3          1          1


ops$tkyte%ORA11GR2> select x,z
  2    from t t1
  3   where exists (select null from t t2 where t2.z = t1.z and t2.y
  4                                between to_date('01-jan-2010','dd-mon-yyyy')
  5                                and to_date('31-dec-2010','dd-mon-yyyy'))
  6  /

         X          Z
---------- ----------
         3          1
         2          1
         1          1




... 'T' is NON-partitioned will creating a global range partitioned index on column 'Y' will be helpful? ...

maybe, it depends on how many rows are between your dates - indexes are good for getting relatively small sets of data out of big sets of data - if the rows in question represent a big set of data - we will not use the index in general.
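The EXISTS rewrite above can also be cross-checked outside Oracle. A minimal sketch using Python's bundled sqlite3 module, with the poster's rows loaded into an in-memory table (dates stored as ISO-8601 text is an assumption of this sketch, so BETWEEN works on strings):

```python
import sqlite3

# In-memory copy of the poster's table T (x, y, z); y kept as ISO text.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (x integer, y text, z integer)")
conn.executemany("insert into t values (?,?,?)", [
    (1, "2011-01-01", 1),
    (2, "2010-01-01", 1),
    (3, "2009-01-01", 1),
    (4, "2009-01-01", 2),
    (5, "2000-01-01", 2),
])

# The EXISTS approach: keep every row whose Z has at least one
# row dated inside the window.
rows = conn.execute("""
    select x, z
      from t t1
     where exists (select 1 from t t2
                    where t2.z = t1.z
                      and t2.y between '2010-01-01' and '2010-12-31')
     order by x
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 1)]
```

Only z=1 has a 2010 row, so all three z=1 rows survive - the same answer as both Oracle variants above.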

SQL Query

sam, February 18, 2011 - 11:37 am UTC

I have a Crystal Report based on this SQL command, which uses 3 parameters.
The user enters a "begin date" and an "end date", and I sum the quantity ordered and shipped for each stock by month.

Because I want to use a dynamic parameter in Crystal, having the SQL in the report will not work, so I want to create a VIEW in the DB and select from the VIEW.

The problem is that I SUM quantity requested and quantity shipped for the date range given in the subquery.

I also can't pass parameters to a VIEW before I run it.


Is there a way to rewrite this so that a view will work?

A 4-table join between the orders and shipment detail tables may not work.



select a.stock_number,a.description,a.category,vendor vendcode,
(select decode(sum(quantity_requested),null,0,sum(quantity_requested))
from stock_requested_item c, stock_request d
where c.request_id=d.request_id and c.stock_number=a.stock_number
and d.request_date between {?P_Begin_Date} and {?P_End_Date}
and d.ship_to_org like decode('{?P_Customer}','*All Customers*','%','{?P_Customer}' )
) Total_Ordered,
(select decode(sum(quantity_shipped),null,0,sum(quantity_shipped)) from
stock_shipped_item e, stock_shipment f
where e.shipment_id=f.shipment_id and e.stock_number=a.stock_number
and f.shipment_date between {?P_Begin_Date} and {?P_End_Date}
and (e.dispcode is not null )
and f.ship_to_org like decode('{?P_Customer}','*All Customers*','%','{?P_Customer}' )
) Total_Shipped
,
(select name||', '||city||', '||state
from org where orgcd='{?P_Customer}'
) Customer
from stock_item a, org b
where a.vendor=b.orgcd(+)
and stock_type in ('P','C')
order by 1


Tom Kyte
February 18, 2011 - 11:54 am UTC

you already know views don't do parameters - so how could we rewrite it if your request is "use parameters". I'm confused.

And I know *nothing* about crystal reports at all...

sql

sam, February 18, 2011 - 6:02 pm UTC

It is not a Crystal Reports issue but more of a SQL issue.

The SQL has two subqueries that SUM up total quantity ordered and shipped by month for the range of dates given.

I want to create a VIEW for this SQL. Crystal can send the parameters to the VIEW, but the problem is that it appends them to the end of the SQL, like:

SELECT * FROM VIEW WHERE field1=p_begin_date AND field2=p_end_date.

It can't be done with subqueries. From what I see, the only alternative is table joins, if it can be done that way.
Tom Kyte
February 20, 2011 - 12:10 pm UTC

can crystal reports use a ref cursor returned from a function?

RE: SQL

Greg, March 01, 2011 - 1:21 pm UTC

Crystal does use a Ref Cursor very well. We are rehosting reports using Oracle stored procedures to do the heavy lifting. These are called from a Crystal Report passing parameters into the stored procedure and using the returned Ref Cursor to format and present the data.
Tom Kyte
March 01, 2011 - 1:54 pm UTC

thanks much - then that is the answer for Sam.

Sql query

sam, March 09, 2011 - 6:57 pm UTC

I want a report (an MV refreshed on the first day of each month) that shows each stock number, (month, year), total ordered, and total shipped for each stock item in the last 12 months.
If a stock item does not have activity I still need to show ZERO for each month.
If we are in March, then I should show something like:

stock_no Month, year total_ordered total_shipped
----------------------------------------------------
XYZ1 FEB, 2011 1200 1000
XYZ1 JAN, 2011 0 0
...
XYZ1 MAR, 2011 2000 990
ABC1 FEB, 2011 3000 1500
....


I usually try to join ORDERS-->ORDERED_ITEM and SHIPMENT--->SHIPPED_ITEM separately to get order and shipment stats for a given period.

How would you write the SQL to get that based on these tables?

The materialized view will have these columns
(STOCK_NUMBER, MONTH (includes year), TOTAL_ORDERED, TOTAL_SHIPPED)

STOCK_ITEM
------------
STOCK_NUMBER VARCHAR2(20) (PK)
......


ORDERS
------------
ORDER_ID NUMBER(10) (PK)
ORDER_DATE DATE
.....
CREATED_DATE DATE


ORDERED_ITEMS
-------------
ORDER_ID NUMBER(10) (PK)
STOCK_NUMBER VARCHAR2(20) (PK)
ITEM_NO NUMBER(3) (PK)
QTY_ORDERED NUMBER(5)
........



SHIPMENT
-------------
SHIPMENT_ID NUMBER(10) (PK)
ORDER_ID NUMBER(10) (FK)
SHIPMENT_DATE DATE
.......


SHIPPED_ITEMS
---------------
SHIPMENT_ID NUMBER(10) (PK)
STOCK_NUMBER VARCHAR2(20) (PK)
ITEM_NUMBER NUMBER(3) (PK)
QTY_SHIPPED NUMBER(5)
QTY_BACKORDER NUMBER(5)
SHIPCODE VARCHAR2(1)
........




Tom Kyte
March 10, 2011 - 10:23 am UTC

Sam

you've read this site far too long to ask a question in that format. Think about what I've said *a lot* in the past.

no creates
no inserts
no look

I usually try to join ORDERS-->ORDERED_ITEM and SHIPMENT--->SHIPPED_ITEM
separately to get order and shipmen stats for a given period.

How would you write the SQL to get that based on these tables?


join two inline views together after aggregating to the same level. Join orders to ordered_item call that O, join shipment to shipped_item call that S. Join O to S and report away.

query

sam, March 10, 2011 - 11:37 am UTC

Tom:

Here are the creates and inserts. I did the joins as you said for one stock number.
I am not sure how to get it for all 12 months. Right now, if there is no activity for a given month I won't get any results.

I want stats for every stock for the last 12 months regardless of whether there is any activity or not.

create table stock_item (
stock_number varchar2(20) primary key
)
/

insert into stock_item values ('XYZ1')
/
insert into stock_item values ('ABC1')
/
commit
/


create table orders (
order_id number(10) primary key,
order_Date date,
create_Date date default sysdate )
/


insert into orders values (1, sysdate-180, sysdate-180)
/
insert into orders values (2, sysdate-140, sysdate-140)
/
insert into orders values (3, sysdate-100, sysdate-100)
/
insert into orders values (4, sysdate-60, sysdate-60)
/
insert into orders values (5, sysdate-20, sysdate-20)
/
insert into orders values (6, sysdate-220, sysdate-220)
/
insert into orders values (7, sysdate-240, sysdate-240)
/
insert into orders values (8, sysdate-280, sysdate-280)
/
insert into orders values (9, sysdate-320, sysdate-320)
/
commit
/

create table ordered_items (
order_id number(10),
stock_number varchar2(20),
item_no number(3),
qty_ordered number(5),
constraint pk_ordered_items primary key (order_id,stock_number,item_no)
)
/

insert into ordered_items values (1, 'XYZ1',1,100)
/
insert into ordered_items values (1, 'ABC1',2,200)
/
insert into ordered_items values (2, 'XYZ1',1,50)
/
insert into ordered_items values (3, 'ABC1',1,800)
/
insert into ordered_items values (4, 'XYZ1',1,90)
/
insert into ordered_items values (5, 'ABC1',1,900)
/
commit
/

create table shipments (
shipment_id number(10) primary key,
order_id number(10),
shipment_date date )
/

insert into shipments values ( 1, 1, sysdate-160)
/
insert into shipments values ( 2, 2, sysdate-120)
/
insert into shipments values ( 3, 3, sysdate-100)
/
insert into shipments values ( 4, 4, sysdate-60)
/
insert into shipments values ( 5, 5, sysdate-10)
/
commit
/


create table shipped_items (
shipment_id number(10),
stock_number varchar2(20),
item_number varchar2(1),
qty_shipped number(5),
shipcode varchar2(1),
constraint pk_shipped_items primary key (shipment_id, stock_number, item_number)
)
/

insert into shipped_items values (1, 'ABC1', 1, 50, 'A')
/
insert into shipped_items values (1, 'XYZ1', 2, 200, 'A')
/
insert into shipped_items values (2, 'XYZ1', 2, 300, 'A')
/
insert into shipped_items values (3, 'ABC1', 2, 200, 'A')
/
insert into shipped_items values (4, 'XYZ1', 2, 200, 'A')
/
insert into shipped_items values (5, 'ABC1', 2, 200, 'A')
/
commit
/



I want a report (MV) that shows each stock number, (month, year), total ordered, and total shipped for each stock item in the last 12 months.
If a stock item does not have activity I still need to show ZERO for each month.
If we are in March, then I should show something like:

stock_no Month, year total_ordered total_shipped
----------------------------------------------------
XYZ1 FEB, 2011 1200 1000
XYZ1 JAN, 2011 0 0
...
XYZ1 MAR, 2011 2000 990
ABC1 FEB, 2011 3000 1500
....

Tom Kyte
March 10, 2011 - 1:07 pm UTC

outer join to:

ops$tkyte%ORA11GR2> with dates
  2  as
  3  (select add_months( trunc(sysdate,'y'), level-1 ) dt
  4     from dual
  5  connect by level <= 12)
  6  select * from dates;

DT
---------
01-JAN-11
01-FEB-11
01-MAR-11
01-APR-11
01-MAY-11
01-JUN-11
01-JUL-11
01-AUG-11
01-SEP-11
01-OCT-11
01-NOV-11
01-DEC-11

12 rows selected.


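The CONNECT BY date generator can be emulated in plain Python if you want to verify the spine outside the database. A minimal sketch (the function name is made up for illustration):

```python
from datetime import date

def year_month_starts(today):
    # Emulates: select add_months(trunc(sysdate,'y'), level-1)
    #             from dual connect by level <= 12
    # i.e. the first day of each month of the current year.
    return [date(today.year, m, 1) for m in range(1, 13)]

starts = year_month_starts(date(2011, 3, 10))
print(starts[0], starts[-1])  # 2011-01-01 2011-12-01
```

The point of the spine is the same as in the SQL: manufacture one row per month so months with no activity can still appear after an outer join.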

query

sam, March 10, 2011 - 1:53 pm UTC

Tom:

I always need to report on the last 12 months (not future).

This gives me the months, but I have never used the WITH syntax. How do you join this to the STOCK_ITEM table and the other tables to get totals?

I think I need to group by STOCK_NUMBER, MONTH, YEAR.


 1  with dates
  2        as
  3        (select add_months( trunc(sysdate,'mm'),-level ) dt
  4           from dual
  5         connect by level <= 12)
  6*       select * from dates order by 1 desc
SQL> /

DT
---------
01-FEB-11
01-JAN-11
01-DEC-10
01-NOV-10
01-OCT-10
01-SEP-10
01-AUG-10
01-JUL-10
01-JUN-10
01-MAY-10
01-APR-10

Tom Kyte
March 10, 2011 - 2:26 pm UTC

ops$tkyte%ORA11GR2> with data
  2  as
  3  (
  4  select coalesce( O.order_date, s.shipment_date ) dt,
  5         coalesce( O.stock_number, S.stock_number ) sn,
  6         o.qo, s.qs
  7    from (
  8  select trunc(o.order_date,'mm') order_date, oi.stock_number, sum(oi.qty_ordered) qo
  9    from orders o, ordered_items oi
 10   where o.order_id = oi.order_id
 11   group by trunc(o.order_date,'mm'), oi.stock_number
 12         ) O full outer join
 13         (
 14  select trunc(s.shipment_date,'mm') shipment_date, si.stock_number, sum(si.qty_shipped) qs
 15    from shipments s, shipped_items si
 16   where s.shipment_id = si.shipment_id
 17   group by trunc(s.shipment_date,'mm'), si.stock_number
 18         ) S
 19      on (s.shipment_date = o.order_date)
 20  ),
 21  dates
 22  as
 23  (select add_months( trunc(sysdate,'mm'),-level ) dt
 24     from dual
 25  connect by level <= 12)
 26  select dates.dt, data.sn, data.qo, data.qs
 27    from data PARTITION BY (SN)
 28         right outer join
 29         dates
 30      on (data.dt = dates.dt)
 31   order by 2, 1
 32  /

DT        SN                           QO         QS
--------- -------------------- ---------- ----------
01-MAR-10 ABC1
01-APR-10 ABC1
01-MAY-10 ABC1
01-JUN-10 ABC1
01-JUL-10 ABC1
01-AUG-10 ABC1
01-SEP-10 ABC1                        200
01-OCT-10 ABC1
01-NOV-10 ABC1                        800        300
01-NOV-10 ABC1                        800        200
01-DEC-10 ABC1
01-JAN-11 ABC1
01-FEB-11 ABC1                        900        200
01-MAR-10 XYZ1
01-APR-10 XYZ1
01-MAY-10 XYZ1
01-JUN-10 XYZ1
01-JUL-10 XYZ1
01-AUG-10 XYZ1
01-SEP-10 XYZ1                        100
01-OCT-10 XYZ1                         50        200
01-OCT-10 XYZ1                         50         50
01-NOV-10 XYZ1
01-DEC-10 XYZ1
01-JAN-11 XYZ1                         90        200
01-FEB-11 XYZ1

26 rows selected.




Using with is not magic, it is not special, it is just like using a view. You just use it in the select component at the bottom in the same exact manner you would any table or view.


All I did was create a set of stock items ordered by month and a set of stock items shipped by month.

We full outer join these two sets (so we get records for months we ordered but did not ship something, months we shipped but did not order something, and months we both ordered and shipped something).

We called that DATA

We generated the dates we were interested in.

Then, using a partitioned outer join, we joined each set of stock numbers by date to the DATES view we created - using a right outer join so as to keep every single month intact.

and there you go.


This uses SQL constructs that have existed since at least 2003.
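The full-outer-join-then-partitioned-outer-join pipeline described above can be sketched with plain Python dictionaries. The monthly totals below are a hypothetical subset, not the full result set from the query:

```python
from datetime import date

# Monthly aggregates keyed by (month_start, stock_number) --
# standing in for the O and S inline views.
ordered = {(date(2010, 9, 1), "XYZ1"): 100, (date(2010, 10, 1), "XYZ1"): 50}
shipped = {(date(2010, 10, 1), "XYZ1"): 200, (date(2011, 1, 1), "XYZ1"): 200}

# FULL OUTER JOIN on (month, stock): keep every key seen on either side.
data = {k: (ordered.get(k), shipped.get(k)) for k in set(ordered) | set(shipped)}

# Partitioned outer join: for EACH stock number, join against the
# complete month spine so months with no activity still appear.
spine = [date(2010, m, 1) for m in range(9, 13)] + [date(2011, 1, 1)]
stocks = {sn for _, sn in data}
report = {(dt, sn): data.get((dt, sn), (None, None))
          for sn in stocks for dt in spine}

print(report[(date(2010, 11, 1), "XYZ1")])  # (None, None) -- no activity
print(report[(date(2010, 10, 1), "XYZ1")])  # (50, 200)
```

November produces a row of NULLs rather than disappearing, which is exactly what the partitioned right outer join buys you in the SQL.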

query

sam, March 11, 2011 - 4:37 pm UTC

Tom:

I appreciate your help. But I noticed a few issues with your query.

1) Notice the duplicate rows reported. The data is also not accurate.

Also, ABC1 was not shipped in November, yet it shows 300 pieces; the 300 were for XYZ1.

01-NOV-10 ABC1                        800        300
01-NOV-10 ABC1                        800        200
01-OCT-10 XYZ1                         50        200
01-OCT-10 XYZ1                         50         50


You are joining the orders and shipments queries using month only. Should that not be on MONTH and STOCK_NUMBER?

2) I noticed you ignored the STOCK_ITEM table. That is the driving table because I want to list ZEROs even if a stock has no activity for those months.

3) I ran the same query you provided on 9iR2 but am getting an error. Is PARTITION BY supported? I thought you normally use OVER with analytics.



SQL> with data
  2    as
  3   (
  4   select coalesce( O.order_date, s.shipment_date ) dt,
  5          coalesce( O.stock_number, S.stock_number ) sn,
  6           o.qo, s.qs
  7      from (
  8    select trunc(o.order_date,'mm') order_date, oi.stock_number, sum(oi.qty_ordered) qo
  9      from orders o, ordered_items oi
10     where o.order_id = oi.order_id
11     group by trunc(o.order_date,'mm'), oi.stock_number
12           ) O full outer join
13          (
14    select trunc(s.shipment_date,'mm') shipment_date, si.stock_number, sum(si.qty_shipped) qs
15      from shipments s, shipped_items1 si
16     where s.shipment_id = si.shipment_id
17     group by trunc(s.shipment_date,'mm'), si.stock_number
18          ) S
19       on (s.shipment_date = o.order_date)
20   ),
21   dates
22   as
23    (select add_months( trunc(sysdate,'mm'),-level ) dt
24       from dual
25    connect by level <= 12)
26    select dates.dt, data.sn, data.qo, data.qs
27      from data PARTITION BY (SN)
28           right outer join
29           dates
30        on (data.dt = dates.dt)
31     order by 2, 1
32  /
    from data PARTITION BY (SN)
                        *
ERROR at line 27:
ORA-00933: SQL command not properly ended


Tom Kyte
March 12, 2011 - 8:57 am UTC

1) you are correct, need a join on stock number

2) then add it and figure out the right way to add it. You see the technique now, keep going with it. take my DATA view and PARTITION OUTER JOIN it to the stock item table to fill out all of the stock items like I filled out all of the dates.

3) partition by is new in 10g. I always - ALWAYS assume the latest releases unless told otherwise. I know you know that, I just told you that earlier this very same week.



Query

sam, March 12, 2011 - 9:12 am UTC

Tom:

Thanks for the great mentoring. You made me a tenacious SQL warrior. The WITH clause is great for breaking up complex SQL and makes things easy to visualize when you deal with aggregates and nested queries. Most of the books I have (including yours) did not mention it. I am getting a fresh Oracle SQL book.

If I break things into SETS it makes things easy.

I took the concept and changed it slightly. There is no need for PARTITION BY.

Do you see any issues.

SQL> edi
Wrote file afiedt.buf

  1  with order_sum
  2   as
  3   (select trunc(o.order_date,'mm') order_date, oi.stock_number, sum(oi.qty_ordered) qo
  4       from orders o, ordered_items oi
  5      where o.order_id = oi.order_id
  6      group by trunc(o.order_date,'mm'), oi.stock_number),
  7   shipment_sum
  8   as
  9   (select trunc(s.shipment_date,'mm') shipment_date, si.stock_number, sum(si.qty_shipped) qs
10       from shipments s, shipped_items1 si
11      where s.shipment_id = si.shipment_id
12     group by trunc(s.shipment_date,'mm'), si.stock_number),
13   dates
14    as
15   (select add_months( trunc(sysdate,'mm'),-level ) dt
16          from dual
17       connect by level <= 12),
18   items
19    as
20   (select b.stock_number,a.dt from dates a, stock_item1 b
22   order by 1,2 desc
23   )
24   select i.stock_number, i.dt, os.qo,  qs
26    from items i, order_sum os, shipment_sum ss
27     where i.stock_number = os.stock_number(+) and i.dt= os.order_date(+)
28*     and i.stock_number = ss.stock_number(+) and i.dt = ss.shipment_date(+)
SQL> /

STOCK_NUMBER         DT                QO         QS
-------------------- --------- ---------- ----------
ABC1                 01-MAR-10
ABC1                 01-APR-10
ABC1                 01-MAY-10
ABC1                 01-JUN-10
ABC1                 01-JUL-10
ABC1                 01-AUG-10
ABC1                 01-SEP-10        200
ABC1                 01-OCT-10                    50
ABC1                 01-NOV-10
ABC1                 01-DEC-10        800        200
ABC1                 01-JAN-11

STOCK_NUMBER         DT                QO         QS
-------------------- --------- ---------- ----------
ABC1                 01-FEB-11        900
XYZ1                 01-MAR-10
XYZ1                 01-APR-10
XYZ1                 01-MAY-10
XYZ1                 01-JUN-10
XYZ1                 01-JUL-10
XYZ1                 01-AUG-10
XYZ1                 01-SEP-10        100
XYZ1                 01-OCT-10         50        200
XYZ1                 01-NOV-10                   300
XYZ1                 01-DEC-10

STOCK_NUMBER         DT                QO         QS
-------------------- --------- ---------- ----------
XYZ1                 01-JAN-11         90        200
XYZ1                 01-FEB-11

24 rows selected.


Tom Kyte
June 09, 2011 - 12:00 pm UTC

Most of the books I have
(including yours) did not mention it.


Given that I've never written a book on writing SQL - that is not too surprising - is it?

But, you do read asktom from time to time, and there - if you pay attention - you would see it day after day after day after day...



You are correct that by doing the cartesian join you do not need the partition by - the partition by was added in 10g to avoid the issues associated with cartesian joins (huge explosions in intermediate result sets in many cases). The partition by was created to skip the cartesian join altogether. When you get on software that was written in this century, you should consider looking into it.
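Sam's cartesian-spine variant (build every stock x month pair up front, then outer join the aggregates onto it) can be sketched the same way; the one total below is hypothetical:

```python
from datetime import date

# All stock numbers and the 12-month spine (March 2010 .. February 2011).
stocks = ["ABC1", "XYZ1"]
months = ([date(2010, m, 1) for m in range(3, 13)]
          + [date(2011, 1, 1), date(2011, 2, 1)])

# Monthly order totals keyed by (stock, month) -- the ORDER_SUM view.
order_sum = {("XYZ1", date(2010, 10, 1)): 50}

# Cartesian join builds every (stock, month) pair; the outer join then
# fills in totals, defaulting to "no activity" (None).
rows = [(sn, dt, order_sum.get((sn, dt))) for sn in stocks for dt in months]

print(len(rows))  # 24 rows: 2 stocks x 12 months
```

This always yields stocks x months rows, which is why the intermediate set can explode on large inputs - the problem the 10g PARTITION BY join syntax was designed to avoid.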


Alter table on running query.

Jalpesh Shah, March 15, 2011 - 5:04 am UTC

Hi Tom, I have a doubt. Assume that having one table, from that table, inserting the records to another table, which took around 2 hours to complete the task. During execution time somebody alters that table and adds one column. What will be the impact on the currently running query?
And what happens in the case of a view?
Tom Kyte
March 15, 2011 - 8:43 am UTC

all of those "thats" there -

having one table (1st table)
from that table (refers to previous line, 1st table)
insert the records to another table (2nd table)
somebody alter THAT table (????? no idea which table you mean now)


But knowing the way Oracle works - I know that you can alter the 1st table without issue - but won't be able to alter the 2nd table while there is an outstanding transaction against it.


Altering the first table and adding that column after the insert as select has started will not impact the insert as select. The result is well known (it won't include that column)

Adding the column to the second table will not be permitted while the transaction is outstanding, you have to wait for the commit to happen to add that column.


Ditto with the view, if you alter the base table of the READER of the view, that is OK. You cannot alter the base table of a WRITER of the view.


Query...

Jalpesh Shah, March 15, 2011 - 11:41 pm UTC

Hi Tom,

Thanks for your reply. To clarify my question: I have Table A with a huge number of records. I am inserting Table A's data into Table B; during the insert, Table A is altered. It is clear that there is no issue with the running process, but I just want to know how Oracle manages it internally.
Tom Kyte
March 16, 2011 - 8:32 am UTC

It manages it well. Once you start the query - it is perfectly OK.

It is strange to do this for sure - but it works OK. We know what to read and what not to read. The newly added column would go at the end of the row and we'll never even see it as we read through the table - because we know we only need the first N columns (the N columns that were there when we started)

sql with as

sam, March 23, 2011 - 8:21 am UTC

Tom:

The "with as" is a great tool for simplifying complex SQL, storing different aggregates, and acting like a temporary table.

My question is how do you normally decide to use it. With a pl/sql program you can have

1) select from a cursor defined in the routine
2) select from a view in the database
3) select from a query built using "with as", which can be in a pl/sql program or a view in the db.


Do you also agree that moving all SQL to VIEWS instead of being defined in PL/SQL might become difficult to manage?
Tom Kyte
March 23, 2011 - 8:46 am UTC

"with as" is just part of a query, it should not be viewed as 'different' than being just part of a query, not any more so than 'WHERE' or 'UNION ALL' or 'INTERSECT'.

So, your option #3 is not part of a logical set of choices, we are left only with #1 and #2.


In 11g, there is a good reason to consider using a VIEW always - but a special kind of view, an editioning view.
http://www.oracle.com/pls/db112/search?remark=quick_search&word=editioning+view

otherwise, you use a view when it makes sense - when you want to implement a function on a column, for example, uniformly, for all people that access the table; to hide a join or some complexity from some set of people; etc.

Do you also agree that moving all SQL to VIEWS instead of being defined in
PL/SQL might become difficult to manage?


of course not. Why would I? If you have source code control and documentation - it would not be difficult at all. Even if you don't - it wouldn't be 'hard to manage', not any harder than having 1,000 stored procedures or anything would. Consider a view to be a SUBROUTINE, because that is really what they are - they hide some programming logic behind them, just like code.

Having a view V is not any more or less manageable than having a procedure P.

sql

sam, March 25, 2011 - 9:11 pm UTC

Tom:

If I understand you correctly, then I should use a VIEW when many users access and share the same SQL so everyone uses the view.

If the SQL is only used by one user or procedure, then keep it defined in a CURSOR in the procedure.

I found it much easier for debugging sometimes when the code is defined clearly in the cursor rather than in a view.

Views provide a level of abstraction, and you have to see what the view does to debug. It is the same reason you believe triggers are bad: they hide things in the background of a transaction.
Tom Kyte
March 29, 2011 - 2:38 am UTC

think of a view or a cursor as a subroutine (BUT ONLY BECAUSE IT IS).

You have some subroutines that are private, local - in a package body, but not exposed in a spec. (think cursor defined in a package)

You have some subroutines that you borrow from other packages (think view)

I found it much easier for debugging sometimes when the code is defined clearly
in the cursor rather than in a view.


It shouldn't be - if you ever call a routine in another package, you have done the logical equivalent of using a view.


Views provide a level of abstraction, and you have to see what the view does to
debug. It is the same reason you believe triggers are bad: they hide things in
the background of a transaction.


DO NOT ever put words in my mouth sam, because you leap to conclusions and outcomes that do not make sense and are not accurate.

A view is no more like a trigger than a toaster oven is like an orange.

If you can query a view, you can see the view. Views provide a beneficial layer of abstraction AND security. We need views, we don't need triggers. A view hides things - in a safe manner, there are no "side effects" from views.

Similar Query

A reader, March 26, 2011 - 10:53 am UTC

Tom,
I got different scenario with similar SQL query requirement:

drop table order_header cascade constraints
/

create table order_header
(order_id varchar2(32),
status varchar2(8),
primary key (order_id)
)
/

drop table order_line cascade constraints
/

create table order_line
(order_line_id varchar2(32),
order_id varchar2(32),
line_no number,
line_status varchar2(8)
)
/

insert into order_header(order_id, status)
values('1001', 'DR')
/

insert into order_header(order_id, status)
values('1002', 'IP')
/

insert into order_header(order_id, status)
values('1003', 'CO')
/

insert into order_header(order_id, status)
values('1004', 'RS')
/

insert into order_header(order_id, status)
values('1005', 'OR')
/

insert into order_header(order_id, status)
values('1006', 'XX')
/


insert into order_line(order_line_id, order_id, line_no, line_status)
values('2001', '1001', 1, 'DR')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2002', '1001', 2, 'AB')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2003', '1002', 1, 'IP')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2004', '1003', 1, 'CO')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2005', '1003', 2, 'AB')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2006', '1003', 3, 'RS')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2007', '1003', 4, 'GH')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2008', '1004', 1, 'PC')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2009', '1004', 2, 'PB')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2010', '1004', 3, 'PD')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2011', '1005', 1, 'OR')
/

insert into order_line(order_line_id, order_id, line_no, line_status)
values('2012', '1006', 1, 'XX')
/

commit
/

Requirement:

If status in order HEADER are in ("DR", "IP", "OR", "XX")
then records both in order HEADER and line are good (#1).
else
LOOP THROUGH Order Line
{
if order LINE status is "PC" for AT LEAST ONE order line record
then the status for both order header and lines for this order_id is good (#2)
else if the statuses of ALL of the line records are in ("PC", "PB", "PD")
then the status for both order header and lines for this order_id is good (#3)
else
the order status for both header and lines is NOT good (#4).
}


Write a SQL statement to pick out the bad orders (#4 above), displaying both order and line info,
breaking by order header_id. In this case it should pick out the record with order_id '1003' and its
related lines (order_line_id '2004', '2005', '2006', '2007').

I tried:

select oh.order_id, ol.line_no
from order_header oh, order_line ol,
(select count(*) total_count, order_id from order_line group by order_id) a,
(select count(line_status) in_count, order_id from order_line where line_status in ('PC', 'PB', 'PD') group by order_id) b,
(select count(line_status) pc_count, order_id from order_line where line_status = 'PC' group by order_id) c
where oh.order_id = ol.order_id
and oh.status not in ('DR', 'IP', 'OR', 'XX')
and a.total_count <> b.in_count
and c.pc_count = 0
and a.order_id = ol.order_id
and b.order_id = ol.order_id
and c.order_id = ol.order_id
and a.order_id = b.order_id(+)
and a.order_id = c.order_id(+)
/

However, it does NOT work, please help.

Thanks



Tom Kyte
March 29, 2011 - 2:56 am UTC

ops$tkyte%ORA11GR2> select t.*,
  2         case when oh_status in ('DR', 'IP', 'OR', 'XX') then 'good #1'
  3                  when count_pc > 0 then 'good #2'
  4                          when count_bad = 0 then 'good #3'
  5                          else 'bad'
  6          end flag
  7    from (
  8  select oh.order_id, oh.status oh_status, ol.order_line_id, ol.line_no, ol.line_status,
  9         count( case when ol.line_status = 'PC' then 1 end ) over (partition by oh.order_id) count_pc,
 10             count( case when ol.line_status NOT IN ('PC','PB','PD') or ol.line_status is null then 1 end)
 11                over (partition by oh.order_id) count_bad
 12    from order_header oh, order_line ol
 13   where oh.order_id = ol.order_id
 14         ) t
 15  /

ORDER OH_STATU ORDER_LINE    LINE_NO LINE_STA   COUNT_PC  COUNT_BAD FLAG
----- -------- ---------- ---------- -------- ---------- ---------- -------
1001  DR       2002                2 AB                0          2 good #1
1001  DR       2001                1 DR                0          2 good #1
1002  IP       2003                1 IP                0          1 good #1
1003  CO       2005                2 AB                0          4 bad
1003  CO       2004                1 CO                0          4 bad
1003  CO       2006                3 RS                0          4 bad
1003  CO       2007                4 GH                0          4 bad
1004  RS       2008                1 PC                1          0 good #2
1004  RS       2009                2 PB                1          0 good #2
1004  RS       2010                3 PD                1          0 good #2
1005  OR       2011                1 OR                0          1 good #1
1006  XX       2012                1 XX                0          1 good #1

12 rows selected.



You didn't test for case #3 :(

but basically case #1 is trivial

for case #2 we just count the number of PC line items, that was easy

for case #3 we invert (NOT) the condition and count how many match the NOT - if more than zero of them - then it would be bad

Just add another inline view and keep "where flag = 'bad'"
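
Putting that together - a sketch only, wrapping the query above in one more inline view:

```sql
select *
  from (
        select t.*,
               case when oh_status in ('DR', 'IP', 'OR', 'XX') then 'good #1'
                    when count_pc > 0 then 'good #2'
                    when count_bad = 0 then 'good #3'
                    else 'bad'
               end flag
          from (
                select oh.order_id, oh.status oh_status, ol.order_line_id, ol.line_no, ol.line_status,
                       count( case when ol.line_status = 'PC' then 1 end )
                          over (partition by oh.order_id) count_pc,
                       count( case when ol.line_status NOT IN ('PC','PB','PD') or ol.line_status is null then 1 end )
                          over (partition by oh.order_id) count_bad
                  from order_header oh, order_line ol
                 where oh.order_id = ol.order_id
               ) t
       )
 where flag = 'bad';
```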

Teeerific

A reader, March 29, 2011 - 9:30 am UTC

As usual, you nailed it. Thanks.

transactions and exchange rates

Slavko Brkic, April 11, 2011 - 11:13 am UTC

Hi Tom,

We have a query that looks something like this:

 select trunc(c.opdate) opdate,
        count(*) cnt,
        sum(c.amnt)              amnt,
        sum(ehc.rate*c.amnt)     amnt_cst,
        sum(c.wamnt)             wamnt,
        sum(ehc.rate * c.wamnt)  wamnt_cst
   from sales s,
        coupons c,
        exchangecodes_history ehc
  where s.saleid        = c.saleid
    and s.exchcode      = ehc.exchcode
    and ehc.fromdate    < c.opdate
    and (ehc.todate    >= c.opdate or ehc.todate     is null)
    and s.userid = 1607392
 group by trunc(c.opdate);


As the user has quite a lot of transactions, we need to look up the exchange rate at that point in time for each row (it can change several times per day). This obviously makes this query a tad slow. Is there a way to make this query more clever (lol, fast)?
My suggestion, since we always want this conversion to be done to euros, is to actually store the euro value at the point in time we do the transaction. However, if you have another neat solution that would be great.

Thanks in advance.
Tom Kyte
April 13, 2011 - 9:01 am UTC

That non-equi join is going to be problematic. If you want this to be 'fast', you would do what you suggested. It is a totally legitimate method - and won't affect your operations, since you don't come back and update exchange rates after the fact (at least I don't think you would, since the money has already changed hands..)

An alternative approach could be to store the exchangecodes_history row at least ONCE per day (even if it did not change) so you'd be able to add "and ehc.this_new_column = trunc(c.opdate)" to the where clause as well - limiting greatly the number of rows that need be considered - giving you an equi-join opportunity.
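
A sketch of that alternative, assuming a hypothetical RATE_DATE column in exchangecodes_history populated at least once per day:

```sql
 select trunc(c.opdate) opdate,
        count(*) cnt,
        sum(c.amnt)              amnt,
        sum(ehc.rate*c.amnt)     amnt_cst,
        sum(c.wamnt)             wamnt,
        sum(ehc.rate * c.wamnt)  wamnt_cst
   from sales s,
        coupons c,
        exchangecodes_history ehc
  where s.saleid        = c.saleid
    and s.exchcode      = ehc.exchcode
    and ehc.rate_date   = trunc(c.opdate)  -- hypothetical new equi-join column
    and ehc.fromdate    < c.opdate
    and (ehc.todate    >= c.opdate or ehc.todate is null)
    and s.userid = 1607392
  group by trunc(c.opdate);
```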

usage of sysdate in check constraint

kiran, April 12, 2011 - 12:33 am UTC

Why can't we use sysdate in a check constraint condition?
Tom Kyte
April 13, 2011 - 9:08 am UTC

close your eyes and "think" about it.

how could it work, what would it mean?

say you had a constraint such as:

...
to_be_done_by_date date check( to_be_done_by_date > sysdate ),
....


Ok, now you put some data in the table and it all works OK. Then, someone disables the constraint and re-enables it. What would likely happen over time?

Or you export/import the data - what happens over time?

Or you update the row and set some other column to some value one year from now, what would happen?

and so on. Since sysdate is always moving forward, your constraint definition would not be deterministic over time - you would have data that VIOLATES your constraint shortly after inserting it (a row that satisfies the constraint today probably will not satisfy it at some point in the future)
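
For the record, Oracle rejects the attempt at create time; a sketch (the exact message text may vary by version):

```sql
create table t ( to_be_done_by_date date check( to_be_done_by_date > sysdate ) );
-- fails with ORA-02436: date or system variable wrongly specified in CHECK constraint
```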

transactions and exchange rates

A reader, April 14, 2011 - 1:59 am UTC

Hi Tom,

I think I understood your second solution, which I actually implemented at first. However, as the exchange code can be set several times a day, the trunc(c.opdate) would not find just one row. I think I will suggest storing the value in euros as well.

Thanks a lot for your help.
Tom Kyte
April 14, 2011 - 9:48 am UTC

... trunc(c.opdate) would not just find one row ...

I know that - what I said is that it would LIMIT the rows that need to be considered; it would permit an equi-join (that would be further filtered by your existing date-range predicates)

storing the value in euros would be the most expedient

transactions and exchange rates

A reader, April 14, 2011 - 2:02 am UTC

Hi Tom,

I actually reread your solution, and got it now. I will try that out and see how it does.

Cheers,

Assignment on Operators

Sadashiv, May 27, 2011 - 2:16 am UTC

Display all the employees in department 20 who are earning a salary of 2500 or more.
Tom Kyte
May 27, 2011 - 10:36 am UTC

go for it, pretty basic homework. Hopefully they progress along to more significantly challenging things very soon!


Get unwanted tables

A reader, June 09, 2011 - 10:31 am UTC

Hi Tom,

I have a question regarding the tables. Lets say I have few tables in the schema:

CREATE TABLE EMP(ENO NUMBER);
CREATE TABLE EMP_IMP(ENO NUMBER);
CREATE TABLE EMP_INA(ENO NUMBER);
CREATE TABLE EMP_12311(ENO NUMBER);
CREATE TABLE EMP_IMP_BKP(ENO NUMBER);
CREATE TABLE EMP_BKP(ENO NUMBER);
CREATE TABLE DEPT(DNO NUMBER);
CREATE TABLE DEPT_OLD(DNO NUMBER);
CREATE TABLE DEPT_OLD_1233(DNO NUMBER);

Now I want to keep just EMP, EMP_INA, EMP_IMP, DEPT and DEPT_OLD; the rest I want to drop. Can this be done with a script? I understand it may seem a strange request, but if it is not possible with a script I will do it manually.
Tom Kyte
June 09, 2011 - 11:58 am UTC

begin
    for x in ( select table_name
                 from user_tables
                where table_name not in ( 'EMP', .... 'DEPT_OLD' ) )
    loop
        dbms_output.put_line( 'drop table ' || x.table_name || ';' );
        -- execute immediate 'drop table ' || x.table_name;
    end loop;
end;



that'll print out the tables it'll drop, un-comment the drop to drop them... be careful :)

A reader, June 09, 2011 - 1:10 pm UTC

Hi Tom,

Thanks for the suggestion. I could try that, but I have around 1,500 tables of which 500 are to be discarded, and it would be very hard for me to put that list in a query.

Anyway, I will go ahead with the manual work. Thanks for your suggestion.
Tom Kyte
June 09, 2011 - 1:19 pm UTC

well, if you have a list of tables to keep - load them into a table.

if you have a list of tables to drop - load them into a table.

and use a subquery
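
A hedged sketch of that approach, assuming a hypothetical KEEP_LIST table holding the names of the tables to keep:

```sql
begin
    for x in ( select table_name
                 from user_tables
                where table_name not in ( select table_name from keep_list ) )
    loop
        dbms_output.put_line( 'drop table ' || x.table_name || ';' );
        -- execute immediate 'drop table ' || x.table_name;
    end loop;
end;
/
```

As before, it only prints the drop statements until the execute immediate line is un-commented.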

Thanks

A reader, June 09, 2011 - 2:29 pm UTC

hi Tom,

I have already started dumping the unwanted names into a file that I can load into a database table; with a join to user_tables I can then generate the drop statements using your previous snippet. This will help me, as I have to carry out this operation in two different environments.

Thanks again for your time.

Full Outer Joining of more than three tables

Abhisek, July 05, 2011 - 12:46 pm UTC

Hi Tom,

I need to achieve a full outer join so that, let's say, if I have 5 tables, a row that exists in any of the five tables is included in the result with no duplicates.

create table table1(key1 varchar2(10), col1 varchar2(10));
create table table2(key2 varchar2(10), col2 varchar2(10));
create table table3(key3 varchar2(10), col3 varchar2(10));
create table table4(key4 varchar2(10), col4 varchar2(10));
create table table5(key5 varchar2(10), col5 varchar2(10));

insert into table1 values('1','T1:1');
insert into table2 values('1','T2:1');
insert into table2 values('2','T2:2');
insert into table3 values('3','T3:1');
insert into table3 values('6','T3:2');
insert into table3 values('5','T3:3');
insert into table4 values('4','T4:1');
insert into table5 values('5','T5:1');
insert into table5 values('5','T5:2');


select * from table1
full join table2 on key1=key2
full join table3 on nvl(key1,key2)=key3
full join table4 on nvl(nvl(key1,key2),key3)=key4
full join table5 on nvl(nvl(nvl(key1,key2),key3),key4)=key5;



It may be a badly designed solution. Do you have any better ideas?


Tom Kyte
July 05, 2011 - 4:13 pm UTC

not knowing the requirement nor any of the constraints no, not really.

if this is a case where keyN is the primary key of each, then I do have a suggestion for the future:

one table.


I rarely, if ever, physically model 1:1 optional or mandatory relations as a pair of tables - it screams out for "one table"

coalesce would be better than nvl nvl nvl nvl....
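
A sketch of that COALESCE rewrite of the query above:

```sql
select * from table1
full join table2 on key1 = key2
full join table3 on coalesce(key1, key2) = key3
full join table4 on coalesce(key1, key2, key3) = key4
full join table5 on coalesce(key1, key2, key3, key4) = key5;
```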


SQL...

A Reader, September 15, 2011 - 8:24 am UTC

Hi Tom,
thanks for your time.
I need to create a table t with data as detailed below (to simulate the real-world distribution):


ops$man@ora11gR2:Test> create table t ( id number, x number);

Table created.

ops$man@ora11gR2:Test> alter table t add primary key ( id);

Table altered.


Now I need a query to insert data such that:

column id -- should be unique (starting from 1)
column x -- should have values distributed as shown by...

( select x , count(*) from t group by x order by count(*) desc )

x COUNT(*)
----- --------
NULL 691056
11 26224
1046 1442
1036 438
1078 340
1066 287
1073 173
18 141
8 113
1070 38

The total number of rows inserted should be the sum of the count(*) values above,
i.e. 691056+26224+1442+438+340+287+173+141+113+38 = 720252.

These inserts should be in random order (to simulate the real data).
Kindly suggest how to do this.

Tom Kyte
September 15, 2011 - 12:01 pm UTC

ops$tkyte%ORA11GR2> create table t
  2  as
  3  with data as
  4  (
  5  select NULL    x from dual connect by level <= 691056 union all
  6  select 11      x from dual connect by level <= 26224 union all
  7  select 1046    x from dual connect by level <= 1442 union all
  8  select 1036    x from dual connect by level <= 438 union all
  9  select 1078    x from dual connect by level <= 340 union all
 10  select 1066    x from dual connect by level <= 287 union all
 11  select 1073    x from dual connect by level <= 173 union all
 12  select 18      x from dual connect by level <= 141 union all
 13  select 8       x from dual connect by level <= 113 union all
 14  select 1070    x from dual connect by level <= 38
 15  )
 16  select x from data order by dbms_random.random;

Table created.

ops$tkyte%ORA11GR2> select x, count(*) from t group by x order by 2 desc;

         X   COUNT(*)
---------- ----------
               691056
        11      26224
      1046       1442
      1036        438
      1078        340
      1066        287
      1073        173
        18        141
         8        113
      1070         38

10 rows selected.



and then query:


select * from (
select rownum r, x from t
)
where x is not null;


and you'll see how 'varied' the data is..

SQL Query Concept

A reader, September 15, 2011 - 2:38 pm UTC

I have a question regarding a method to select the rows in a single set-based query, if possible. I have achieved the same in PL/SQL.
Let's say we have four tables: TABLE_A, TABLE_B, TABLE_C and TABLE_D.

CREATE TABLE TABLE_A(COL_1 VARCHAR2(100), COL_2 VARCHAR2(100));
CREATE TABLE TABLE_B(COL_1 VARCHAR2(100), COL_2 VARCHAR2(100), COL_3 VARCHAR2(100));
CREATE TABLE TABLE_C(ID NUMBER, COL_1 VARCHAR2(100), COL_2 VARCHAR2(100), COL_3 VARCHAR2(100), COL_4 DATE);
CREATE TABLE TABLE_D(COL_1 VARCHAR2(100), COL_2 VARCHAR2(100), COL_3 VARCHAR2(100), COL_4 DATE);

INSERT INTO TABLE_A VALUES('A1','VAL_1');
INSERT INTO TABLE_A VALUES('A2','VAL_2');
INSERT INTO TABLE_A VALUES('A3','VAL_3');

Note: COL_1 is primary key in TABLE_A

INSERT INTO TABLE_B VALUES('A1','B_1','C_1');
INSERT INTO TABLE_B VALUES('A2','B_2','C_2');
INSERT INTO TABLE_B VALUES('A3','B_3','C_3');

Note: COL_1 is primary key in TABLE_B. Normally all records are unique

INSERT INTO TABLE_C VALUES(1, 'A1','141','19', '04-MAY-2000');
INSERT INTO TABLE_C VALUES(2, 'A1','141','29', '04-MAY-2011');
INSERT INTO TABLE_C VALUES(3, 'A2','141','19', '11-JUL-2011');
INSERT INTO TABLE_C VALUES(4, 'A2','300','4', '11-JUL-2010');

Note: COL_1 is primary key in TABLE_C. COL_2 is non-unique indexed in TABLE_C.

INSERT INTO TABLE_D VALUES('B_1','C_1','1', '04-MAY-2000');
INSERT INTO TABLE_D VALUES('B_1','C_1','2', '12-JAN-2009');
INSERT INTO TABLE_D VALUES('B_2','C_2','1', '21-AUG-2011');
Note: COL_1, COL_2 is primary key in TABLE_D.

Relation about the tables:
TABLE_A.COL_1 = TABLE_B.COL_1 AND
TABLE_B.COL_1 = TABLE_C.COL_2 AND
TABLE_B.COL_2 = TABLE_D.COL_1 AND
TABLE_B.COL_3 = TABLE_D.COL_2

Problem Statement:
-------------------
STEP 1:
SELECT the rows from TABLE_A WHERE EXISTS a row in TABLE_B. Check if the selected row is present in TABLE_C with value of col_2 = 141 and col_3 in 19,29. If EXISTS, then goto STEP 2 else goto STEP 3.


STEP 2:
IF col_2 = 141 and col_3 in 19,29 EXISTS in TABLE_C for the given ID, then check if the TABLE_C.COL_4 is less than SYSDATE - 6 months.
if smaller then goto LAST_STEP else REJECT the record.

STEP 3:
Check if record exists in TABLE_D for the given ID. if a record is present then check if there is col_3 = 1 present in the rows. If only col_3 = 1 is present then goto the LAST_STEP. If col_3 contains other than value 1(e.g. B_1 and C_1 has col_3 as 2 as well) then check if the TABLE_C.COL_4 is less than SYSDATE - 6 months.
if smaller then goto LAST_STEP else REJECT the record.

STEP LAST_STEP:
Select and return the set of rows as result

I understand it is a very big problem statement, but I am curious whether these types of situations can be handled in a single query.

I have already implemented it in PL/SQL with the help of a CURSOR FOR LOOP, but it is taking quite some time, as these are fairly big tables with around 1 million rows.

Any concept would be appreciated.

Thanks for your time

Tom Kyte
September 16, 2011 - 1:45 pm UTC

INSERT INTO TABLE_D VALUES('B_1','C_1','1', '04-MAY-2000');
INSERT INTO TABLE_D VALUES('B_1','C_1','2', '12-JAN-2009');
INSERT INTO TABLE_D VALUES('B_2','C_2','1', '21-AUG-2011');
Note: COL_1, COL_2 is primary key in TABLE_D.


bzzt - no it isn't. try again. B_1, C_1 appear twice.

To A reader

Michel Cadot, September 16, 2011 - 1:42 am UTC


Note: COL_1, COL_2 is primary key in TABLE_D.
INSERT INTO TABLE_D VALUES('B_1','C_1','1', '04-MAY-2000');
INSERT INTO TABLE_D VALUES('B_1','C_1','2', '12-JAN-2009');


There is a mismatch between your description and your data.

Regards
Michel

To A reader

Michel Cadot, September 16, 2011 - 1:45 am UTC


Forgot to mention: same thing for table_b.
Please repost a correct test case.

Regards
Michel

Tim, September 16, 2011 - 12:28 pm UTC

First, it probably would have been best just to put the constraints in your CREATE TABLE statements. I think most people able to give you a hand writing the SQL would be able to discern what the keys were from that, and it would have shown the problems in the test data that you have.

That said, I don't think correct sample data is needed to get a query together for this. (Also, I feel bad for you having to work in a system with composite keys on VARCHAR2 columns.)

There are a couple of ways to do this query. It looks like there are 3 sets of conditions you want to check:

TABLE_C.COL_2 = 141 AND TABLE_C.COL_3 IN (19,29) AND TABLE_C.COL_4 < ADD_MONTHS(SYSDATE,-6)
MIN(TABLE_D.COL_3) = 1 AND MAX(TABLE_D.COL_3) = 1
(MIN(TABLE_D.COL_3) != 1 OR MAX(TABLE_D.COL_3) != 1) AND TABLE_C.COL_4 < ADD_MONTHS(SYSDATE,-6)

If this is correct, then you could UNION 3 selects together to get your result set, or do something like:

SELECT a.*
FROM table_a a, table_b b, table_c c,
(SELECT col_1, col_2, MIN(col_3) AS min_col_3, MAX(col_3) AS max_col_3
FROM table_d
GROUP BY col_1, col_2)d
WHERE a.col_1 = b.col_1
AND b.col_1 = c.col_1(+)
AND b.col_2 = d.col_1(+)
AND b.col_3 = d.col_2(+)
AND ( ( ( ( c.col_2 = 141 AND c.col_3 IN (19,29) )
          OR ( min_col_3 != 1 OR max_col_3 != 1 ) )
        AND c.col_4 < ADD_MONTHS(SYSDATE,-6) )
      OR ( min_col_3 = 1 AND max_col_3 = 1 ) )


Now, I cannot guarantee performance over millions of rows, but I believe this will get you what you want. It should be faster than your FOR LOOP in any case, especially if you are separately querying TABLE_C and TABLE_D for every row in TABLE_A.
Tom Kyte
September 16, 2011 - 2:19 pm UTC

I don’t think correct sample data is needed to get a query together
for this.


sure it is - it probably means the data hasn't been thought out at all. Mistakes in the test case could either be

a) test data was wrong
b) description was entirely wrong

I've seen it go both ways equally and it is a waste of time trying to deal with it. ;)

Apologies for bad test data and case

A reader, September 17, 2011 - 10:39 am UTC

Hi Tom,

Apologies for the bad data and a bit incorrect logic. Below find the create and insert statements:

CREATE
  TABLE "HR"."TABLE_A"
  (
    "COL_1" VARCHAR2(100 BYTE) NOT NULL ENABLE,
    "COL_2" VARCHAR2(100 BYTE),
    CONSTRAINT "TABLE_A_PK" PRIMARY KEY ("COL_1")
  );
Insert into HR.TABLE_A (COL_1,COL_2) values ('id_001','ID 1');
Insert into HR.TABLE_A (COL_1,COL_2) values ('id_002','ID 2');
Insert into HR.TABLE_A (COL_1,COL_2) values ('id_003','ID 3');
Insert into HR.TABLE_A (COL_1,COL_2) values ('id_004','ID 4');


CREATE
  TABLE "HR"."TABLE_B"
  (
    "COL_1" VARCHAR2(100 BYTE) NOT NULL ENABLE,
    "COL_2" VARCHAR2(100 BYTE),
    "COL_3" VARCHAR2(100 BYTE),
    CONSTRAINT "TABLE_B_PK" PRIMARY KEY ("COL_1")
   );

Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_001','b1','s1');
Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_002','b2','s2');
Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_003','b3','s3');
Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_004','b4','s4');

CREATE
  TABLE "HR"."TABLE_C"
  (
    "ID"    NUMBER NOT NULL ENABLE,
    "COL_1" VARCHAR2(100 BYTE),
    "COL_2" VARCHAR2(100 BYTE),
    "COL_3" VARCHAR2(100 BYTE),
    "COL_4" DATE,
    CONSTRAINT "TABLE_C_PK" PRIMARY KEY ("ID")
  );

CREATE
    INDEX "HR"."TABLE_C_INDEX1" ON "HR"."TABLE_C"
    (
      "COL_1",
      "COL_2",
      "COL_3"
    );
Insert into HR.TABLE_C (ID,COL_1,COL_2,COL_3,COL_4) values (1,'id_001','04','19',to_date('04-MAY-00','DD-MON-RR'));
Insert into HR.TABLE_C (ID,COL_1,COL_2,COL_3,COL_4) values (2,'id_001','041','131',to_date('04-MAR-11','DD-MON-RR'));
Insert into HR.TABLE_C (ID,COL_1,COL_2,COL_3,COL_4) values (3,'id_002','041','24',to_date('11-JUL-11','DD-MON-RR'));
Insert into HR.TABLE_C (ID,COL_1,COL_2,COL_3,COL_4) values (4,'id_002','300','99',to_date('11-JUL-10','DD-MON-RR'));
Insert into HR.TABLE_C (ID,COL_1,COL_2,COL_3,COL_4) values (5,'id_003','041','29',to_date('11-SEP-10','DD-MON-RR'));
Insert into HR.TABLE_C (ID,COL_1,COL_2,COL_3,COL_4) values (6,'id_004','890','19',to_date('09-JUL-11','DD-MON-RR'));


CREATE
  TABLE "HR"."TABLE_D"
  (
    "COL_1" VARCHAR2(100 BYTE) NOT NULL ENABLE,
    "COL_2" VARCHAR2(100 BYTE) NOT NULL ENABLE,
    "COL_3" VARCHAR2(100 BYTE) NOT NULL ENABLE,
    "COL_4" VARCHAR2(100 BYTE),
    "COL_5" DATE,
    CONSTRAINT "TABLE_D_PK" PRIMARY KEY ("COL_1", "COL_2", "COL_3") 
  );

Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5) values ('b1','s1','sfx_1_1','07',to_date('04-MAY-00','DD-MON-RR'));
Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5) values ('b1','s1','sfx_1_2','09',to_date('17-SEP-11','DD-MON-RR'));
Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5) values ('b2','s2','sfx_2_1','07',to_date('21-AUG-11','DD-MON-RR'));
Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5) values ('b2','s2','sfx_2_2','19',to_date('17-SEP-10','DD-MON-RR'));
Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5) values ('b4','s4','sfx_4_1','09',to_date('17-OCT-11','DD-MON-RR'));





Now for my logic, the sample PL/SQL part. I was curious whether it could be handled in one single statement:

declare
l_exists_1 number := 0;
l_exists_2 number := 0;
l_exists_3 number := 0;
l_exists_4 number := 0;
l_exists_5 number := 0;
l_cond_1 number := 0;
l_cond_2 number := 0;
begin
  for cur_var in (select t1.col_1, t2.col_2, t2.col_3 from table_a t1, table_b t2
              where t1.col_1 = t2.col_1
            )
  loop
    l_exists_1 := 0;
    l_exists_2 := 0;
    l_exists_3 := 0;
    l_exists_4 := 0;
    l_exists_5 := 0;
    l_cond_1 := 0;
    l_cond_2 := 0;
    BEGIN
        SELECT 1 INTO l_exists_1
        FROM table_c t3
          WHERE t3.col_1 = cur_var.col_1
          AND ((t3.col_2 = '041' and t3.col_3 in ('131','152'))
                or
                (t3.col_2 = '04' and t3.col_3 in ('19','29'))
              )
          AND ROWNUM = 1;
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
      l_exists_1 := 0;
    END ;
    IF l_exists_1 = 1 THEN
      BEGIN
        --Check if the row in TABLE_C is expired
        SELECT 1 INTO l_cond_1
          FROM table_c t3
            WHERE t3.col_1 = cur_var.col_1
            AND   t3.col_4 < ADD_MONTHS(SYSDATE, -6)
            AND   t3.col_4 = (SELECT MIN(col_4) 
                                FROM table_c inner_c3
                              WHERE inner_c3.col_1 = t3.col_1);
         dbms_output.put_line('C1: ' || cur_var.col_1);                     
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          l_cond_1 := 0;
      END;
    ELSE
       l_cond_1 := 0;
    END IF;
    --In case no row exists in TABLE_C or there was a row in TABLE_C but was not expired then check the next table TABLE_D
    IF l_cond_1 = 0 THEN
      BEGIN
          SELECT 1 INTO l_exists_2
            FROM Table_D t4
          WHERE t4.COL_1 = cur_var.col_2
          AND   t4.COL_2 = cur_var.col_3
          AND   ROWNUM = 1;
        EXCEPTION
      WHEN NO_DATA_FOUND THEN
        l_exists_2 := 0;
      END;
      IF l_exists_2 = 1 THEN
        BEGIN
          SELECT 1 INTO l_exists_3
            FROM Table_D t4
          WHERE t4.COL_1 = cur_var.col_2
          AND   t4.COL_2 = cur_var.col_3
          AND   t4.COL_4 = '07'
          AND   ROWNUM = 1;
        EXCEPTION
        WHEN NO_DATA_FOUND THEN
          l_exists_3 := 0;
        END;
        IF l_exists_3 = 1 THEN
          BEGIN
            --Check if another row exists where COL_4 != '07'
            SELECT 1 INTO l_exists_4
              FROM Table_D t4
            WHERE t4.COL_1 = cur_var.col_2
            AND   t4.COL_2 = cur_var.col_3
            AND   t4.COL_4 != '07';
          EXCEPTION
          WHEN NO_DATA_FOUND THEN
            l_exists_4 := 0;  
          END ;
          IF l_exists_4 = 1 THEN
            BEGIN
              --There exists a second row in TABLE_D where col_4 != 07 along with col_4 = 07 and the row is expired
              SELECT 1 INTO l_exists_5
                FROM Table_D t4
              WHERE t4.COL_1 = cur_var.col_2
              AND   t4.COL_2 = cur_var.col_3
              AND   t4.COL_4 != '07'
              AND   t4.col_5 < ADD_MONTHS(SYSDATE, -6)
              AND   t4.col_5 = (SELECT MIN(inner_t4.col_5)
                                  FROM table_d inner_t4
                                WHERE inner_t4.col_1 = t4.col_1
                                AND   inner_t4.col_2 = t4.col_2
                                );
              dbms_output.put_line('C2: ' || cur_var.col_1);
            EXCEPTION
            WHEN NO_DATA_FOUND THEN
              l_exists_5 := 0;  
            END ;
          ELSE
            --No other than COL_4 = 07 available in TABLE_D THEN print
            dbms_output.put_line('C3: ' || cur_var.col_1);
          END IF ;
        END IF;
      END IF;
      
    END IF;
  END LOOP;
END;


Now to put that in words the problem statement:

Problem Statement:
-------------------
STEP 1:
SELECT the rows from TABLE_A WHERE EXISTS a row in TABLE_B. Check if the selected row is present in
TABLE_C with value of (t3.col_2 = '041' and t3.col_3 in ('131','152')) or (t3.col_2 = '04' and t3.col_3 in ('19','29')). If EXISTS, then goto STEP 2 else goto STEP 3.


STEP 2:
IF (t3.col_2 = '041' and t3.col_3 in ('131','152')) or (t3.col_2 = '04' and t3.col_3 in ('19','29')) EXISTS in TABLE_C for the given ID, then check if the minimum of TABLE_C.COL_4 is less than SYSDATE - 6 months.
if smaller then goto LAST_STEP else goto STEP 3.

STEP 3:
Check if record exists in TABLE_D for the given ID. if a record is present then check if there is
col_4 = '07' present in the rows. If only col_4 = '07' is present then goto the LAST_STEP.

If col_4 contains a value other than '07' (e.g. b1/s1 has col_4 '09' as well), then check if the minimum of TABLE_D.COL_5 is less than SYSDATE - 6 months.
If smaller then goto LAST_STEP, else REJECT the record.

STEP LAST_STEP:
Select and return the set of rows as result


I am curious to see whether the rows returned by the would-be SQL statement are the same as those returned from the PL/SQL.

I hope I am a bit clearer this time.

Thanks to Tim

A reader, September 17, 2011 - 10:51 am UTC

Hi Tim,

Thanks a lot for the query. It works the way I want, and it is such a simple, straightforward method. I had to adjust some logic due to my changed test case:

SELECT distinct a.*
FROM table_a a, table_b b, table_c c, 
(SELECT col_1, col_2, MIN(col_4) AS min_col_3, MAX(col_4) AS max_col_3
FROM table_d 
GROUP BY col_1, col_2)d
WHERE a.col_1 = b.col_1
AND b.col_1 = c.col_1(+)
AND b.col_2 = d.col_1(+)
AND b.col_3 = d.col_2(+)
AND ( ( ( ((c.col_2 = '041' and c.col_3 in ('131','152'))
                or
                (c.col_2 = '04' and c.col_3 in ('19','29'))
              ) OR ( min_col_3 != '07' OR max_col_3 != '07' ) ) AND 
c.col_4 < ADD_MONTHS(SYSDATE,-6) ) OR (min_col_3 = '07' AND max_col_3 = '07') )


But I assume there may be a performance penalty because I am using DISTINCT in the query, considering all the tables are fairly big with 1-5 million records.

Thanks again to Michel and Tom as well for their inputs.

TABLE_A to TABLE_D: avoiding DISTINCT

Stew Ashton, September 18, 2011 - 6:50 am UTC


Apologies, Tom, if you answer this yourself.

It's a bad sign when you have to DISTINCT the result of a big join: the join may be producing lots more rows than necessary. It would be better to join inline views or subqueries that each produce just the rows you need.

Here is a proposal that uses subquery factoring. If I read the constraints right, the final join should not produce duplicate rows.
with qa as (
  select a.col_1 acol1, a.col_2 acol2, b.col_2 bcol2, b.col_3 bcol3
  from table_a a, table_b b
  where a.col_1 = b.col_1
), qb as (
  select col_1 ccol1, min(col_4) ccol4 from table_c
  where (col_2 = '041' and col_3 in ('131','152'))
  or    (col_2 = '04'  and col_3 in ('19','29'))
  group by col_1
), qc as (
  select * from (
    select col_1 dcol1,col_2 dcol2, 
    max(decode(col_4,'07','Y')) dhas07, 
    min(decode(col_4,'07',null, col_5)) dcol5
    from table_d
    group by col_1,col_2
  ) where dhas07 = 'Y' and (
    dcol5 < add_months(sysdate, -6)
    or dcol5 is null
  )
) select * from qa, qb, qc
where acol1 = ccol1(+)
and (bcol2, bcol3) = ((dcol1(+), dcol2(+)))
and (
  ccol1 is null and dcol1 is not null -- C2
  or ccol4 < add_months(sysdate, -6) -- C1
  or ccol4 >= add_months(sysdate, -6) and dcol1 is not null -- C3
)

Thanks Stew

A reader, September 19, 2011 - 12:03 pm UTC

Thanks Stew for your kind suggestion!!

Everything seems to work as per the scenario.

I missed writing one more point in Step 3: in case there is no value '07' in table TABLE_D but there is an entry for the record of TABLE_A and TABLE_B, then we check if it is expired. If expired then select, else reject.

Insert into HR.TABLE_A (COL_1,COL_2) values ('id_005','ID 5');

Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_005','b5','s5');


Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5) values
('b5','s5','sfx_5_1','09',to_date('17-SEP-10','DD-MON-RR'));

Could any one help on this? Thanks a lot.



reader's new requirement

Stew Ashton, September 19, 2011 - 2:08 pm UTC


Try to understand what's going on here so you can maintain this code yourself. If you don't understand it, it's dangerous to use it, since all code including mine may contain bugs.

Look at the qc subquery: the innermost query is
select col_1 dcol1,col_2 dcol2, 
max(decode(col_4,'07','Y')) dhas07, 
min(decode(col_4,'07',null, col_5)) dcol5
from table_d
group by col_1,col_2
I am getting one record that answers two questions:
1) Was there a row with '07'?
2) If there were one or more non-'07' rows, what was the oldest date?

Your first requirement was: there has to be a row with '07' AND either no non-'07' row OR a non-'07' row with an expired date. This I translate to
where dhas07 = 'Y' and (
  dcol5 < add_months(sysdate, -6)
  or dcol5 is null
)
Your new requirement is: there has to be EITHER a row with '07' and no non-'07' row OR a non-'07' row with an expired date. This I translate to
where dhas07 = 'Y' and dcol5 is null
or dcol5 < add_months(sysdate, -6)
The complete statement would then be
with qa as (
  select a.col_1 acol1, a.col_2 acol2, b.col_2 bcol2, b.col_3 bcol3
  from table_a a, table_b b
  where a.col_1 = b.col_1
), qb as (
  select col_1 ccol1, min(col_4) ccol4 from table_c
  where (col_2 = '041' and col_3 in ('131','152'))
  or    (col_2 = '04'  and col_3 in ('19','29'))
  group by col_1
), qc as (
  select * from (
    select col_1 dcol1,col_2 dcol2, 
    max(decode(col_4,'07','Y')) dhas07, 
    min(decode(col_4,'07',null, col_5)) dcol5
    from table_d
    group by col_1,col_2
  )
  where dhas07 = 'Y' and dcol5 is null
  or dcol5 < add_months(sysdate, -6)
) select * from qa, qb, qc
where acol1 = ccol1(+)
and (bcol2, bcol3) = ((dcol1(+), dcol2(+)))
and (
  ccol1 is null and dcol1 is not null -- C2
  or ccol4 < add_months(sysdate, -6) -- C1
  or ccol4 >= add_months(sysdate, -6) and dcol1 is not null -- C3
);

Tom Kyte
September 19, 2011 - 6:00 pm UTC


Try to understand what's going on here so you can maintain this code yourself. If you don't understand it, it's dangerous to use it, since all code including mine may contain bugs.


well said - that keeps me up at night - that FACT of life. Too many - way way way WAY too many people cut and paste from forums, and use it in production. Makes me afraid to fly on planes (flown by wire) sometimes.....

thanks for saying it and thanks for responding here...

Thanks Stew

A reader, September 19, 2011 - 2:32 pm UTC

Thanks Stew,

I must admit I was not very confident with the subquery part for TABLE_D. I was trying with another DECODE condition, but in vain. Maybe I should have spent a bit more time trying to understand it rather than asking so soon :(

Now I understand your code, and that will be really helpful in maintaining it. Thanks for the wonderful explanation and your time. I am now going to read a bit about subquery factoring in the documentation to get a better hold on it!!

Another condition in the query

A reader, September 21, 2011 - 2:01 pm UTC

Hi Tom,

I have a strange condition where there will be an additional column in table_d

alter table TABLE_D add  col_6 date;
update table_d set col_6 = TO_DATE('01-JAN-2015','DD-MON-YYYY');

Insert into HR.TABLE_A (COL_1,COL_2) values ('id_008','ID 8');
Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_008','b8','s8');
Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5, COL_6) values 
('b7','s7','sfx_8_1','09',TO_DATE('01-JAN-2015','DD-MON-YYYY'),to_date('25-SEP-10','DD-MON-RR'));

Insert into HR.TABLE_A (COL_1,COL_2) values ('id_009','ID 9');
Insert into HR.TABLE_B (COL_1,COL_2,COL_3) values ('id_009','b9','s9');
Insert into HR.TABLE_D (COL_1,COL_2,COL_3,COL_4,COL_5, COL_6) values 
('b9','s9','sfx_9_1','09',TO_DATE('01-JAN-2015','DD-MON-YYYY'),TO_DATE('01-JAN-2015','DD-MON-YYYY'));


Now the conditions are:
1. Check if both COL_5 and COL_6 are filled with default date "01-JAN-2015". In case both are filled with records in table_a and table_b, select the record. e.g. id_009

2. In case one of the two columns, col_5 and col_6 is not equal to default date, take the column not equal to default.
So in the case of id_008, col_6 must be considered to check for expiry date(sysdate -6 months)

3. In case both are not equal to default then choose col_5.

So to check the existence of default date in both columns, i used the query:

select count(*) from table_d t1 where t1.col_1 = 'b9'
  and t1.col_2 = t1.col_2
  and t1.col_5 = t1.col_6
  and t1.col_5 = to_date('01-JAN-2015','DD-MON-YYYY')
  and not exists
      (select 1 from table_d t2
      where t2.col_1 = t1.col_1
      and t2.col_2 = t1.col_2
      and t2.col_5 != to_date('01-JAN-2015','DD-MON-YYYY'));


It returns 1 if both columns hold the default and 0 if one of them does not.
Should I make a function of this and use it as an OR condition in the query?

SQL Query

A reader, September 22, 2011 - 2:31 pm UTC

I tried the query with my own understanding and came up with this:

with qa as (
  select a.col_1 acol1, a.col_2 acol2, b.col_2 bcol2, b.col_3 bcol3
  from table_a a, table_b b
  where a.col_1 = b.col_1
), qb as (
  select col_1 ccol1, min(col_4) ccol4 from table_c
  where (col_2 = '041' and col_3 in ('131','152'))
  or    (col_2 = '04'  and col_3 in ('19','29'))
  group by col_1
), qc as (
  select * from (
    select col_1 dcol1,col_2 dcol2, 
    max(decode(col_4,'07','Y')) dhas07, 
    min(decode(col_4,'07',null, decode(col_6, TO_DATE('01.01.2015','DD.MM.YYYY'), col_5, col_6))) dcol5
    from table_d
    group by col_1,col_2
  )
  where dhas07 = 'Y' and dcol5 is null
  or dcol5 < add_months(sysdate, -6)
  or dcol5 = TO_DATE('01.01.2015','DD.MM.YYYY') and dcol1 is not null
) select * from qa, qb, qc
where acol1 = ccol1(+)
and (bcol2, bcol3) = ((dcol1(+), dcol2(+)))
and (
  ccol1 is null and dcol1 is not null -- C2
  or ccol4 < add_months(sysdate, -6) -- C1
  or ccol4 >= add_months(sysdate, -6) and dcol1 is not null -- C3
);

On new condition for TABLE_D

Stew Ashton, September 24, 2011 - 1:34 am UTC


Quoting your new logic:

1. Check if both COL_5 and COL_6 are filled with default date "01-JAN-2015". In case both are filled with records in table_a and table_b, select the record. e.g. id_009

Your new query seems to do that using the condition
or dcol5 = TO_DATE('01.01.2015','DD.MM.YYYY') and dcol1 is not null
However I don't see why you put "and dcol1 is not null" in there.

2. In case one of the two columns, col_5 and col_6 is not equal to default date, take the column not equal to default.
So in the case of id_008, col_6 must be considered to check for expiry date(sysdate -6 months)


Your new query seems to do that.

3. In case both are not equal to default then choose col_5.

BUG! In case both are not equal to default you choose col_6...
decode(col_6, TO_DATE('01.01.2015','DD.MM.YYYY'), col_5, col_6)
Try this instead:
decode(col_5, TO_DATE('01.01.2015','DD.MM.YYYY'), col_6, col_5)
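The DECODE here is just a two-way fallback. As a quick sketch of the same selection logic outside the database (Python, with hypothetical dates standing in for col_5/col_6):

```python
from datetime import date

# The "01-JAN-2015" placeholder used as the default date in this thread
DEFAULT = date(2015, 1, 1)

def effective_date(col_5, col_6):
    # Mirrors decode(col_5, DEFAULT, col_6, col_5): use col_5 unless it
    # holds the default placeholder, in which case fall back to col_6.
    # (If both hold the default, the default comes back, which the outer
    # query then matches with its "dcol5 = default" branch.)
    return col_6 if col_5 == DEFAULT else col_5

# id_008 case: col_5 is the default, so col_6 drives the expiry check
print(effective_date(DEFAULT, date(2010, 9, 25)))          # -> 2010-09-25
# both are real dates: col_5 wins, per rule 3
print(effective_date(date(2011, 3, 1), date(2012, 4, 1)))  # -> 2011-03-01
```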

Thanks Stew

A reader, September 27, 2011 - 11:39 am UTC

Thanks a lot.

Sorry, Tom, for taking up the space, but I am just trying to solve these different situations the way you all do, in simple SQL, where I would otherwise need PL/SQL code.

Now I have changed the requirement and the code a little. Could you please verify whether I have done things right this time?

------------------------------------------

Select rows where records exist in table_A and table_B, with the following conditions:

select the rows from table_c where col_3 is:
(col_2 = '041' and col_3 in ('131','152'))
or (col_2 = '04' and col_3 in ('19','29'))
select the rows from table_D with col_4 = 07. In case multiple rows are present for table_d with col_4 other than 07

Other conditions:

1. No data in table_c and table_d
2. Data in table_c with max(col_4) less than 6 months from sysdate
and
Data in table_D with col_4 = 07. In case multiple rows are present for table_d with col_4 other than 07 then
max(col_5) of them less than 6 months from sysdate
3. No data in table_c
and
Data in table_D with col_4 = 07. In case multiple rows are present for table_d with col_4 other than 07 then
max(col_5) of them less than 6 months from sysdate
4. Data in table_c with max(col_4) less than 6 months from sysdate
and no data in table_d

with qa as (
  select a.col_1 acol1, a.col_2 acol2, b.col_2 bcol2, b.col_3 bcol3
  from table_a a, table_b b
  where a.col_1 = b.col_1
), qb as (
  select col_1 ccol1, max(col_4) ccol4 from table_c
  where (col_2 = '041' and col_3 in ('131','152'))
  or    (col_2 = '04'  and col_3 in ('19','29'))
  group by col_1
), qc as (
  select * from (
    select col_1 dcol1,col_2 dcol2, 
    max(decode(col_4,'07','Y')) dhas07, 
    max(decode(col_4,'07',null, col_5)) dcol5
    from table_d
    group by col_1,col_2
  )
  where dhas07 = 'Y' and dcol5 is null
  or dcol5 < add_months(sysdate, -6)
  or dcol5 = TO_DATE('01.01.2015','DD.MM.YYYY') and dcol1 is not null
) select * from qa, qb, qc
where acol1 = ccol1(+)
and (bcol2, bcol3) = ((dcol1(+), dcol2(+)))
and (
  ccol1 is null and dcol1 is not null -- C3
  or ccol4 <= add_months(sysdate, -6) and (dcol1 is not null or not exists (select 1 from TABLE_D xd where xd.col_1 = bcol2 and xd.col_2 = bcol3)) -- C2 and C4
  or (( ccol1 is null and dcol1 is null) --c1
        and (
            not exists(select 1 from TABLE_C xc where xc.col_1 = acol1)
                and
            not exists(select 1 from TABLE_D xd where xd.col_1 = bcol2 and xd.col_2 = bcol3)
     ))  
);


Thanks a lot for the guidance.

Follow up

A reader, September 28, 2011 - 5:33 pm UTC

Hi Tom,

Could you please confirm whether I have done the right thing, and that there is no serious performance problem with the above query? Is there a better way?

Thanks a lot.
Tom Kyte
September 29, 2011 - 6:56 am UTC

does it run in an acceptable amount of time given your data?
does it return the right result?

If the answers are yes to both, you are done.
If the answers are no to either, you have work to do...

If it works and it works acceptably well - it is good to go.

On followup up from "reader"

Stew Ashton, September 30, 2011 - 2:14 am UTC


They say a stopped clock shows the correct time twice a day. So why don't you just leave your query alone and wait until the requirements change to fit the results?

Weird, ever-changing requirements from someone anonymous: how surprising people are not skipping Oracle Open World to help out!

One last bit of serious advice: think about your test data as much as your query. You need at least two records for each combination in your requirements, one record that passes the test and one that fails. You didn't catch the bug I pointed out previously because the condition was not fully tested.

count records in multiple tables

kiran, November 17, 2011 - 1:00 am UTC

Hi tom,
I have a column deptno in the emp1 table with the value 10; that table has n rows. Similarly, I have 40 tables with the same column and value. I need to display the table name, column name, and count.
Is this possible in a SQL query? If yes, can I get that query?

Tom Kyte
November 17, 2011 - 6:58 pm UTC

select 't1', 'deptno', count(deptno) from t1
union all
select 't2', 'deptno', count(deptno) from t2
union all ...


would be one approach. You could also use a tiny bit of dynamic sql if you wanted:

ops$tkyte%ORA11GR2> create or replace
  2  function countem( p_tname in varchar2, p_cname in varchar2 )
  3  return number
  4  authid current_user
  5  as
  6      l_cursor sys_refcursor;
  7      l_cnt    number;
  8  begin
  9      open l_cursor for
 10       'select count(' || dbms_assert.simple_sql_name( p_cname ) || ') cnt
 11          from ' || dbms_assert.simple_sql_name( p_tname );
 12  
 13      fetch l_cursor into l_cnt;
 14      close l_cursor;
 15  
 16      return l_cnt;
 17  end;
 18  /

Function created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select table_name, column_name, countem(table_name,column_name) cnt
  2    from user_tab_columns
  3   where column_name = 'X';

TABLE_NAME                     COLUMN_NAME                           CNT
------------------------------ ------------------------------ ----------
FACT                           X                                       0
T1                             X                                       1
T2                             X                                       1





or, using a tiny bit of the XML stuff - you could:

ops$tkyte%ORA11GR2> select table_name, column_name ,
  2         extractvalue
  3           ( dbms_xmlgen.getxmltype
  4             ( 'select count('|| column_name||') c from '||
  5                  table_name
  6             ) , '/ROWSET/ROW/C'
  7           ) cnt
  8    from user_tab_columns
  9   where column_name = 'X'
 10  /

TABLE_NAME                     COLUMN_NAME                    CNT
------------------------------ ------------------------------ ----------
FACT                           X                              0
T1                             X                              1
T2                             X                              1


SQL Query

Reader, December 01, 2011 - 8:39 am UTC

Hi Tom,

I have a table as below:

create table test
( id number(5),
  value varchar2(10));

insert into test values(1,'val1');

insert into test values(2,'');

insert into test values(3,'val3');

insert into test values(4,'');  

insert into test values(5,'');

insert into test values(6,'val6');

insert into test values(7,'');

commit;

SQL> select *from test;

        ID VALUE
---------- ----------
         1 val1
         2
         3 val3
         4
         5
         6 val6
         7

I need to write an update statement which will update the table as below:

SQL>  select *from test;

        ID VALUE
---------- ----------
         1 val1
         2 val1
         3 val3
         4 val3
         5 val3
         6 val6
         7 val6

In other words, the null value for an ID should inherit the previous non null value.

Your help will be greatly appreciated.

Thanks

Tom Kyte
December 06, 2011 - 10:26 am UTC

ops$tkyte%ORA11GR2> select * from t;

        ID VALUE
---------- ----------
         1 val1
         2
         3 val3
         4
         5
         6 val6
         7

7 rows selected.

ops$tkyte%ORA11GR2> merge into t
  2  using ( select id, new_val
  3            from (select id, value,
  4                         last_value(value ignore nulls) over (order by id) new_val
  5                    from t)
  6           where value is null) t_new
  7  on (t.id = t_new.id)
  8  when matched then update set t.value = t_new.new_val;

4 rows merged.

ops$tkyte%ORA11GR2> select * from t;

        ID VALUE
---------- ----------
         1 val1
         2 val1
         3 val3
         4 val3
         5 val3
         6 val6
         7 val6

7 rows selected.
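The heart of that MERGE is last_value(value ignore nulls). A minimal sketch of the same carry-forward logic in Python, using the sample data above:

```python
def forward_fill(rows):
    # Carry the last non-null value forward over rows ordered by id,
    # like last_value(value ignore nulls) over (order by id).
    last = None
    out = []
    for id_, value in rows:
        if value is not None:
            last = value
        out.append((id_, last))
    return out

data = [(1, 'val1'), (2, None), (3, 'val3'), (4, None),
        (5, None), (6, 'val6'), (7, None)]
print(forward_fill(data))
# -> [(1, 'val1'), (2, 'val1'), (3, 'val3'), (4, 'val3'),
#     (5, 'val3'), (6, 'val6'), (7, 'val6')]
```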

Is there a more efficient approach?

chan, December 08, 2011 - 12:01 am UTC

My English is not very good; please bear with me.

data is :
with
t1 as
(
select 'a' eq from dual union all
select 'b' eq from dual union all
select 'c' eq from dual union all
select 'd' eq from dual
)
,t2 as
(
select '20111201' dt, 'a' eq, 1 qt from dual union all
select '20111201' dt, 'a' eq, 2 qt from dual union all
select '20111201' dt, 'b' eq, 3 qt from dual union all
select '20111201' dt, 'b' eq, 4 qt from dual union all
select '20111202' dt, 'c' eq, 5 qt from dual union all
select '20111202' dt, 'c' eq, 6 qt from dual
)

I want :
DT EQ QT
20111201 a 3
20111201 b 7
20111201 c 0
20111201 d 0
sub total 10
20111202 a 0
20111202 b 0
20111202 c 11
20111202 d 0
sub total 11
20111203 a 0
20111203 b 0
20111203 c 0
20111203 d 0
sub total 0
total 21

So I implemented it as follows:
select
decode( grouping_id( v2.dt, v2.eq ), 0, v2.dt, 1, 'sub total', 3, 'total' ) dt
, v2.eq
, nvl( sum( t2.qt ), 0 ) qt
from
(
select *
from
t1
, (
select to_char( sysdate - level, 'yyyymmdd' ) dt
from dual
connect by
level <= 7
) v1
) v2
, t2
where 1=1
and v2.dt between '20111201' and '20111203'
and v2.dt = t2.dt(+)
and v2.eq = t2.eq(+)
group by
rollup( v2.dt, v2.eq )


Is there a more efficient approach?

Thank you.
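One way to sanity-check the desired output is to compute it procedurally. A minimal Python sketch of the zero-filled subtotals and grand total that group by rollup(v2.dt, v2.eq) produces for this data:

```python
def report(dates, eqs, rows):
    # Zero-fill every (dt, eq) pair (the cartesian join to v1/t1 in the
    # query), then emit per-date subtotals and a grand total, like
    # group by rollup(dt, eq).
    sums = {}
    for dt, eq, qt in rows:
        sums[(dt, eq)] = sums.get((dt, eq), 0) + qt
    out, total = [], 0
    for dt in dates:
        subtotal = 0
        for eq in eqs:
            qt = sums.get((dt, eq), 0)
            subtotal += qt
            out.append((dt, eq, qt))
        out.append((dt, 'sub total', subtotal))
        total += subtotal
    out.append(('total', '', total))
    return out

t2 = [('20111201', 'a', 1), ('20111201', 'a', 2),
      ('20111201', 'b', 3), ('20111201', 'b', 4),
      ('20111202', 'c', 5), ('20111202', 'c', 6)]
lines = report(['20111201', '20111202', '20111203'], ['a', 'b', 'c', 'd'], t2)
```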

mehul, December 19, 2011 - 3:18 am UTC

Retrieve the salesman name in ‘New Delhi’ whose efforts have resulted in at least one sales
transaction.

Table Name : SALES-MAST

Salesman-no  Name           City
B0001        Puneet Kumar   Varanasi
B0002        Pravin Kumar   Varanasi
B0003        Radha Krishna  New Delhi
B0004        Brijesh Kumar  New Delhi
B0005        Tushar Kumar   Allahabad
B0006        Nitin Kumar    Allahabad
B0007        Mahesh Kumar   Gr. Noida

Table Name : SALES-ORDER

Order-no  Order-date  Salesman-no
S0001     10-Apr-07   B0001
S0002     28-Apr-07   B0002
S0003     05-May-07   B0003
S0004     12-June-07  B0004
S0005     15-July-07  B0005
S0006     18-Aug-07   B0006
Tom Kyte
December 19, 2011 - 7:59 am UTC

huh?

rolling twelve month invoice SUM output column for supplier

Mahesh, December 22, 2011 - 4:43 am UTC

Hi,

I need to add the below column to "ap invoice timeline report":-

Add rolling twelve month invoice SUM output column, that sums up global invoice spend for supplier, for previous 365 days.

Please give me the sql query to add this column.

Tom Kyte
December 22, 2011 - 10:12 am UTC

I don't have any example to work with so no SQL for you.

no create
no inserts
no look

not promising anything either - I don't see anything you've typed in yet that clearly explains the problem at hand - this tiny blurb here is not meaningful.

SQL Query

Reader, December 23, 2011 - 8:52 am UTC

Hi Tom,

I have a table that looks like below:

CREATE TABLE test
(oid     NUMBER,
 state   VARCHAR2(20),
 version NUMBER
);

INSERT INTO test VALUES(1,'Invalid',1);
INSERT INTO test VALUES(1,'Review',2);
INSERT INTO test VALUES(1,'Pending',3);
INSERT INTO test VALUES(1,'Approved',4);

COMMIT;

SQL> select *from test;

       OID STATE                   VERSION
---------- -------------------- ----------
         1 Invalid                       1
         1 Review                        2
         1 Pending                       3
         1 Approved                      4


I need to write a query to display four additional columns as below:


       OID STATE                   VERSION INITAL_STATE         PREVIOUS_STATE       NEXT_STATE           FINAL_STATE
---------- -------------------- ---------- -------------------- -------------------- -------------------- --------------------
         1 Invalid                       1 Invalid              Invalid              Review               Approved
         1 Review                        2 Invalid              Invalid              Pending              Approved
         1 Pending                       3 Invalid              Review               Approved             Approved
         1 Approved                      4 Invalid              Pending              Approved             Approved


Is it possible to achieve this without using analytical functions? 

I need to run the query in SQLServer where support for analytical functions is very limited.

Many Thanks

Tom Kyte
December 23, 2011 - 9:56 am UTC

Is it possible to achieve this without using analytical functions?


yes it is, but it would be excessively expensive and inefficient.

I need to run the query in SQLServer where support for analytical functions is
very limited.


interesting, and how does this relate to oracle.com?


I could - but I'd end up using ROWNUM or "keep dense rank" type queries. I don't know how to do a "top-n" query in sql server.
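For what it's worth, the four derived columns are easy to state procedurally. A minimal Python sketch over the sample rows (in Oracle the analytic route would be first_value/lag/lead/last_value over the oid partition):

```python
rows = [(1, 'Invalid', 1), (1, 'Review', 2),
        (1, 'Pending', 3), (1, 'Approved', 4)]

def add_states(rows):
    # Attach initial/previous/next/final state to each (oid, state, version)
    # row within its oid group, ordered by version.  Edge defaults follow
    # the expected output above: previous falls back to the initial state,
    # next falls back to the final state.
    by_oid = {}
    for oid, state, version in sorted(rows, key=lambda r: (r[0], r[2])):
        by_oid.setdefault(oid, []).append((state, version))
    out = []
    for oid, grp in by_oid.items():
        for i, (state, version) in enumerate(grp):
            initial, final = grp[0][0], grp[-1][0]
            previous = grp[i - 1][0] if i > 0 else initial
            nxt = grp[i + 1][0] if i + 1 < len(grp) else final
            out.append((oid, state, version, initial, previous, nxt, final))
    return out

res = add_states(rows)
```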

Need help for update

A reader, December 27, 2011 - 8:50 pm UTC

Hi Tom,

I have a table where I have to do a update.

create table demo(col1 varchar2(10));

insert into demo values('XX0024');
insert into demo values('XX4345');
insert into demo values('XX2300');
insert into demo values('XX0124');
insert into demo values('XX2024');
insert into demo values('XX0004');

when I update the result should be
col1
-----
24
4345
2300
124
2024
4

That is, I have to remove the XX prefix and any leading zeros that follow it. Please suggest.
Tom Kyte
December 29, 2011 - 10:38 am UTC

ops$tkyte%ORA11GR2> update demo set col1 = ltrim( substr( col1, 3 ), '0' );

6 rows updated.

ops$tkyte%ORA11GR2> select * from demo;

COL1
----------
24
4345
2300
124
2024
4

6 rows selected.
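The string logic (drop the two-character prefix, then strip leading zeros) is the same in any language; a Python equivalent of ltrim(substr(col1, 3), '0'):

```python
def strip_prefix(s):
    # Equivalent of ltrim(substr(col1, 3), '0'): drop the first two
    # characters, then strip any leading zeros.
    return s[2:].lstrip('0')

for v in ['XX0024', 'XX4345', 'XX2300', 'XX0124', 'XX2024', 'XX0004']:
    print(strip_prefix(v))   # -> 24, 4345, 2300, 124, 2024, 4
```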


update rows of table

A reader, January 21, 2012 - 1:07 am UTC

Hi Tom,

Could you please explain whether we can update some rows in a table while the rest of the table can still be updated?

If a user updates some rows of a table, the lock should cover only those rows, with the rest of the table left unlocked. I was reading about row-level locking and SELECT FOR UPDATE but could not find an appropriate example.

Could you please provide an example?

Tom Kyte
January 21, 2012 - 10:30 am UTC

sure.


drop table t;
set echo on
create table t ( x int, y int );
insert into t values ( 1, 0 );
insert into t values ( 2, 0 );
commit;

update t set y = 42 where x = 1;
set echo off
prompt in another session issue:
prompt update t set y = 55 where x = 2;;
prompt and come back here and hit enter
pause
set echo on
commit;
set echo off
prompt now you can commit in the other session
prompt note that no one blocked ever



run that script in sqlplus (or whatever) and note that no one ever blocks.

performance on rownum

pranavmithra, February 09, 2012 - 3:56 am UTC

Hi Tom,
I have a SQL query that takes more than 2 seconds. Is there a better way to write the SQL below so that it runs almost instantly?
Here is my requirement:

I have a table that contains each employee's working days in a week, as shown below:

EMPLID SUN MON TUES WED THURS FRI SAT STD_DAYS
076479 N Y Y Y Y Y N 5

I need to find the number of working days for this employee given a start date and a day count. Here is my query:

Select
COUNT(*)
FROM
( SELECT rownum rnum FROM all_objects WHERE rownum <= 5 -- 5 is the 'days count'
)
WHERE 1=1
and (TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case MON when 'Y' then 'mon' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND MON='Y')
or
TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case TUES when 'Y' then 'tue' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND TUES='Y')
or
TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case WED when 'Y' then 'wed' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND WED='Y')
or
TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case THURS when 'Y' then 'thu' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND THURS='Y')
or
TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case FRI when 'Y' then 'fri' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND FRI='Y')
or
TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case SAT when 'Y' then 'sat' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND SAT='Y')
or
TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+rnum-1, 'dy' ) in(SELECT case SUN when 'Y' then 'sun' else '' end FROM PS_RS_WORKER_TBL WHERE EMPLID='076479' AND SUN='Y'));

This returns a count of 4, meaning that starting from 01-NOV-2011, over the next 5 days, he will be working only 4 days, as per the row shown in my table. Is there a better way to write this SQL? Thanks in advance.
Tom Kyte
February 09, 2012 - 5:45 am UTC

gotta just say how much I don't like 'models' like this :(

anyway - don't use all_objects, that is a rather expensive view to use for this


ops$tkyte%ORA11GR2> create table t
  2  ( emplid varchar2(10),
  3    sun    varchar2(1),
  4    mon    varchar2(1),
  5    tue    varchar2(1),
  6    wed    varchar2(1),
  7    thu    varchar2(1),
  8    fri    varchar2(1),
  9    sat    varchar2(1),
 10  std_days number
 11  )
 12  /

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t values ( '076479', 'N', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 5 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable dt varchar2(20)
ops$tkyte%ORA11GR2> exec :dt := '01-nov-2011'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> with data
  2  as
  3  (select t.*, level R from t start with emplid = '076479' connect by level <= 5)
  4  select
  5  sum(
  6  case TO_CHAR(to_date(:dt,'DD-MON-YYYY')+r-1, 'dy' )
  7  when 'sun' then case when sun = 'Y' then 1 else 0 end
  8  when 'mon' then case when mon = 'Y' then 1 else 0 end
  9  when 'tue' then case when tue = 'Y' then 1 else 0 end
 10  when 'wed' then case when wed = 'Y' then 1 else 0 end
 11  when 'thu' then case when thu = 'Y' then 1 else 0 end
 12  when 'fri' then case when fri = 'Y' then 1 else 0 end
 13  when 'sat' then case when sat = 'Y' then 1 else 0 end
 14  end
 15  ) cnt
 16  from data;

       CNT
----------
         4
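The query boils down to "walk N consecutive dates and test each one's weekday flag". A minimal Python sketch of that logic, using a Mon-Fri schedule matching employee 076479:

```python
from datetime import date, timedelta

# Weekday flags for employee '076479' from the thread: works Mon-Fri.
# Python's weekday(): Monday = 0 ... Sunday = 6.
works = {0: True, 1: True, 2: True, 3: True, 4: True, 5: False, 6: False}

def working_days(start, days):
    # Walk 'days' consecutive dates from 'start' and count the ones whose
    # weekday flag is set -- the connect-by + CASE query above.
    return sum(works[(start + timedelta(d)).weekday()] for d in range(days))

print(working_days(date(2011, 11, 1), 5))  # 01-Nov-2011 is a Tuesday -> 4
```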

Need sql query instead of pl/sql

pranavmithra, February 10, 2012 - 2:13 am UTC

Brilliant, Tom!! Actually, I need this in SQL instead of PL/SQL. I am very bad at writing SQL and am finding it difficult to convert this. Could you please write it in SQL? It would be a great help for me.
Tom Kyte
February 10, 2012 - 5:07 pm UTC

umm, i didn't write any plsql anywhere above. It was 100% pure SQL

ops$tkyte%ORA11GR2> with data
  2  as
  3  (select t.*, level R from t start with emplid = '076479' connect by level <= 5)
  4  select
  5  sum(
  6  case TO_CHAR(to_date(:dt,'DD-MON-YYYY')+r-1, 'dy' )
  7  when 'sun' then case when sun = 'Y' then 1 else 0 end
  8  when 'mon' then case when mon = 'Y' then 1 else 0 end
  9  when 'tue' then case when tue = 'Y' then 1 else 0 end
 10  when 'wed' then case when wed = 'Y' then 1 else 0 end
 11  when 'thu' then case when thu = 'Y' then 1 else 0 end
 12  when 'fri' then case when fri = 'Y' then 1 else 0 end
 13  when 'sat' then case when sat = 'Y' then 1 else 0 end
 14  end
 15  ) cnt
 16  from data;

       CNT
----------
         4



where did you see any plsql?


Getting error while compiling in sql develope

pranavmithra, February 12, 2012 - 1:29 pm UTC

Hi Tom,

I am executing the SQL you gave above in SQL Developer:

with data
as
(select t.*, level R from PS_RS_WORKER_TBL start with emplid = '076479' connect by level <= 5)
select
sum(
case TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+r-1, 'dy' )
when 'sun' then case when sun = 'Y' then 1 else 0 end
when 'mon' then case when mon = 'Y' then 1 else 0 end
when 'tues' then case when tues = 'Y' then 1 else 0 end
when 'wed' then case when wed = 'Y' then 1 else 0 end
when 'thurs' then case when thurs = 'Y' then 1 else 0 end
when 'fri' then case when fri = 'Y' then 1 else 0 end
when 'sat' then case when sat = 'Y' then 1 else 0 end
end
) cnt
from data;

Error:
----------------
ORA-00600: internal error code, arguments: [qcscbpcbua],[],[],[],[],[],[],[]
00600. 00000 - "internal error code, arguments: [%s],[%s],[%s],[%s],[%s],[%s],[%s],[%s]"
Cause: This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered an exceptional condition.
Action: Report as a bug - the first argument is the internal error number.
Error at Line: 1

------

When I run the SQL again, I get a "closed connection" error at line 1.

Could you please help me with this? Many thanks in advance.


Tom Kyte
February 13, 2012 - 8:04 am UTC

please take the action to heart. They told you what needs to be done at this point.


Let's try a rewrite to see if we cannot work around it temporarily


select
sum(
case TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+r-1, 'dy' )
when 'sun' then case when sun = 'Y' then 1 else 0 end
when 'mon' then case when mon = 'Y' then 1 else 0 end
when 'tues' then case when tues = 'Y' then 1 else 0 end
when 'wed' then case when wed = 'Y' then 1 else 0 end
when 'thurs' then case when thurs = 'Y' then 1 else 0 end
when 'fri' then case when fri = 'Y' then 1 else 0 end
when 'sat' then case when sat = 'Y' then 1 else 0 end
end
) cnt
from (select t.*, level R from PS_RS_WORKER_TBL start with emplid = '076479' connect
by level <= 5) data;

Correct sql

pranavmithra, February 12, 2012 - 1:34 pm UTC

Here is the correct sql for the above error i am getting:

with data
as
(select t.*, level R from PS_RS_WORKER_TBL t start with t.emplid = '076479' connect by level <= 5)
select
sum(
case TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+r-1, 'dy' )
when 'sun' then case when sun = 'Y' then 1 else 0 end
when 'mon' then case when mon = 'Y' then 1 else 0 end
when 'tues' then case when tues = 'Y' then 1 else 0 end
when 'wed' then case when wed = 'Y' then 1 else 0 end
when 'thurs' then case when thurs = 'Y' then 1 else 0 end
when 'fri' then case when fri = 'Y' then 1 else 0 end
when 'sat' then case when sat = 'Y' then 1 else 0 end
end
) cnt
from data;

Thanks
Tom Kyte
February 13, 2012 - 8:09 am UTC

try one of these two

ops$tkyte%ORA11GR2> select
  2  sum(
  3  case TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+r-1, 'dy' )
  4  when 'sun' then case when sun = 'Y' then 1 else 0 end
  5  when 'mon' then case when mon = 'Y' then 1 else 0 end
  6  when 'tues' then case when tues = 'Y' then 1 else 0 end
  7  when 'wed' then case when wed = 'Y' then 1 else 0 end
  8  when 'thurs' then case when thurs = 'Y' then 1 else 0 end
  9  when 'fri' then case when fri = 'Y' then 1 else 0 end
 10  when 'sat' then case when sat = 'Y' then 1 else 0 end
 11  end
 12  ) cnt
 13  from (select t.*, level R from PS_RS_WORKER_TBL t start with t.emplid = '076479' connect by level <= 5) data;

       CNT
----------
         2

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select
  2  sum(
  3  case TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+r-1, 'dy' )
  4  when 'sun' then case when sun = 'Y' then 1 else 0 end
  5  when 'mon' then case when mon = 'Y' then 1 else 0 end
  6  when 'tues' then case when tues = 'Y' then 1 else 0 end
  7  when 'wed' then case when wed = 'Y' then 1 else 0 end
  8  when 'thurs' then case when thurs = 'Y' then 1 else 0 end
  9  when 'fri' then case when fri = 'Y' then 1 else 0 end
 10  when 'sat' then case when sat = 'Y' then 1 else 0 end
 11  end
 12  ) cnt
 13  from PS_RS_WORKER_TBL, (select level R from dual connect by level <= 5)
 14  where PS_RS_WORKER_TBL.emplid = '076479';

       CNT
----------
         2


Performance is very slow

pranavmithra, February 13, 2012 - 1:09 pm UTC

Hi Tom,

As per your latest solution, I created table t exactly as shown, inserted one row, and ran your SQL; it takes an average of 0.8 seconds to execute in SQL Developer. But when I use my own table instead of t, which has 11,330 rows, it takes endless time to show the output; SQL Developer either gets stuck or shows the error posted above. I am not sure what the reason is. Please help. Thanks in advance.
Tom Kyte
February 13, 2012 - 2:12 pm UTC

umm, you do have an index on that column

and that index is being used?

this should take about 0.00 seconds. Yes, 0.00 - as in less than 1/100th of a second.


REGARDLESS of the size of the table. I don't care if there is one row or one billion rows - to read a single record and do this query should be instantaneous.


show us the TKPROF report.

performance is little better now

pranavmithra, February 13, 2012 - 2:04 pm UTC

Hi Tom,

I have tried both of the solutions you suggested; the second works better, executing in an average of 0.8 seconds. The problem is that I am running this SQL for many thousands of employees, which takes much more time. Please let me know if you have a better solution. You are the ultimate, Tom. Thank you very much.
Tom Kyte
February 13, 2012 - 2:56 pm UTC

why not run a single query for all of the employees???? why would you even think about running a single query thousands of times when you can run one query to get them all at once.

In looking at this "deeper" - it seems that given the date you pass in (yours was '01-NOV-2011'), you want to count the Y's for tue, wed, thu, fri, sat, sun.

If you passed in 02-nov-2001 - you would want wed, thu, fri, sat, sun, mon.

and so on.

That is what your original query does anyway. Given that, we can simplify this greatly:

ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table t
ops$tkyte%ORA11GR2> ( emplid varchar2(10),
ops$tkyte%ORA11GR2>   sun    varchar2(1),
ops$tkyte%ORA11GR2>   mon    varchar2(1),
ops$tkyte%ORA11GR2>   tue    varchar2(1),
ops$tkyte%ORA11GR2>   wed    varchar2(1),
ops$tkyte%ORA11GR2>   thu    varchar2(1),
ops$tkyte%ORA11GR2>   fri    varchar2(1),
ops$tkyte%ORA11GR2>   sat    varchar2(1),
ops$tkyte%ORA11GR2> std_days number
ops$tkyte%ORA11GR2> )
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t
ops$tkyte%ORA11GR2> select to_char(rownum, 'fm000000'),
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        case when dbms_random.value < 0.5 then 'N' else 'Y' end,
ops$tkyte%ORA11GR2>        5
ops$tkyte%ORA11GR2>   from dual
ops$tkyte%ORA11GR2>  connect by level <= 25000
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> column str format a10
ops$tkyte%ORA11GR2> variable dt varchar2(20)
ops$tkyte%ORA11GR2> exec :dt := '01-nov-2011'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set timing on
ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> select emplid,
  2             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('1','4','5','6','7') and sun='Y' then 1 else 0 end+
  3             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('1','2','5','6','7') and mon='Y' then 1 else 0 end+
  4             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('1','2','3','6','7') and tue='Y' then 1 else 0 end+
  5             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('1','2','3','4','7') and wed='Y' then 1 else 0 end+
  6             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('1','2','3','4','5') and thu='Y' then 1 else 0 end+
  7             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('2','3','4','5','6') and fri='Y' then 1 else 0 end+
  8             case when to_char( to_date(:dt,'DD-MON-YYYY'), 'D' ) in ('3','4','5','6','7') and sat='Y' then 1 else 0 end cnt,
  9             sun || mon || tue || wed || thu || fri || sat str
 10    from t
 11  /

25000 rows selected.

Elapsed: 00:00:00.28

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 27794 |   569K|    30   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T    | 27794 |   569K|    30   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1774  consistent gets
          0  physical reads
          0  redo size
     691362  bytes sent via SQL*Net to client
      18745  bytes received via SQL*Net from client
       1668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      25000  rows processed

ops$tkyte%ORA11GR2> set autotrace off



so, time to do 25,000 employees is less than 1/4 of a second in the database.

Just add a where clause to get a single employee and the response time would be barely measurable.



The numbers I used in the "in" will vary based on your NLS_TERRITORY.

sunday for me is '1', if sun is not '1' - adjust the numbers.


Or you can turn '1' into 'sun' and use the dy format instead of 'D'
turn '2' into 'mon' and so on

But then you will be LANGUAGE specific. Your choice :)
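As a locale-independent cross-check of the day-of-week arithmetic (a sketch outside the database, not part of the original answer): Python's datetime.weekday() numbers the days 0 (Monday) through 6 (Sunday) regardless of language or territory settings.

```python
from datetime import date

# weekday() is locale-independent: 0 = Monday ... 6 = Sunday,
# unlike TO_CHAR(..., 'D') (shifts with NLS_TERRITORY) or
# 'DY' (shifts with NLS_DATE_LANGUAGE).
d = date(2011, 11, 1)  # the '01-nov-2011' bind value used above
names = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']

print(d.weekday())         # 1
print(names[d.weekday()])  # tue -- 01-nov-2011 was a Tuesday
```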


NLS nitpicking

Stew Ashton, February 14, 2012 - 6:39 am UTC


"But then you will be LANGUAGE specific. Your choice :)"

Your example is already language specific:
> VARIABLE DT varchar2(24)
> EXEC :DT := '01-nov-2011';

anonymous block completed

> select
TO_CHAR(
  TO_DATE(
    :DT, 'DD-MON-YYYY'
  ), 'DY'
) dy
from DUAL;

Error starting at line 52 in command:
select
TO_CHAR(
  TO_DATE(
    :DT, 'DD-MON-YYYY'
  ), 'DY'
) dy
from DUAL
Error report:
SQL Error: ORA-01843: ce n'est pas un mois valide
01843. 00000 -  "not a valid month"
*Cause:
*Action:

> select
TO_CHAR(
  TO_DATE(
    :DT, 'DD-MON-YYYY', 'nls_date_language=''AMERICAN'''
  ), 'DY'
) dy
from dual;

DY
----------------
MAR.

> select
TO_CHAR(
  TO_DATE(
    :DT, 'DD-MON-YYYY', 'nls_date_language=''AMERICAN'''
  ), 'DY', 'nls_date_language=''AMERICAN'''
) dy
from dual;

DY
------------
TUE
If we use 'D', we depend on NLS_TERRITORY which we cannot specify in the query.
If we use 'DY', we depend on NLS_DATE_LANGUAGE which we can specify in the query.
Tom Kyte
February 14, 2012 - 8:55 am UTC

well, in France you would be using the correct bind variable values, wouldn't you?


You are fantastic TOM !!!

pranavmithra, February 16, 2012 - 3:10 am UTC

Hi Tom,

As of now, for the time being, I have used the second option from your previous followup along with my PeopleSoft logic, and I am able to execute within 2-3 minutes for 40,000 rows. Your answer was really, really helpful to me. You are absolutely brilliant. I will try the other option you gave and let you know.

select
  sum(
    case TO_CHAR(to_date('01-NOV-2011','DD-MON-YYYY')+r-1, 'dy' )
      -- 'dy' returns three-letter abbreviations, so the labels must be
      -- 'tue' and 'thu' (the tues/thurs column names are unaffected)
      when 'sun' then case when sun = 'Y' then 1 else 0 end
      when 'mon' then case when mon = 'Y' then 1 else 0 end
      when 'tue' then case when tues = 'Y' then 1 else 0 end
      when 'wed' then case when wed = 'Y' then 1 else 0 end
      when 'thu' then case when thurs = 'Y' then 1 else 0 end
      when 'fri' then case when fri = 'Y' then 1 else 0 end
      when 'sat' then case when sat = 'Y' then 1 else 0 end
    end
  ) cnt
from PS_RS_WORKER_TBL, (select level R from dual connect by level <= 5)
where PS_RS_WORKER_TBL.emplid = '076479';
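The row-multiplication trick in that query (cross joining with a five-row generator so each of five consecutive dates is checked against the matching flag column) reduces to a small loop. A sketch in Python, with a hypothetical flags tuple standing in for one PS_RS_WORKER_TBL row:

```python
from datetime import date, timedelta

# one 'Y'/'N' flag per weekday, Monday first (weekday() order);
# this tuple is hypothetical stand-in data, not from the real table
flags = ('Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N')

def working_days(start, flags, span=5):
    """Count the days in [start, start + span) whose weekday flag is 'Y'."""
    return sum(flags[(start + timedelta(days=r)).weekday()] == 'Y'
               for r in range(span))

print(working_days(date(2011, 11, 1), flags))  # Tue..Sat window -> 4
```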

Query to Get

Raj, February 17, 2012 - 1:54 am UTC

I have item table having items with multiple MRPs with date received like

ITemNo MRP Date
101 50 12/02/2012
101 55 14/02/2012
101 60 22/02/2012
101 65 11/02/2012
101 57 10/02/2012

how to write a query in oracle to get last three MRPs of an item
Tom Kyte
February 17, 2012 - 5:29 am UTC

select *
  from (select t.*,
               row_number() over (partition by itemno order by date DESC) rn
          from t)
 where rn <= 3;
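The ROW_NUMBER() pattern is portable to any database with window functions. A runnable sketch against SQLite (via Python's sqlite3) using Raj's sample data; the table and column names are illustrative, and the date column is renamed rcv_date since DATE is a reserved word:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (itemno integer, mrp integer, rcv_date text)")
conn.executemany(
    "insert into t values (?, ?, ?)",
    [(101, 50, '2012-02-12'), (101, 55, '2012-02-14'),
     (101, 60, '2012-02-22'), (101, 65, '2012-02-11'),
     (101, 57, '2012-02-10')])

# rank rows per item by date, newest first, and keep the top three
rows = conn.execute("""
    select itemno, mrp, rcv_date
      from (select t.*,
                   row_number() over (partition by itemno
                                          order by rcv_date desc) rn
              from t)
     where rn <= 3
     order by rn
""").fetchall()
print([r[1] for r in rows])  # [60, 55, 50] -- the three latest MRPs
```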



SQL Query for this business scenario

A reader, February 23, 2012 - 9:26 pm UTC


Hi Tom,


I want a query for the following business scenario.

FUNDID and CID are one to one mapping. The application provides a set of FUNDID and CID as Input to Oracle.


CREATE TABLE HR.DEMO
(
  FUNDID         VARCHAR2(100 BYTE),
  CID            VARCHAR2(100 BYTE),
  SCCGROUP       VARCHAR2(100 BYTE),
  LEVELTYPE      VARCHAR2(100 BYTE),
  ENUM           VARCHAR2(100 BYTE),
  VALUE1         VARCHAR2(100 BYTE),
  VALUE2         VARCHAR2(100 BYTE),
  STARTDATE      DATE,
  ENDDATE        DATE
)


Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '00', 'I', '37', 
    'P', 'sUr', TO_DATE('08/28/2006 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('11/06/2025 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '00', 'I', '89', 
    'CHE', TO_DATE('01/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('11/30/2040 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '00', 'I', '96', 
    'CH1', 'XZ', TO_DATE('09/02/2009 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/09/2072 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '00', 'I', '101', 
    'C1C', 'TXc', TO_DATE('08/04/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('06/20/2068 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '10', 'I', '96', 
    'RC0', TO_DATE('08/21/2001 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/26/2039 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '10', 'I', '101', 
    'CH1', 'gIT', TO_DATE('08/23/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('06/16/2089 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '20', 'E', '71', 
    'C1C', 'mMg', TO_DATE('10/28/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/17/2048 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '20', 'E', '96', 
    '11', 'fe', TO_DATE('05/06/2000 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('02/09/2073 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, VALUE2, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '20', 'I', '89', 
    'CT1', 'vqI', TO_DATE('08/01/2000 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('04/17/2089 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, STARTDATE, ENDDATE)
 Values
   ('F1', 'ACG4Z', '00', 'E', '89', 
    'ABC', TO_DATE('01/16/2007 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('11/30/2040 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
Insert into HR.DEMO
   (FUNDID, CID, SCCGROUP, LEVELTYPE, ENUM, 
    VALUE1, STARTDATE, ENDDATE)
 Values
   ('F2', 'ACG4Z', '00', 'E', '89', 
    'ABC', TO_DATE('01/16/2009 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('11/30/2060 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));



1. Get all the rows of the table DEMO for the given FundID and CID.

2. Check all the groups associated with the given FundID and CID; e.g. in this case we have
00
10

3. In case a single group is present for the given FundID and CID, check the ENUM fields. There can be a single ENUM or duplicated ENUMs,
e.g. 'F1', 'ACG4Z', '00', 'E', '89','CHE'
'F1', 'ACG4Z', '00', 'I', '89','ABC'

4. In case only one ENUM is present but with multiple entries, look at the number of rows. In case of multiple rows, check whether the Parameter_DATE falls between STARTDATE and ENDDATE.
The logic is OR logic: if I have three rows for a single ENUM, the date can fall in any of the three rows.
Also, for the same ENUM with multiple rows, if it has LEVELTYPE = 'E' then it is an AND condition, and if it is 'I' then it is an OR condition.

Note: for a given group, if the condition is satisfied for one ENUM and we have other ENUMs, then we have to check all of the ENUMs. A different ENUM means an AND condition,
e.g. the record must be in ENUM1, ENUM2, ENUM3, ...; if present in all ENUMs of a given group, return one and do not check further groups.


5. In case of multiple ENUMs, we have to check the rows for each ENUM. In case the Parameter_DATE does not fall between STARTDATE and ENDDATE, we traverse to the next ENUM and do the same operation.

6. The parameter value should be present in all the groups. In case it is not satisfied in any one of the groups, we move to the next fund and CID, and the processing continues until the parameter date is satisfied.


Can it be done in one SQL statement, or do we need a procedure?

Please do let me know in case you need any more information.
Tom Kyte
February 25, 2012 - 4:56 am UTC

*homework*, this smells a lot like homework.


You are asking for six separate result sets here - obviously you could UNION ALL them together for a single SQL statement to get all of the data.

1) trivial, if you cannot do this, you haven't really done sql ever?
2) see #1
3) what does "check the enum fields" *mean* ??? what do you mean by check and what do you want to do when you check them?

4) sounds like 3 is just a lead in for 4 here? another check again?

......


in short, I have no idea what you really mean here - I see now this isn't six different queries, it was meant to be a specification - but it isn't one I can really follow?


It sounds like you were trying to tell us how to select a record - if so, please try again. and make sure to provide example data that covers ALL CASES - not just one. give us a single group, give us multiple groups, give us data to test EVERY SINGLE CONDITION.

SQL Query

A reader, March 02, 2012 - 3:51 am UTC

hi Tom,

How can we test the following scenario:

I have a table with two columns. If a value is present in only one column, the parameter should be matched exactly. If values are present in both columns, the match should be a range, like a BETWEEN clause.

CREATE TABLE tbl1(col1 varchar2(3), col2 varchar2(3));


condition 1: the parameter should be matched exactly, as col2 is null

insert into tbl1 values('A',NULL);


condition 2: the parameter should be between col1 and col2
insert into tbl1 values('P','S');



Now the parameter can be a single element or an IN list:

p1 := 'C';
p2 := 'R,T,W';
p3 := 'A';

so if I pass p1, it returns nothing, as it matches neither row

if I pass p2, it passes, since 'R' falls in the range col1..col2 of row 2

if I pass p3, it passes, as it matches col1 of row 1


I need to do something like the following (pseudocode) for a single or in-list parameter:

select 1 from dual where exists(select 1 from tbl1 where
(case when col2 is null then <parameter_passed> = col1
      when col2 is not null then <parameter_passed> between col1 and col2
 end)
);


Please suggest.
Tom Kyte
March 02, 2012 - 5:44 am UTC

ops$tkyte%ORA11GR2> cREATE TABLE t(col1 varchar2(3), col2 varchar2(3));

Table created.

ops$tkyte%ORA11GR2> insert into t values('A',NULL);

1 row created.

ops$tkyte%ORA11GR2> insert into t values('P','S');

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable txt varchar2(20)
ops$tkyte%ORA11GR2> exec :txt := 'C'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> with data
  2  as
  3  (
  4  select
  5    trim( substr (txt,
  6          instr (txt, ',', 1, level  ) + 1,
  7          instr (txt, ',', 1, level+1)
  8             - instr (txt, ',', 1, level) -1 ) )
  9      as token
 10     from (select ','||:txt||',' txt
 11             from dual)
 12   connect by level <=
 13      length(:txt)-length(replace(:txt,',',''))+1
 14   )
 15  select *
 16    from data
 17   where exists ( select null
 18                    from t
 19                   where t.col1 <= data.token
 20                     and nvl(t.col2,t.col1) >= data.token )
 21  /

no rows selected

ops$tkyte%ORA11GR2> exec :txt := 'R,T,W'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> /

TOKEN
----------------------------------
R

ops$tkyte%ORA11GR2> exec :txt := 'A'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> /

TOKEN
----------------------------------
A
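The same split-then-probe logic, sketched in plain Python for clarity (exact match when col2 is null, range match otherwise); the function name is illustrative:

```python
def matches(txt, rows):
    """Return the tokens of a comma-separated list that hit any
    (col1, col2) row: exact match when col2 is None, else a range."""
    tokens = [t.strip() for t in txt.split(',')]
    return [tok for tok in tokens
            if any(tok == c1 if c2 is None else c1 <= tok <= c2
                   for c1, c2 in rows)]

rows = [('A', None), ('P', 'S')]
print(matches('C', rows))      # []
print(matches('R,T,W', rows))  # ['R']
print(matches('A', rows))      # ['A']
```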


many times

sandeep, March 12, 2012 - 2:02 am UTC

I am writing a query:

select
count(*),
count(DECODE(to_char(hire_date,'YYYY'), &year, 'c1')) "year"
from employees



the output is

  COUNT(*)       year
---------- ----------
       109         23

but I want to use the user input as the alias (where I used "year")
Tom Kyte
March 12, 2012 - 7:47 am UTC

replace "year" with "&something_else", and supply something_else just as you supplied year for &year.

no bind variables :(

sqlplus isn't really a programming environment - I hope this isn't something that is going to be used very often.

reader

sandeep, March 12, 2012 - 8:34 am UTC

I am writing a query:

select
count(*),
count(DECODE(to_char(hire_date,'YYYY'), &year, 'c1')) "year"
from employees



the output is

  COUNT(*)       year
---------- ----------
       109         23

but I want to use the user input as the alias (where I used "year")



Followup March 12, 2012 - 7am Central time zone:
replace "year" with "&something_else", and supply something_else just as you supplied year for &year.

no bind variables :(

sqlplus isn't really a programming environment - I hope this isn't something that is going to be used very often.

this is not working. Can you show me an example with the result?
Tom Kyte
March 12, 2012 - 9:03 am UTC

ops$tkyte%ORA11GR2> !cat test.sql
define year = &1
define something_else ="&2"

select count(*),
             count(DECODE(to_char(hiredate,'YYYY'), &year, 'c1')) "&something_else"
                 from scott.emp
/

ops$tkyte%ORA11GR2> @test 1981 "my tag"

  COUNT(*)     my tag
---------- ----------
        14         10
ops$tkyte%ORA11GR2> 



it is just exactly and precisely as I said...

Full Outer Join

Shimmy, March 12, 2012 - 12:59 pm UTC

When I try to do a FULL OUTER JOIN, I am missing two rows (one row each from the test tables), but when I do a RIGHT OUTER JOIN or LEFT OUTER JOIN, I get the right results. Am I doing something wrong with my FULL OUTER JOIN?
SQL> SELECT * FROM V$VERSION;

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production    
PL/SQL Release 11.2.0.2.0 - Production                                          
CORE 11.2.0.2.0 Production                                                      
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production               
NLSRTL Version 11.2.0.2.0 - Production                                          

SQL> 
SQL> DROP TABLE SK_TABLE1 ;

Table dropped.

SQL> DROP TABLE SK_TABLE2 ;

Table dropped.

SQL> 
SQL> CREATE TABLE SK_TABLE1
  2  (ID    NUMBER(10),
  3   VALUE NUMBER(10));

Table created.

SQL> 
SQL> CREATE TABLE SK_TABLE2
  2  (ID    NUMBER(10),
  3   VALUE NUMBER(10));

Table created.

SQL> 
SQL> INSERT INTO SK_TABLE1
  2  VALUES
  3  (1, 100);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE1
  2  VALUES
  3  (1, 110);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE1
  2  VALUES
  3  (2, 200);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE1
  2  VALUES
  3  (3, 300);

1 row created.

SQL> 
SQL> 
SQL> INSERT INTO SK_TABLE1
  2  VALUES
  3  (5, 500);

1 row created.

SQL> 
SQL> 
SQL> INSERT INTO SK_TABLE2
  2  VALUES
  3  (1, 1000);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE2
  2  VALUES
  3  (2, 2000);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE2
  2  VALUES
  3  (3, 3000);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE2
  2  VALUES
  3  (3, 3100);

1 row created.

SQL> 
SQL> INSERT INTO SK_TABLE2
  2  VALUES
  3  (4, 4000);

1 row created.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL> SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2
  2  FROM SK_TABLE1 T1 FULL OUTER JOIN SK_TABLE2 T2
  3  ON (T1.ID = T2.ID);

      ID_1       ID_2    VALUE_1    VALUE_2                                     
---------- ---------- ---------- ----------                                     
         1          1        100       1000                                     
         1          1        110       1000                                     
         2          2        200       2000                                     
         3          3        300       3000                                     
         3          3        300       3100                                     

SQL> 
SQL> SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2
  2  FROM SK_TABLE1 T1 RIGHT OUTER JOIN SK_TABLE2 T2
  3  ON (T1.ID = T2.ID);

      ID_1       ID_2    VALUE_1    VALUE_2                                     
---------- ---------- ---------- ----------                                     
         1          1        100       1000                                     
         1          1        110       1000                                     
         2          2        200       2000                                     
         3          3        300       3100                                     
         3          3        300       3000                                     
                    4                  4000                                     

6 rows selected.

SQL> 
SQL> SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2
  2  FROM SK_TABLE1 T1 LEFT OUTER JOIN SK_TABLE2 T2
  3  ON (T1.ID = T2.ID);

      ID_1       ID_2    VALUE_1    VALUE_2                                     
---------- ---------- ---------- ----------                                     
         1          1        110       1000                                     
         1          1        100       1000                                     
         2          2        200       2000                                     
         3          3        300       3000                                     
         3          3        300       3100                                     
         5                   500                                                

6 rows selected.

Tom Kyte
March 12, 2012 - 1:57 pm UTC

ops$tkyte%ORA11GR2> select * from v$version;

BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

ops$tkyte%ORA11GR2> SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2
  2    FROM SK_TABLE1 T1 FULL OUTER JOIN SK_TABLE2 T2
  3      ON (T1.ID = T2.ID);

      ID_1       ID_2    VALUE_1    VALUE_2
---------- ---------- ---------- ----------
         1          1        110       1000
         1          1        100       1000
         2          2        200       2000
         3          3        300       3000
         3          3        300       3100
                    4                  4000
         5                   500

7 rows selected.



please share with us the output of this query:


select name || ' = ' || value from v$parameter where isdefault = 'FALSE';


and post the plan used (use dbms_xplan.display_cursor to get it)
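Independent of the optimizer issue, the expected seven-row result can be cross-checked with the classic emulation of FULL OUTER JOIN: a LEFT OUTER JOIN unioned with the unmatched rows of the right table. A sketch using SQLite via Python (older SQLite has no native FULL OUTER JOIN), with the same SK_TABLE1/SK_TABLE2 data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table sk_table1 (id integer, value integer);
    create table sk_table2 (id integer, value integer);
    insert into sk_table1 values (1,100),(1,110),(2,200),(3,300),(5,500);
    insert into sk_table2 values (1,1000),(2,2000),(3,3000),(3,3100),(4,4000);
""")

# FULL OUTER JOIN emulated as: all left rows (matched or not)
# plus the right rows that have no partner on the left
rows = conn.execute("""
    select t1.id, t2.id, t1.value, t2.value
      from sk_table1 t1 left outer join sk_table2 t2 on t1.id = t2.id
    union all
    select t1.id, t2.id, t1.value, t2.value
      from sk_table2 t2 left outer join sk_table1 t1 on t1.id = t2.id
     where t1.id is null
""").fetchall()
print(len(rows))  # 7, matching the correct full outer join above
```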

Full Outer Join

A reader, March 12, 2012 - 2:40 pm UTC

SQL> select name || ' = ' || value from v$parameter where isdefault = 'FALSE';

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
processes = 4780                                                                
sessions = 7192                                                                 
timed_statistics = TRUE                                                         
sga_max_size = 3221225472                                                       
java_pool_size = 536870912                                                      
streams_pool_size = 100663296                                                   
shared_pool_reserved_size = 95783360                                            
nls_date_format = DD-MON-RR                                                     
disk_asynch_io = TRUE                                                           
dbwr_io_slaves = 0                                                              
sga_target = 3221225472                                                         

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
control_files = /db/RMD2/oracontrol01/control01.ctl, /db/RMD2/oracontrol02/contr
ol02.ctl, /db/RMD2/oracontrol03/control03.ctl                                   
                                                                                
db_file_name_convert = /db/RMPR, /db/RMD2                                       
log_file_name_convert = /db/RMPR, /db/RMD2                                      
db_block_size = 16384                                                           
db_keep_cache_size = 33554432                                                   
db_recycle_cache_size = 167772160                                               
db_writer_processes = 12                                                        
db_cache_advice = on                                                            
compatible = 11.2.0.2                                                           

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
log_archive_dest = /db/RMD2/oraarchive                                          
log_archive_format = arch%t_%s_%r.log                                           
log_buffer = 6291456                                                            
log_checkpoint_interval = 100000                                                
db_files = 2560                                                                 
db_file_multiblock_read_count = 8                                               
db_create_file_dest = /db/RMD2/oradata10                                        
db_create_online_log_dest_1 = /db/RMD2/oradata10                                
fast_start_mttr_target = 303                                                    
log_checkpoints_to_alert = TRUE                                                 
recovery_parallelism = 24                                                       

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
dml_locks = 5000                                                                
transactions = 3200                                                             
undo_management = AUTO                                                          
undo_tablespace = UNDOTBS1                                                      
undo_retention = 10800                                                          
recyclebin = OFF                                                                
_kgl_large_heap_warning_threshold = 58720256                                    
sec_case_sensitive_logon = FALSE                                                
remote_login_passwordfile = EXCLUSIVE                                           
db_domain =                                                                     
global_names = FALSE                                                            

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
instance_name = RMD2                                                            
session_cached_cursors = 95                                                     
remote_dependencies_mode = SIGNATURE                                            
utl_file_dir = /in/RMD2/oracle/admin/logs                                       
job_queue_processes = 8                                                         
cursor_sharing = similar                                                        
audit_file_dest = /in/RMD2/oracle/admin/adump                                   
object_cache_optimal_size = 1024000                                             
object_cache_max_size_percent = 10                                              
session_max_open_files = 95                                                     
optimizer_features_enable = 11.2.0.2                                            

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
audit_trail = DB                                                                
sort_area_size = 2097152                                                        
sort_area_retained_size = 2097152                                               
db_name = RMD2                                                                  
open_cursors = 3000                                                             
optimizer_mode = CHOOSE                                                         
_unnest_subquery = FALSE                                                        
_optimizer_cost_based_transformation = off                                      
optimizer_index_cost_adj = 1                                                    
optimizer_index_caching = 95                                                    
query_rewrite_enabled = TRUE                                                    

NAME||'='||VALUE                                                                
--------------------------------------------------------------------------------
pga_aggregate_target = 525165824                                                
_allow_level_without_connect_by = TRUE                                          
aq_tm_processes = 1                                                             
diagnostic_dest = /in/RMD2/oracle                                               
max_dump_file_size = 502400                                                     

69 rows selected.

SQL> SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2
  2  FROM SK_TABLE1 T1 FULL OUTER JOIN SK_TABLE2 T2
  3  ON (T1.ID = T2.ID);

      ID_1       ID_2    VALUE_1    VALUE_2                                     
---------- ---------- ---------- ----------                                     
         1          1        100       1000                                     
         1          1        110       1000                                     
         2          2        200       2000                                     
         3          3        300       3000                                     
         3          3        300       3100                                     

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
SQL_ID  49w2pd99xzqqn, child number 0                                           
-------------------------------------                                           
SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2 FROM          
SK_TABLE1 T1 FULL OUTER JOIN SK_TABLE2 T2 ON (T1.ID = T2.ID)                    
                                                                                
Plan hash value: 2151566655                                                     
                                                                                
------------------------------------------                                      
| Id  | Operation            | Name      |                                      
------------------------------------------                                      
|   0 | SELECT STATEMENT     |           |                                      

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
|   1 |  VIEW                | VW_FOJ_0  |                                      
|   2 |   MERGE JOIN         |           |                                      
|   3 |    SORT JOIN         |           |                                      
|   4 |     TABLE ACCESS FULL| SK_TABLE2 |                                      
|*  5 |    SORT JOIN         |           |                                      
|   6 |     TABLE ACCESS FULL| SK_TABLE1 |                                      
------------------------------------------                                      
                                                                                
Predicate Information (identified by operation id):                             
---------------------------------------------------                             
                                                                                

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
   5 - access("T1"."ID"="T2"."ID")                                              
       filter("T1"."ID"="T2"."ID")                                              
                                                                                
Note                                                                            
-----                                                                           
   - rule based optimizer used (consider using cbo)                             
                                                                                

29 rows selected.


The strange thing is, after I analyzed the tables it started showing the results correctly. Any idea why it won't do a full outer join without analyzing the tables?


BEGIN
  SYS.DBMS_STATS.GATHER_TABLE_STATS (
      OwnName           => 'CUSTOM'
     ,TabName           => 'SK_TABLE1'
     ,Estimate_Percent  => 10
     ,Method_Opt        => 'FOR ALL COLUMNS SIZE 1'
     ,Degree            => 4
     ,Cascade           => FALSE
     ,No_Invalidate     => FALSE);
END;
/
PL/SQL procedure successfully completed.

BEGIN
  SYS.DBMS_STATS.GATHER_TABLE_STATS (
      OwnName           => 'CUSTOM'
     ,TabName           => 'SK_TABLE2'
     ,Estimate_Percent  => 10
     ,Method_Opt        => 'FOR ALL COLUMNS SIZE 1'
     ,Degree            => 4
     ,Cascade           => FALSE
     ,No_Invalidate     => FALSE);
END;
/
PL/SQL procedure successfully completed.


SQL> SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2
  2  FROM SK_TABLE1 T1 FULL OUTER JOIN SK_TABLE2 T2
  3  ON (T1.ID = T2.ID);

      ID_1       ID_2    VALUE_1    VALUE_2                                     
---------- ---------- ---------- ----------                                     
         1          1        110       1000                                     
         1          1        100       1000                                     
         2          2        200       2000                                     
         3          3        300       3000                                     
         3          3        300       3100                                     
                    4                  4000                                     
         5                   500                                                

7 rows selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
SQL_ID  49w2pd99xzqqn, child number 0                                           
-------------------------------------                                           
SELECT T1.ID ID_1, T2.ID ID_2, T1.VALUE VALUE_1, T2.VALUE VALUE_2 FROM          
SK_TABLE1 T1 FULL OUTER JOIN SK_TABLE2 T2 ON (T1.ID = T2.ID)                    
                                                                                
Plan hash value: 502850176                                                      
                                                                                
-----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |       |       |    49 (100)|          |
|   1 |  VIEW                 | VW_FOJ_0  |     6 |   312 |    49   (3)| 00:00:01 |
|*  2 |   HASH JOIN FULL OUTER|           |     6 |    72 |    49   (3)| 00:00:01 |
|   3 |    TABLE ACCESS FULL  | SK_TABLE1 |     5 |    30 |    24   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | SK_TABLE2 |     5 |    30 |    24   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):                             
---------------------------------------------------                             
                                                                                
   2 - access("T1"."ID"="T2"."ID")                                              
                                                                                




Tom Kyte
March 12, 2012 - 2:52 pm UTC

you are using the deprecated, unsupported rule based optimizer:


optimizer_mode = CHOOSE



You want to unset virtually ALL OF THOSE init.ora parameters, especially "_" ones and optimizer ones.


this is an RBO issue, you shouldn't be using the RBO at all (and your init.ora scares me a bit, well, a lot)

cursor sharing = similar - ouch, ugh, man oh man

SQL Query

A reader, March 15, 2012 - 5:15 pm UTC

Hi Tom,

I have a requirement like this:
CREATE TABLE demo(col1 varchar2(10), col2 varchar2(10), col3 varchar2(10), col4 varchar2(10), col5 varchar2(10), col6 varchar2(10));

--Here there is no occurrence of col3='S', so select all rows
insert into demo values('S1','00','I','37','P',NULL);
insert into demo values('S1','00','E','07','E',NULL);
insert into demo values('S1','20','E','42','M','X');

--Here the number of col3='S' for group '20' is equal to number of 'S' in two rows for the same group number with satisfying col4, col5,col6. 
--So skip this set
insert into demo values('S2','20','S','37','P',NULL);
insert into demo values('S2','20','S','99','R','Y');
insert into demo values('S2','10','I','37','P',NULL);

--Here there are two 'S' but only one of the rows satisfies the col4,col5,col6 combination so select all the rows of this set
insert into demo values('S3','00','I','37','P',NULL);
insert into demo values('S3','00','I','37','P',NULL);
insert into demo values('S3','00','S','97','AXP',NULL);
insert into demo values('S3','00','S','37','X','Z');

-- Here col3 = 'S' occurs for two columns but in different groups and they satisfy the matching for second col2 value. so skip all rows of this set
insert into demo values('S4','20','S','97','AOP',NULL);
insert into demo values('S4','10','S','97','AXP',NULL);
insert into demo values('S4','10','E','99','BEP','GRP');


my expected result:
S1   00   I   37   P   
S1   00   E   07   E   
S1   20   E   42   M   X
S3   00   I   37   P   
S3   00   I   37   P   
S3   00   S   97   AXP   
S3   00   S   37   X   Z
S4   20   S   97   AOP  


I will send the parameters for col4 as
p_37 = 'P'
p_07 = 'E'
p_20 = 'R,T'
p_99 = 'A,X,M'
p_97 = 'AXP, BEP'


AS you can see that the parameters can be in comma-separated form so we need to break and match if the values exists:

IF col6 is null then parameter should be equal to col5
IF col6 is not null then parameter should be between col5 and col6

Please help.

Tom Kyte
March 16, 2012 - 8:09 am UTC

sorry but this is as clear as mud.

--Here the number of col3='S' for group '20' is equal to number of 'S' in two
rows for the same group number with satisfying col4, col5,col6.
--So skip this set


????????????????????

what is a set? you never defined that, I presume you mean "col1" identifies a set - if so, why not say so?

what is 'group 20'. I see no column that refers to group by name. I see a column that has 20 in it sometimes, but it has 20 in other sets of your example so I don't know what its relevance is or is not.


--Here the number of col3='S' for group '20' is equal to number of 'S' in two 
rows for the same group number with satisfying col4, col5,col6. 
--So skip this set


double ??????????????

what is 'group 20'? even so, is 20 special once you've described it? would the same apply for group 30 or group 42?

is equal to number of 'S' in two
rows for the same group number with satisfying col4, col5,col6.


????????????????????????????????????????????????????????????

The number of 'S' - I don't see any 'S'

what is a 'satisfying col4, col5, col6'




I'm stopping right here. None of this makes any sense at all.

Just a mistake in result

A reader, March 15, 2012 - 5:16 pm UTC

result should be

S1   00   I   37   P   
S1   00   E   07   E   
S1   20   E   42   M   X
S3   00   I   37   P   
S3   00   I   37   P   
S3   00   S   97   AXP   
S3   00   S   37   X   Z

Follow up

A reader, March 16, 2012 - 11:47 am UTC

Ok let me explain:

All the distinct col1 values define a set of rows.
The logic applies to all col2 values. None of them have special handling.


Here if you see the parameters passed:

p_37 = 'P'
p_07 = 'E'
p_20 = 'R,T'
p_99 = 'A,X,M'
p_97 = 'AXP, BEP'

Lets have two cases to understand:

Case 1
Data was:

--Here the number of col3='S' for group '20' is equal to number of 'S' in two rows for the same
group number with satisfying col4, col5,col6.
--So skip this set
insert into demo values('S2','20','S','37','P',NULL);
insert into demo values('S2','20','S','99','R','Y');
insert into demo values('S2','10','I','37','P',NULL);



Now I group this set of data based on col2; you can see I have two groups: 10 and 20. Now look at the values of col4: 37 and 99.
These parameter values have a logic (in case col6 was NULL then equal. If col6 was not null, then between)


Now I have passed the parameter: if col4 = 37 then the value must be equal to or between p_37 ('P'), which it satisfied (in case col6 was NULL then equal; if col6 was not null, then between).
If col4 = 99 then the value must be equal to or between p_99 ('A,X,M'), which it satisfied ('X' falls between 'R' and 'Y') (in case col6 was NULL then equal; if col6 was not null, then between).
Since this has been satisfied and col3 = 'S', we don't consider the other group (10 in this case) either, and we don't select the rows where col1 = 'S2'.


Case 2:

--Here there are two 'S' but only one of the rows satisfies the col4,col5,col6 combination so select
all the rows of this set
insert into demo values('S3','00','I','37','P',NULL);
insert into demo values('S3','00','I','37','P',NULL);
insert into demo values('S3','00','S','97','AXP',NULL);
insert into demo values('S3','00','S','37','X','Z');



Now I group this set of data based on col2; you can see I have one group: 00. Now look at the values of col4: 37 and 97.
These parameter values have a logic (in case col6 was NULL then equal. If col6 was not null, then between)


Now I have passed the parameter: if col4 = 37 then the value must be equal to or between p_37 ('P'), which it satisfied (in case col6 was NULL then equal; if col6 was not null, then between).
If col4 = 97 then the value must be equal to or between p_97 ('AXP,BEP'), which it satisfied (in case col6 was NULL then equal; if col6 was not null, then between).
If col4 = 37 then the value must be equal to or between p_37 ('P'), which it did not satisfy (in case col6 was NULL then equal; if col6 was not null, then between).
So for col3 = 'S' we have two rows, of which one was satisfied and the second was not, so we select the result where col1 = 'S3'.



Something of this type:

with t as(
select col1, col2, col3,col4, 
count(case when col3 = 'S' THEN col4 END) over (partition by col1, col2, col3) cnt_S_out
,count(case when col3 ='S'
            and (case when col4 = '37' and col5 in('P') then 1 
            when col4 = '99' AND 'A,X,M' between col5 and col6 then 1 end) = 1 THEN col4 END) over (partition by col1, col2, col3)
            cnt_S
from demo)
select demo.* from demo where not  exists(
select 1 from t
where t.cnt_S = t.cnt_S_out
and t.col3='S'
and t.col1=demo.col1
);



As you can see I am not able to break the comma separated part to check if it falls between col5 and col6
,count(case when col3 ='S'
            and (case when col4 = '37' and col5 in('P') then 1 
            when col4 = '99' AND 'A,X,M' between col5 and col6 then 1 end) = 1 THEN col4 END) over (partition by col1, col2, col3)
            cnt_S


Also I want to parameterize the query to look like:
,count(case when col3 ='S'
            and (case when col4 = '37' and col5 = p_37 then 1 
            when col4 = '99' AND p_99 between col5 and col6 then 1 end) = 1 THEN col4 END) over (partition by col1, col2, col3)
            cnt_S




Hope you get a better picture now.


Thanks for your time.

Tom Kyte
March 16, 2012 - 12:38 pm UTC

I did not follow this all of the way, but..... I can address this:


when col4 = '99' AND p_99 between col5 and col6 then 1 end)


In the following, :x is your p_99

ops$tkyte%ORA11GR2> create table t ( col4 varchar2(2), col5 varchar2(5), col6 varchar2(5) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t values ( '99', 'R', 'Y' );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( '99', 'C', 'F' );

1 row created.

ops$tkyte%ORA11GR2> variable x varchar2(20)
ops$tkyte%ORA11GR2> exec :x := 'A,X,M';

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select case when col4 = '99' and exists (select null
  2                                             from (select '''' || :x || '''' txt from dual)
  3                                            where
  4                                            trim( substr (txt, instr (txt, ',', 1, level  ) + 1, instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 ) )
  5                                            between col5 and nvl(col6,col5)
  6                                          connect by level <= length(:x)-length(replace(:x,',',''))+1)
  7          then 1
  8          else 0
  9          end
 10  from t
 11  /

CASEWHENCOL4='99'ANDEXISTS(SELECTNULLFROM(SELECT''''||:X||''''TXTFROMDUAL)WHERE
-------------------------------------------------------------------------------
                                                                              1
                                                                              0



see

http://asktom.oracle.com/Misc/varying-in-lists.html

for 'how this works'
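Outside the database, the splitting-and-matching idea behind that query can be sketched in Python (the helper names here are illustrative, not from the thread): break the comma-separated parameter into tokens, then apply the rule that a NULL col6 means an equality test against col5, while a non-NULL col6 means a between test.

```python
def split_csv(txt):
    # Mimic the instr/substr walk over the delimited string: one trimmed token per comma piece.
    return [piece.strip() for piece in txt.split(",")]

def matches(param_csv, col5, col6):
    # Rule from the thread: if col6 is null, compare for equality with col5;
    # otherwise test whether any token falls between col5 and col6.
    for token in split_csv(param_csv):
        if col6 is None:
            if token == col5:
                return True
        elif col5 <= token <= col6:
            return True
    return False

# 'X' from 'A,X,M' falls between 'R' and 'Y', but nothing falls between 'C' and 'F'.
print(matches("A,X,M", "R", "Y"))  # True
print(matches("A,X,M", "C", "F"))  # False
```

Those two rows correspond to the 1 and 0 in the query output above.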

Followup on the above query

A reader, April 11, 2012 - 8:22 pm UTC

Hi Tom,

Apologies for replying late. Thanks a lot for the elegant solution. I have just one concern with the current approach.

In my case I have multiple col4 values, 99 being the example you took. I can have col4 = 12, 34, 45, 98, 99...

Now for each of these col4 values I have parameters mapped: for col4 = 12 I have parameter p12 (single or comma-separated values), and likewise.

So with your solution, I feel I have to write the same query many times, like:

select case
when EDITNUM = '10' and exists (
select null from (select ','||:P10||',' txt from dual)
where trim( substr (txt, instr (txt, ',', 1, level ) + 1, instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 ) )
between SCCITEMVALUE1 and nvl(SCCITEMVALUE2,SCCITEMVALUE1)
connect by level <= length(:P10)-length(replace(:P10,',',''))+1
)
then 1
when EDITNUM = '15' and exists (
select null from (select ','||:P15||',' txt from dual)
where trim( substr (txt, instr (txt, ',', 1, level ) + 1, instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 ) )
between SCCITEMVALUE1 and nvl(SCCITEMVALUE2,SCCITEMVALUE1)
connect by level <= length(:P15)-length(replace(:P15,',',''))+1
)
...
end;

In case I have 25 col4 values, is it advisable to write almost the same case statement 25 times?

Could you please advise.
Tom Kyte
April 12, 2012 - 7:38 am UTC

benchmark it.

use sql trace and tkprof - evaluate how it performs on your data.


and consider modeling your data to answer your questions, rather than fighting the model to get your answers in the future ;) when you have to go to such extremes to get an answer out - it almost always is due to "model doesn't support my real needs". Usually this happens when a developer gets "tricky" and does something for their ease and comfort.

Like storing comma separated lists, like using columns named something_1, something_2, something_3, and so on

Minor Correction

AndyP, April 16, 2012 - 5:48 am UTC


Tom, I hope you don't mind if I point out that, in your response of 16 March above, the construction you use in parsing the comma-separated string does not do what I think you were intending. By chance it does work with this data, but only because it is the middle letter of the 3 in the variable that falls into the range. Specifically, where you have:
select '''' || :x || '''' txt from dual

you probably meant to have:
select ',' || :x || ',' txt from dual


Just in case someone goes for a cut'n'paste solution!


Tom Kyte
April 16, 2012 - 7:39 am UTC

you are absolutely correct! thanks much

Data matching between Tables

Rajeshwaran, Jeyabal, April 17, 2012 - 1:43 am UTC

Tom:

I have two tables 'T1' and 'T2'. I want to retrieve data from T1 where the data in T2 completely matches with T1.
I have one approach below; are there any other shortcuts available?

drop table t1 purge;
drop table t2 purge;

create table t1(x varchar2(20),y number);
insert into t1 values('Test1',10);
insert into t1 values('Test1',20);
insert into t1 values('Test1',30);
insert into t1 values('Test2',10);
insert into t1 values('Test2',20);
create table t2(x number,y date);
insert into t2 values(10,sysdate);
insert into t2 values(20,sysdate);
commit;

create or replace type vcarray is table of number;
/

rajesh@ORA10GR2> select t1.x
  2  from (  select x,cast(collect(y) as vcarray) as y_vals
  3      from t1
  4      group by x ) t1,
  5      ( select cast(collect(x) as vcarray) as x_vals from t2) t2
  6  where  cardinality(t1.y_vals multiset except t2.x_vals) = 0
  7  /

X
--------------------
Test2

Elapsed: 00:00:00.07
rajesh@ORA10GR2>

Tom Kyte
April 17, 2012 - 5:25 am UTC

In 11g, it could be:

ops$tkyte%ORA11GR2> select x
  2    from t1
  3   group by x
  4  having listagg(y,'/') within group (order by y) =
  5         (select listagg(x,'/') within group (order by x) from t2)
  6  /

X
--------------------
Test2



In 10g, it could be



ops$tkyte%ORA11GR2> select x
  2    from ( select x, max(sys_connect_by_path(y,'/')) str
  3             from (select x, y, row_number() over (partition by x order by y) rn
  4                     from t1 )
  5            start with rn = 1
  6          connect by prior rn=rn-1 and prior x = x
  7            group by x ) t1,
  8             ( select max(sys_connect_by_path(x,'/'))str
  9             from (select x, row_number() over (order by x) rn
 10                     from t2)
 11            start with rn=1
 12          connect by prior rn=rn-1 ) t2
 13   where t1.str = t2.str
 14  /

X
--------------------
Test2

ops$tkyte%ORA11GR2> 
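Both versions reduce each group to a canonical ordered string and compare it to the reference set. The same multiset-equality test can be sketched outside SQL in Python (the table data is copied from the example above; the function name is made up):

```python
from collections import Counter

# t1: (x, y) rows, grouped by x; t2: the reference values to match against.
t1 = [("Test1", 10), ("Test1", 20), ("Test1", 30), ("Test2", 10), ("Test2", 20)]
t2 = [10, 20]

def matching_groups(rows, reference):
    # A group matches only when its values form exactly the same multiset as the
    # reference (duplicates and all) - the listagg comparison does the same thing.
    groups = {}
    for x, y in rows:
        groups.setdefault(x, []).append(y)
    ref = Counter(reference)
    return sorted(x for x, ys in groups.items() if Counter(ys) == ref)

print(matching_groups(t1, t2))  # ['Test2']
```

Test1 has an extra value (30), so only Test2 survives, matching the query output.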

Other way(s)

A reader, April 18, 2012 - 8:56 pm UTC

SQL> 
SQL> with base_recs as
  2   (select x,
  3           row_number() over(partition by x order by 1) as rn,
  4           count(*) over() as t2_tot_cnt
  5    from   t2),
  6  check_recs as
  7   (select x,
  8           y,
  9           row_number() over(partition by x, y order by 1) as rn
 10    from   t1)
 11  select x
 12  from   (select x,
 13                 y,
 14                 bx,
 15                 t1_match_cnt,
 16                 t1_tot_cnt,
 17                 t2_tot_cnt
 18          from   (select c.x,
 19                         c.y,
 20                         count(case
 21                                 when c.y = b.x then
 22                                  1
 23                               end) over(partition by c.x) as t1_match_cnt,
 24                         count(*) over(partition by c.x) as t1_tot_cnt,
 25                         b.x bx,
 26                         b.t2_tot_cnt
 27                  from   check_recs c
 28                  left   join base_recs b on c.y = b.x and c.rn = b.rn)
 29          where  t1_tot_cnt = t2_tot_cnt and t1_match_cnt = t2_tot_cnt)
 30  group  by x;
 
X
--------------------
Test2
 
Executed in 0.032 seconds
SQL> -- The above sql takes care of
SQL> -- t2 containing duplicate values in the fields considered for matching (but assuming only not null values) - same number of duplicates present in t1 is considered as match
SQL> -- t1 containing less or more than corresponding t2 records - this is not considered as match
SQL> truncate table t1;
 
Table truncated
 
Executed in 0.016 seconds
SQL> truncate table t2;
 
Table truncated
 
Executed in 0.015 seconds
SQL> insert into t2
  2  select level as x, sysdate
  3  from dual
  4  connect by level <= 10000;
 
10000 rows inserted
 
Executed in 0.015 seconds
SQL> insert into t2
  2  select * from t2;
 
10000 rows inserted
 
Executed in 0.015 seconds
SQL> insert into t1
  2  select 'Test1' as x, x as y from t2
  3  union all
  4  select 'Test1' as x, 9999999 as y from dual
  5  union all
  6  select 'Test2' as x, x from t2
  7  union all
  8  select 'Test3', x from t2 where rownum < 1000
  9  union all
 10  select 'Test4', x from t2
 11  union all
 12  select 'Test4', x from t2;
 
81000 rows inserted
 
Executed in 0.093 seconds
SQL> insert into t1
  2  select 'Test5', x from t2;
 
20000 rows inserted
 
Executed in 0.016 seconds
SQL> delete from t1
  2  where x = 'Test5' and y = 1 and rownum  = 1;
 
1 row deleted
 
Executed in 0.015 seconds
SQL> insert into t1 (x,y)
  2  values ('Test5',2);
 
1 row inserted
 
Executed in 0 seconds
SQL> commit;
 
Commit complete
 
Executed in 0.015 seconds
SQL> with base_recs as
  2   (select x,
  3           row_number() over(partition by x order by 1) as rn,
  4           count(*) over() as t2_tot_cnt
  5    from   t2),
  6  check_recs as
  7   (select x,
  8           y,
  9           row_number() over(partition by x, y order by 1) as rn
 10    from   t1)
 11  select x
 12  from   (select x,
 13                 y,
 14                 bx,
 15                 t1_match_cnt,
 16                 t1_tot_cnt,
 17                 t2_tot_cnt
 18          from   (select c.x,
 19                         c.y,
 20                         count(case
 21                                 when c.y = b.x then
 22                                  1
 23                               end) over(partition by c.x) as t1_match_cnt,
 24                         count(*) over(partition by c.x) as t1_tot_cnt,
 25                         b.x bx,
 26                         b.t2_tot_cnt
 27                  from   check_recs c
 28                  left   join base_recs b on c.y = b.x and c.rn = b.rn)
 29          where  t1_tot_cnt = t2_tot_cnt and t1_match_cnt = t2_tot_cnt)
 30  group  by x;
 
X
--------------------
Test2
 
Executed in 0.344 seconds
 
SQL> 

Best way for select

A reader, April 29, 2012 - 3:40 am UTC

hi Tom,

I have a table which has around 3 million rows..

select * from table1 order by id

this type of query takes around 15 sec. What could be the best way to reduce the execution time.

Thanks a lot.
Tom Kyte
April 30, 2012 - 8:15 am UTC

well, 15 seconds to retrieve 3,000,000 rows after sorting them all - so, we have the IO time, the sort time, the transfer of 3,000,000 rows time - seems pretty darn reasonable to me so far.

how fast are you expecting this to be exactly? What is your goal here?

and what is your array fetch size - it should be between 100 and 500.

Inline Table function - Yields wrong results

A reader, May 05, 2012 - 4:54 am UTC

Tom:

I was writing some queries for my application and found that the inline table function query yields improper values. I don't understand why it behaves like that; can you help me see where I am wrong with this query?

This is what my expected output is.
rajesh@ORA10GR2> column val format a25;
rajesh@ORA10GR2> column column_value format a35;
rajesh@ORA10GR2>
rajesh@ORA10GR2> select t.x,t.y,t2.val
  2  from t,(
  3    select x,y,
  4      sys_connect_by_path(y,',') as val
  5    from  (
  6    select t.*,
  7        row_number() over(partition by x order by y) as rnum
  8    from t
  9          )
 10    where connect_by_isleaf = 1
 11    start with rnum = 1
 12    connect by prior rnum = rnum -1 and prior x = x ) t2
 13  where t.x = t2.x;

         X Y          VAL
---------- ---------- -------------------------
         1 val1_3     ,val1_1,val1_2,val1_3
         1 val1_2     ,val1_1,val1_2,val1_3
         1 val1_1     ,val1_1,val1_2,val1_3
         2 val2_2     ,val1_1,val2_2
         2 val1_1     ,val1_1,val2_2

But when I code using the table function, I see output like this.
rajesh@ORA10GR2> select *
  2  from t t1, table(
  3        cast( multiset(select sys_connect_by_path(y,',')
  4        from (
  5        select t2.x,t2.y,rownum as r
  6        from t t2
  7        where t2.x = t1.x )
  8        where connect_by_isleaf = 1
  9        start with r = 1
 10        connect by prior r = r -1) as sys.odcivarchar2list) ) t2;

         X Y          COLUMN_VALUE
---------- ---------- -----------------------------------
         1 val1_1     ,val1_1,val1_2,val1_3,val1_1,val2_2
         1 val1_2     ,val1_1,val1_2,val1_3,val1_1,val2_2
         1 val1_3     ,val1_1,val1_2,val1_3,val1_1,val2_2
         2 val1_1     ,val1_1,val1_2,val1_3,val1_1,val2_2
         2 val2_2     ,val1_1,val1_2,val1_3,val1_1,val2_2

Elapsed: 00:00:00.03
rajesh@ORA10GR2>
rajesh@ORA10GR2>

Create table and sample data are below.
drop table t purge;
create table t(x number,y varchar2(10));

insert into t values(1,'val1_1');
insert into t values(1,'val1_2');
insert into t values(1,'val1_3');
insert into t values(2,'val1_1');
insert into t values(2,'val2_2');
commit;

Tom Kyte
May 06, 2012 - 3:19 pm UTC

ops$tkyte%ORA10GR2> select *
  2  from t t1, table(
  3        cast( multiset(select sys_connect_by_path(y,',')
  4        from (
  5        select t2.x,t2.y,rownum as r
  6        from t t2
  7        where t2.x = t1.x )
  8        where connect_by_isleaf = 1
  9        start with r = 1
 10        connect by prior r = r -1) as sys.odcivarchar2list) ) t2;
      where t2.x = t1.x )
                   *
ERROR at line 7:
ORA-00904: "T1"."X": invalid identifier



I'm 10.2.0.5 - what version are you?


but in any case - since the rows we would be assigning ROWNUM to in the second query would be ASSIGNED RANDOMLY - there is no ordering by Y taking place - why do you think they would be the same?

Inline Table function - Yields wrong results

Rajeshwaran, Jeyabal, May 06, 2012 - 11:57 pm UTC

1) but in any case - since the rows we would be assigning ROWNUM to in the second query would be ASSIGNED RANDOMLY - there is no ordering by Y taking place - why do you think they would be the same? - I am not concerned with the ordering of Y; all I need is the string concatenation for a given value of X.

2)I'm 10.2.0.5 - what version are you? - I am on 10.2.0.1

rajesh@ORA10GR2> column column_value format a35;
rajesh@ORA10GR2> select *
  2      from t t1, table(
  3            cast( multiset(select sys_connect_by_path(y,',')
  4            from (
  5            select t2.x,t2.y,rownum as r
  6            from t t2
  7            where t2.x = t1.x )
  8            where connect_by_isleaf = 1
  9            start with r = 1
 10           connect by prior r = r -1) as sys.odcivarchar2list) ) t2;

         X Y          COLUMN_VALUE
---------- ---------- -----------------------------------
         1 val1_1     ,val1_1,val1_2,val1_3,val1_1,val2_2
         1 val1_2     ,val1_1,val1_2,val1_3,val1_1,val2_2
         1 val1_3     ,val1_1,val1_2,val1_3,val1_1,val2_2
         2 val1_1     ,val1_1,val1_2,val1_3,val1_1,val2_2
         2 val2_2     ,val1_1,val1_2,val1_3,val1_1,val2_2

Elapsed: 00:00:00.03
rajesh@ORA10GR2>
rajesh@ORA10GR2> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

Elapsed: 00:00:00.04
rajesh@ORA10GR2>

Tom Kyte
May 07, 2012 - 5:19 am UTC

but do you see that without an order by, rownum is being assigned randomly in this query, whereas in the analytic query you are SORTING and row_number is being assigned differently?


There is a bug that was fixed that will make this query not parse in current releases. T1.X is NOT in scope on line 7, that will stop working as soon as you patch.


In short:

these queries are not even remotely similar. I would not expect them to have the same answer.

this particular query is not syntactically correct and will cease parsing as soon as you patch.


so, ignore this query, it is not useful

amita, June 18, 2012 - 6:03 am UTC

my error is
oracle initialization or shut down in progress

Tom Kyte
June 18, 2012 - 9:10 am UTC

ummm, I'm sorry?


what do you want me to say - I have no idea what your situation is.


If you are trying to restart this database, just use

SQL> startup force;


Copy data from one schema to another

A reader, June 18, 2012 - 9:39 pm UTC

hi Tom,


I have a requirement where I have to copy 2 million rows from one schema to another. I cannot do exp/imp as that is not allowed due to some policy issues. There are 2-3 schemas I have to copy to, so I wanted a generic approach.

I wanted to a simple

INSERT INTO SCHEMA1.TABLE select * from SCHEMA2.TABLE

but the redo size is a concern, so I thought to break it into smaller chunks. How can I make a DYNAMIC BULK COLLECT to insert into the table?

Please suggest.
Tom Kyte
June 19, 2012 - 8:35 am UTC

2,000,000 rows is pretty teeny tiny.

but the redo size seems to be on lower side so I thought to break into smaller
chunks. How can I make a DYNAMIC BULK COLLECT to insert into the table?


if you want to make that generate MORE redo, please break it into smaller statements or bulk collect and do it procedurally. NOTE: I said *if you want to generate MORE redo*


If you are interested in doing this with minimal redo, you would use insert /*+ APPEND */ into t1 select * from t2;

that will do a direct path load (assuming no triggers on table t1 and no foreign keys in place). That direct path load will skip undo for the table. That in turn will skip the redo generated for that undo.

If you want, and you understand the ramifications of, you can also set t1 "nologging", that will skip redo and undo on the table entirely.

Further, if there are indexes, set them unusable before you do this and rebuild them (optionally with nologging) afterwards.


But the amount of redo generated by a small number of rows like this shouldn't be too onerous. A couple hundred megabytes maybe? Nothing.

Thanks tom

A reader, June 19, 2012 - 1:02 pm UTC

Thanks for the tip.. I will use the direct path.

help on this

venkat, June 25, 2012 - 4:08 am UTC

just trying to understand the use of the cast function - can you please explain the difference between the following

select to_number('1') x from dual

select cast('1' as integer) x from dual
Tom Kyte
June 25, 2012 - 9:45 am UTC

to_number is a function that takes a string as input and, expecting a string, converts that string into a number field using the default NLS number settings.

cast( '1' as integer ) is for all intents and purposes, the "same" here - given your inputs.

they could be different - since your use of integer is more specific than to_number, to_number would return a number with decimals whereas your cast would not. So the cast is doing a little more editing of the data.

ops$tkyte%ORA11GR2> select to_number( '1.1' ), cast( '1.1' as integer ) from dual;

TO_NUMBER('1.1') CAST('1.1'ASINTEGER)
---------------- --------------------
             1.1                    1

ops$tkyte%ORA11GR2> 

SQL Query

TCS, July 25, 2012 - 12:27 am UTC

HI,
I have a source Rate table like :

From Currency   To Currency   Conversion Rate
A               B             2
B               C             2.5
D               B             3
B               K             1.4
E               F             3.1
R               C             2.2
A               L             1.9


I want to build the target rate table look like:

From Currency   To Currency   Conversion Rate
A               B             2
B               C             2.5
A               C             5
D               B             3
B               K             1.4
A               K             2.8
E               F             3.1
R               C             2.2
A               L             1.9


Meaning there is no direct conversion from A to C, but there is an indirect path: A to B and B to C.

A C 5

Also A to B and B to K, so we get A to K:
A K 2.8

Please suggest a single SQL query to solve this problem...

Tom Kyte
July 30, 2012 - 9:25 am UTC

no creates
no inserts
no look

This might do it...

Paul, July 25, 2012 - 6:38 pm UTC

You have several potential hierarchies starting with each from currency.

It looks to me like you need to preserve the first conversion rate for each hierarchy and if you are at a level in the hierarchy > 1 then multiply the root conversion rate with the current conversion rate.


create table cumm_conversion (curr_from varchar2(5), curr_to varchar2(50), conv_rate number);

insert into cumm_conversion values ('A','B',2);
insert into cumm_conversion values ('B','C',2.5);
insert into cumm_conversion values ('D','B',3);
insert into cumm_conversion values ('B','K',1.4);
insert into cumm_conversion values ('E','F',3.1);
insert into cumm_conversion values ('R','C',2.2);
insert into cumm_conversion values ('A','L',1.9);


SQL> select * from cumm_conversion
  2  ;

CURR_ CURR_  CONV_RATE
----- ----- ----------
A     B              2
B     C            2.5
D     B              3
B     K            1.4
E     F            3.1
R     C            2.2
A     L            1.9


SQL>
 select curr_root root, curr_from, curr_to, 
 first_value(conv_rate)  over (partition by curr_root) curr_first, l, conv_rate,
 case when l > 1 then
   first_value(conv_rate) over (partition by curr_root)*conv_rate
 end linked_rate
 from (
  select connect_by_root curr_from as curr_root, curr_from, curr_to, level l,
    conv_rate
   from cumm_conversion
  connect by prior curr_to=curr_from)
SQL> /

ROOT  CURR_ CURR_ CURR_FIRST          L  CONV_RATE LINKED_RATE
----- ----- ----- ---------- ---------- ---------- -----------
A     A     B              2          1          2
A     B     C              2          2        2.5           5
A     B     K              2          2        1.4         2.8
A     A     L              2          1        1.9
B     B     C            2.5          1        2.5
B     B     K            2.5          1        1.4
D     D     B              3          1          3
D     B     C              3          2        2.5         7.5
D     B     K              3          2        1.4         4.2
E     E     F            3.1          1        3.1
R     R     C            2.2          1        2.2



So A to B is a conversion of 2
B to K is a conversion of 1.4
A to K is 2*1.4
D to B is a conversion of 3
B to C is a conversion of 2.5
D to C is a conversion of 3*2.5

Try something like that.

I did not test that for levels > 2.

Paul, July 25, 2012 - 8:03 pm UTC

That might not quite meet the full requirement

sql query

TCS, July 26, 2012 - 1:08 am UTC

It is working fine up to level 2.

But in the case where A to B is 2, B to C is 3 and C to D is 5, A to D should be 2*3*5 = 30, and the query is not giving the proper value.

Right, I realized it wouldn't

Paul, July 26, 2012 - 1:57 pm UTC

Right, I mentioned that it might not.
In addition to looping when the data contains cycles (if you have C->D and D->C), it won't handle 3 or more levels like that.
I can't seem to get the analytic function to restrict itself to one branch of the hierarchy.

There is a PL/SQL way that might not be too bad for performance.

SQL> select * from cumm_conversion;

CURR_ CURR_  CONV_RATE
----- ----- ----------
A     B              2
B     C            2.5
D     B              3
B     K            1.4
E     F            3.1
R     C            2.2
A     L            1.9
C     D              5



create or replace
function foo(p_expr in varchar2) return number
as
 v_ret number;
  begin
    /* Add logic to verify parameters and handle exceptions as you wish */
    execute immediate 'select '||p_expr||' from dual' into v_ret;
    return v_ret;
end;

select path, calc, foo(calc) ret
from (
select 
  connect_by_root curr_from root,
  curr_to,
  sys_connect_by_path(curr_from,'/')||'/'||curr_to path, 
  '1'||sys_connect_by_path(conv_rate,'*') calc --prepend 1 to the connect_by_path to deal with leading '*'
 from cumm_conversion
 connect by NOCYCLE prior curr_to=curr_from)
 where curr_to='C' and root='A';
 
 
 PATH      CALC      RET
 --------  --------  ----------
 /A/B/C    1*2*2.5   5
 
 
 select path, calc, foo(calc) ret
from (
select 
  connect_by_root curr_from root,
  curr_to,
  sys_connect_by_path(curr_from,'/')||'/'||curr_to path, 
  '1'||sys_connect_by_path(conv_rate,'*') calc --prepend 1 to the connect_by_path to deal with leading '*'
 from cumm_conversion
 connect by NOCYCLE prior curr_to=curr_from)
 where curr_to='D' and root='A';
 
 PATH        CALC       RET
 ----------  ---------  ----------
 /A/B/C/D    1*2*2.5*5  25
 
 The inner query uses sys_connect_by_path to build the math that figures out the calculation for any level of the hierarchy.
 The outer query only calls foo(calc) for the one record you are really looking for, based on its start and end nodes.

 
 

For:TCS from INDIA

Stew Ashton, July 30, 2012 - 10:26 am UTC


Here is a proposal. Thanks to Paul for providing create table and insert statements.
select CURR_FROM_ROOT, CURR_TO, CONV_RATE,
cast(CONV_RATES_IN as varchar2(20)) conv_rates_in,
to_number(column_value) CONV_RATES_OUT
from (
  select connect_by_root(CURR_FROM) curr_from_root, CURR_TO, CONV_RATE,
  SUBSTR(SYS_CONNECT_BY_PATH(CONV_RATE, '*'),2) CONV_RATES_in
  from CUMM_CONVERSION a
  connect by CURR_FROM = prior CURR_TO
),
xmltable(CONV_RATES_in);

CURR_FROM_ROOT CURR_TO CONV_RATE CONV_RATES_IN        CONV_RATES_OUT
-------------- ------- --------- -------------------- --------------
A              B               2 2                                 2 
A              C             2.5 2*2.5                             5 
A              K             1.4 2*1.4                           2.8 
A              L             1.9 1.9                             1.9 
B              C             2.5 2.5                             2.5 
B              K             1.4 1.4                             1.4 
D              B               3 3                                 3 
D              C             2.5 3*2.5                           7.5 
D              K             1.4 3*1.4                           4.2 
E              F             3.1 3.1                             3.1 
R              C             2.2 2.2                             2.2
The trick is that XMLTABLE uses XQuery, which will do arithmetic for you. You didn't say what version of Oracle you have; this will not work in older versions.
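
For readers outside SQL: what the solutions above compute is a transitive closure of the conversion graph, multiplying rates along each path. A minimal sketch in Python (a hedged analogy, not a replacement for the SQL; the rate data is taken from Paul's inserts above):

```python
from collections import defaultdict, deque

# Rates from the create/insert statements above.
rates = {('A', 'B'): 2, ('B', 'C'): 2.5, ('D', 'B'): 3, ('B', 'K'): 1.4,
         ('E', 'F'): 3.1, ('R', 'C'): 2.2, ('A', 'L'): 1.9}

def closure(rates):
    """Walk the conversion graph from every currency, multiplying rates
    along the way (a breadth-first analogue of CONNECT BY ... NOCYCLE)."""
    graph = defaultdict(list)
    for (src, dst), r in rates.items():
        graph[src].append((dst, r))
    out = {}
    for start in list(graph):
        queue = deque([(start, 1.0)])
        seen = {start}           # plays the role of NOCYCLE
        while queue:
            cur, acc = queue.popleft()
            for dst, r in graph.get(cur, ()):
                if dst not in seen:
                    seen.add(dst)
                    out[(start, dst)] = acc * r
                    queue.append((dst, acc * r))
    return out

all_rates = closure(rates)
print(all_rates[('A', 'C')])   # 2 * 2.5
```

Unlike the level-limited analytic approach earlier in the thread, this handles arbitrarily deep chains (A to B to C to D), which is what the SYS_CONNECT_BY_PATH and XMLTABLE answers achieve in pure SQL.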

optimizer could not merge the view

sachin, August 03, 2012 - 4:06 am UTC

Hi Tom,

When I run the SQL advisor, I'm getting the below message. What do you think is the problem?
Is there any change required in the SQL statement to tune this query?

SQL advisor recommendation 
=========================
The optimizer could not merge the view at line ID 1 of the execution plan.


SELECT  BID_PRICE
FROM    INSTRUMENT_PRICE_VIEW
WHERE   INSTRUMENT_ID  = :B2
        AND VALUE_DATE = :B1


INSTRUMENT_PRICE_VIEW  ---- > View 

SQL> select text from dba_views where view_name='INSTRUMENT_PRICE_VIEW';

TEXT
--------------------------------------------------------------------------------
SELECT INSTRUMENT_ID,BID_PRICE,LAST_TRADED_PRICE,CURR_BUS_DAT VALUE_DATE
FROM INSTRUMENT_PRICE,REF_BANK_PARAMS
UNION ALL
SELECT INSTRUMENT_ID,BID_PRICE,LAST_TRADED_PRICE,VALUE_DATE VALUE_DATE
FROM INSTRUMENT_PRICE_HIST




PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID cqy85tpszb80r
--------------------
SELECT BID_PRICE FROM INSTRUMENT_PRICE_VIEW WHERE INSTRUMENT_ID = :B2
AND VALUE_DATE = :B1

Plan hash value: 3917691082

------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                           |       |       |     5 (100)|          |
|   1 |  VIEW                          | INSTRUMENT_PRICE_VIEW     |     2 |    70 |     5   (0)| 00:00:01 |
|   2 |   UNION-ALL                    |                           |       |       |            |          |
|   3 |    NESTED LOOPS                |                           |     1 |    18 |     4   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| INSTRUMENT_PRICE          |     1 |    10 |     1   (0)| 00:00:01 |
|   5 |      INDEX UNIQUE SCAN         | PK1_INSTRUMENT_PRICE      |     1 |       |     1   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL          | REF_BANK_PARAMS           |     1 |     8 |     3   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID | INSTRUMENT_PRICE_HIST     |     1 |    18 |     1   (0)| 00:00:01 |
|   8 |     INDEX UNIQUE SCAN          | PK1_INSTRUMENT_PRICE_HIST |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SET$1 / INSTRUMENT_PRICE_VIEW@SEL$1
   2 - SET$1
   3 - SEL$2
   4 - SEL$2 / INSTRUMENT_PRICE@SEL$2
   5 - SEL$2 / INSTRUMENT_PRICE@SEL$2
   6 - SEL$2 / REF_BANK_PARAMS@SEL$2
   7 - SEL$3 / INSTRUMENT_PRICE_HIST@SEL$3
   8 - SEL$3 / INSTRUMENT_PRICE_HIST@SEL$3

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
      OPT_PARAM('query_rewrite_enabled' 'false')
      OPT_PARAM('optimizer_index_cost_adj' 20)
      OPT_PARAM('optimizer_index_caching' 80)
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$1")
      NO_ACCESS(@"SEL$1" "INSTRUMENT_PRICE_VIEW"@"SEL$1")
      INDEX_RS_ASC(@"SEL$3" "INSTRUMENT_PRICE_HIST"@"SEL$3" ("INSTRUMENT_PRICE_HIST"."VALUE_DATE"
              "INSTRUMENT_PRICE_HIST"."INSTRUMENT_ID"))
      INDEX_RS_ASC(@"SEL$2" "INSTRUMENT_PRICE"@"SEL$2" ("INSTRUMENT_PRICE"."INSTRUMENT_ID"))
      FULL(@"SEL$2" "REF_BANK_PARAMS"@"SEL$2")
      LEADING(@"SEL$2" "INSTRUMENT_PRICE"@"SEL$2" "REF_BANK_PARAMS"@"SEL$2")
      USE_NL(@"SEL$2" "REF_BANK_PARAMS"@"SEL$2")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :B2 (NUMBER): 90005
   2 - :B1 (DATE): 06/01/2009 00:00:00


64 rows selected.


Tom Kyte
August 16, 2012 - 8:18 am UTC

what is wrong with the plan?


unique scan on history, unique scan on price and then cartesian join to params.

what else do you think it would or could do???


you do realize you have a cartesian join in there right? is that on purpose?

Query to select column names alphabetically

A reader, August 14, 2012 - 8:13 pm UTC

Hi Tom,

Is it possible to select the rows based on column names alphabetically in a single SQL query:

create table demo(name varchar2(100), age number, gender varchar2(10));

insert into demo values('A',30,'M');

when I do a select on table demo, the result is as
name age gender
A 30 M

i am expecting the result as

age gender name
30 M A

Please suggest.
Tom Kyte
August 17, 2012 - 2:43 pm UTC

sure

select age, gender, name from demo;


since you know you should never code with "select *", this isn't a big deal. do not use select * in real code.

How to merge these queries into one?

rebisco, September 03, 2012 - 5:41 am UTC

Hello Tom,

We have a query below that updates some columns in the final table (I call it table1 here), but the data comes from different staging tables. Can you throw me some light on how to merge these queries into a single query (or whether I have to)? table1 has 24K+ rows and the transactions table has roughly 600K+ rows.
INSERT INTO table0
SELECT
    b.Id,b.objId,b.txnSK Txn_sk,b.act||'-'||b.spAct act,b.group group,b.product product,
    substr(b.ts,1,15) TxnTime,b.qty qty,b.step step,x.stage stage
FROM transactions b,table1 x
WHERE x.objId=b.objId AND x.sid='123'
    AND b.act||'-'||b.spAct IN (SELECT act FROM activities WHERE name='ABC' AND Category='START')
    AND b.txnSK||'' < x.txnSK;
COMMIT;
INSERT INTO table2
SELECT
    Max(Txn_sk) Txn_sk,Max(Nvl(a.stage, b.stage)) stage
FROM attributes b,table0 a
WHERE
    b.sid='123' AND a.sid='123'
    AND a.group=b.groupname AND a.act IN ('StandardRelease','PhantomRelease') AND a.step=b.step
GROUP BY a.objId,a.act;
COMMIT;
INSERT INTO table2
SELECT
  Min(Txn_sk) Txn_sk,Max(Nvl(a.stage, b.stage)) stage
FROM attributes b,table0 a
WHERE
    b.sid='123' AND a.sid='123' AND a.group=b.groupname
    AND a.act IN (SELECT act FROM activities WHERE name='ABC' AND Category='BREAK')
    AND a.step=b.step
GROUP BY a.objId;
COMMIT;
INSERT INTO table3
SELECT
    b.Id,b.objId,b.Txn_sk,b.qty,b.txntime startdate,b.group,
    b.product,b.act,a.stage
FROM table0 b,table2 a
WHERE
    a.sid='123' and b.sid='123' and a.Txn_sk=b.txn_sk;
COMMIT;
    
CURSOR Cur_upd_START (pid NUMBER) IS
SELECT
  Max(x.pkey) pkey,Max(x.objId) objId,SUBSTR(Max(c.StartDate),1,15) sdate
  ,Max(c.qty) qty,ROUND((Max(c.qty)/1000),3) qtyK
FROM table3 c, table1 x
WHERE
  c.sid=pid AND x.sid=c.sid AND x.objId=c.objId
  AND x.txnSK > c.Txn_sk AND x.stage=c.stage
GROUP BY x.objId,x.stage;

FOR c IN Cur_upd_START ('123') LOOP
  UPDATE table1
  SET
    qty=c.qty,qtyK=c.qtyK,sdate=c.sdate
  WHERE sid='123' AND pkey=c.pkey;
END LOOP;
COMMIT;

Tom Kyte
September 10, 2012 - 7:03 pm UTC

no creates
no inserts
no look...

sumit, September 07, 2012 - 2:38 am UTC

Hi Tom,
CREATE TABLE EMPLOYEE1
(
ID VARCHAR2(4 BYTE) ,
FIRST_NAME VARCHAR2(10 BYTE),
LAST_NAME VARCHAR2(10 BYTE),
START_DATE DATE,
END_DATE DATE,
SALARY NUMBER(8,2),
CITY VARCHAR2(10 BYTE),
DESCRIPTION VARCHAR2(15 BYTE),
MGR NUMBER
)
Insert into employee1 (ID,FIRST_NAME,LAST_NAME,START_DATE,END_DATE,SALARY,CITY,DESCRIPTION,MGR) values ('100','A','yyyySyyy',to_date('31-JUL-12','DD-MON-RR'),to_date('12-AUG-12','DD-MON-RR'),1000,'sagar','programmere',105);
Insert into employee1 (ID,FIRST_NAME,LAST_NAME,START_DATE,END_DATE,SALARY,CITY,DESCRIPTION,MGR) values ('101','B','yyyySyyy',to_date('31-JUL-12','DD-MON-RR'),to_date('12-AUG-12','DD-MON-RR'),1000,'sagar','programmere',null);
Insert into employee1 (ID,FIRST_NAME,LAST_NAME,START_DATE,END_DATE,SALARY,CITY,DESCRIPTION,MGR) values ('103','C','yyyySyyy',to_date('31-JUL-12','DD-MON-RR'),to_date('12-AUG-12','DD-MON-RR'),1000,'sagar','programmere',101);
Insert into employee1 (ID,FIRST_NAME,LAST_NAME,START_DATE,END_DATE,SALARY,CITY,DESCRIPTION,MGR) values ('102','D','yyyySyyy',to_date('31-JUL-12','DD-MON-RR'),to_date('12-AUG-12','DD-MON-RR'),1000,'sagar','programmere',100);
Insert into employee1 (ID,FIRST_NAME,LAST_NAME,START_DATE,END_DATE,SALARY,CITY,DESCRIPTION,MGR) values ('104','G','yyyySyyy',to_date('31-JUL-12','DD-MON-RR'),to_date('12-AUG-12','DD-MON-RR'),1000,'sagar','programmere',null);
Insert into employee1 (ID,FIRST_NAME,LAST_NAME,START_DATE,END_DATE,SALARY,CITY,DESCRIPTION,MGR) values ('105','W','yyyySyyy',to_date('31-JUL-12','DD-MON-RR'),to_date('12-AUG-12','DD-MON-RR'),1000,'sagar','programmere',102);

SELECT * FROM EMPLOYEE1

SELECT b.FIRST_NAME as employee_name , a.FIRST_NAME as mgr_name FROM EMPLOYEE1 A, EMPLOYEE1 B
WHERE B.MGR=A.ID;
SELECT B.FIRST_NAME AS EMPLOYEE_NAME , A.FIRST_NAME AS MGR_NAME FROM EMPLOYEE1 A, EMPLOYEE1 B
WHERE a.id=b.mgr;

I am getting the same result from both queries, so what is the difference between them?
One more thing: if an employee does not have a manager, the employee should still be returned, with a null mgr_name.
========
O/P
EMPLOYEE_NAME MGR_NAME
------------- ----------
A W
C B
D A
W D

EMPLOYEE_NAME MGR_NAME
------------- ----------
A W
C B
D A
W D

Tom Kyte
September 10, 2012 - 8:03 pm UTC

why would there be any difference between:

b.mgr=a.id

and

a.id=b.mgr

??????


you might want to read about outer joins for the last bit

http://docs.oracle.com/cd/E11882_01/server.112/e26088/queries006.htm#SQLRF52354

Creates and inserts - Part 1

rebisco, September 11, 2012 - 1:21 am UTC

Hello Tom,

I added the creates and inserts here. I will split this into 3 posts since each post is limited to 1,000 words.
create table table1 (
  sid   NUMBER(38),
  txnSK NUMBER(38),
  pkey  NUMBER(38,4),
  objId VARCHAR2(45),
  stage VARCHAR2(50),
  qty   NUMBER default 0,
  qtyK  NUMBER default 0,
  sdate VARCHAR2(18),
  pcode VARCHAR2(51),
  grp   VARCHAR2(225)
);

create index IDX$OBJID on TABLE1 (OBJID);
create index IDX$PKEY  on TABLE1 (PKEY);
create index IDX$PCODE on TABLE1 (PCODE);
create index IDX$SID   on TABLE1 (SID);

create table transactions (
  Id      VARCHAR2(40),
  objId   VARCHAR2(45) default sys_guid() not null,
  txnSK   NUMBER(38),
  act     VARCHAR2(40),
  spAct   VARCHAR2(40),
  group   VARCHAR2(40),
  product VARCHAR2(40),
  ts      VARCHAR2(18),
  qty     NUMBER(38),
  step    VARCHAR2(40),
  stage   VARCHAR2(40),
  gpKey   VARCHAR2(45),
  uname   VARCHAR2(40)
)
partition by range (TS)
(
  partition P1 values less than ('20111001') tablespace USERS_S,
  partition P2 values less than ('20120101') tablespace USERS_S,
  partition P3 values less than ('20120401') tablespace USERS_S,
  partition P4 values less than ('20120701') tablespace USERS_S,
  partition PN values less than (MAXVALUE)   tablespace USERS_S
);

alter table TRANSACTIONS add constraint P_TXNS primary key (OBJID) using index;
create index I_TXN_ID  on TRANSACTIONS (ID);
create index IDX_OBJID on TRANSACTIONS (OBJID);
create index IDX_TXNSK on TRANSACTIONS (TXNSK);
create index I_ACT     on TRANSACTIONS (ACT);
create index I_SPACT   on TRANSACTIONS (SPACT);
create index IDX_TS    on TRANSACTIONS (TS);
create index I_STAGE   on TRANSACTIONs (STAGE);

Tom Kyte
September 14, 2012 - 2:58 pm UTC

I'm not limited to 1,000 words, but rather 32k, if you cannot say it in 32k or less, I don't want to read it in a review. 32k is pretty darn large.

Now, in the form of a specification - tell me the processing that has to occur (please don't imagine that I'll become a compiler, compile your code, reverse engineer it and spit out something entirely different)

Creates and inserts - Part 2

rebisco, September 11, 2012 - 1:22 am UTC

create table table0 (
sid NUMBER(38),
id VARCHAR2(40),
objId VARCHAR2(45),
txnSK NUMBER(38),
act VARCHAR2(50),
group VARCHAR2(40),
product VARCHAR2(40),
ts VARCHAR2(18),
qty NUMBER(38),
step VARCHAR2(40),
stage VARCHAR2(40)
);

create index IDX_SID on TABLE0 (SID);
create index IDX$TXNSK on TABLE0 (TXNSK);

create table attributes (
sid NUMBER,
stage VARCHAR2(50),
groupname VARCHAR2(50),
step VARCHAR2(45)
);

create index IDX$SID_GPNAME on ATTRIBUTES (SID, GROUPNAME);

create table table2 (
txn_SK NUMBER(38),
stage VARCHAR2(40)
);

create table table3 (
id VARCHAR2(40),
objId VARCHAR2(45),
txnSK NUMBER(38),
qty NUMBER(38),
ts VARCHAR2(18),
group VARCHAR2(40),
product VARCHAR2(40),
stage VARCHAR2(40)
);

create index IDXTBL3$OBJID on TABLE3 (OBJID);

create table activities (
name VARCHAR2(40) not null,
CATEGORY VARCHAR2(40),
ACT VARCHAR2(255)
);

create index I_ACT on ACTIVITIES (NAME, CATEGORY);

Creates and inserts - Part 3

rebisco, September 11, 2012 - 1:24 am UTC

-------INSERTS-----------------
insert into TABLE1 values (123, 700737553, 6169475, '4f714901', 'TEST', 8640, 8.64, null, 'PC001', 'PG01');
insert into TABLE1 values (123, 700733368, 6169476, '4f5af0d2', 'TEST', 8640, 8.64, null, 'PC002', 'PG01');
insert into TABLE1 values (123, 700733366, 6169477, '4f41cb3c', 'TEST', 8640, 8.64, null, 'PC002', 'PG01');
COMMIT;

insert into TRANSACTIONS values ('LOT1', '00000e31.8ac6c57a.4e3d0210.000003aa.2019', 700000032, 'StandardRelease', 'Release1', 'PG01', 'PROD1', '20110806 214120', 8640, 'STEP01', '20', '201108062121110000000030231019', '080032');
insert into TRANSACTIONS values ('LOT2', '00000e31.8ac6c57a.4e3d0207.000071d6.2525', 700000033, 'PhantomRelease', 'Release1', 'PG01', 'PROD1', '20110806 214119', 8640, 'STEP01', '20', '201108062121110000000030231020', '080032');
insert into TRANSACTIONS values ('LOT3', '00000e31.8ac6c57a.4e3d0204.00000551.1900', 700000034, 'PhantomRelease', 'Release1', 'PG01', 'PROD1', '20110806 214117', 4183, 'STEP02', '20', '201108062121110000000030231021', '112322');
COMMIT;


insert into ACTIVITIES values ('ABC', 'START', 'StandardRelease-Release1');
insert into ACTIVITIES values ('ABC', 'BREAK', 'StandardRelease-Release1');
insert into ACTIVITIES values ('CDE', 'BREAK', 'PhantomRelease-Release1');
insert into ACTIVITIES values ('EFG', 'END'  , 'Terminate-Terminated');
COMMIT;

insert into ATTRIBUTES values (123, 'TEST', 'PG01', 'STEP1');
insert into ATTRIBUTES values (123, 'TEST', 'PG01', 'STEP1');
insert into ATTRIBUTES values (123, 'ASSEMBLY', 'PG02', 'STEP1');
commit;

Thank you for taking the time to help me tune these queries.

Clarification

rebisco, September 15, 2012 - 1:38 am UTC

Hello Tom,

As I said in my very first post, I want to merge these queries into one query if possible. And since you asked for the creates and inserts, I gave them to you, as you can see in my previous posts.

These queries basically retrieve from multiple staging tables, which I think is not necessary. The real purpose of the queries is to get the needed columns and put them in the so-called staging tables, which are eventually read back to update the target table (table1).

If I could just remove the unnecessary inserts into the staging tables and make a single straightforward update, that would make my day.

I hope I make myself clear in the specification I need.

Thank you again,
rebisco
Tom Kyte
September 16, 2012 - 4:03 am UTC

A specification is something that tells a developer specifically what to do - before code exists.

I have the creates, I have the inserts, I have no specification (remember, the code does not exist yet, specifications precede code - just post the specification you used to write your code from...)

ORA-03113: end-of-file on communication channel occurs in SQL query

Prakash Rai, September 20, 2012 - 3:19 pm UTC

Hi Tom -
My question does not exactly fit this thread, for which I apologize. I wonder why a query fails and ends the session when an explicitly NULL column under a SET operator has an IS NOT NULL predicate.

e.g. COLUMN X below with the filter raises the error.

And this is version specific:

Fails in Exadata "Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production"

Works fine in 11g XE


SELECT AA, SUM(A), SUM(X), SUM(RSLT) FROM
(
SELECT
AA, A,B,C,X, (NVL(X,0)+NVL(Z,0)) AS RSLT FROM (
SELECT 'A' AS AA, 1 AS A, 2 AS B, 3 AS C, NULL AS X, 8 AS Z FROM DUAL
UNION ALL
SELECT 'A' AS AA, 1 AS A, 2 AS B, 3 AS C, NULL AS X, 9 AS Z FROM DUAL
)
)
WHERE X IS NOT NULL
GROUP BY AA

Thanks in advance as always!
Prakash
Tom Kyte
September 26, 2012 - 12:08 pm UTC

please utilize support for 3113, 7445, 600's.


it does reproduce on 11.2.0.3 on linux as well.

CTAS Query Error

sam, October 25, 2012 - 11:27 am UTC

Tom:

I have a CTAS SQL statement like this

insert into T1@db_link(col1,col2,col3...)
select col1.col2,col3... from T3, T4, T5, T6
where T1.col1 = T2.col1 and T2.col2 = T3.col3
UNION
select col1.col2,col3... from T7, T8, T9, T10
where T1.col1 = T2.col1 and T2.col2 = T3.col3
order by 2,3,4;

etc......

When I run it, I keep getting this error, even though the SELECT part works fine, and
if I remove the ORDER BY clause it works fine.

INSERT into T1@db_link (
*
ERROR at line 1:
ORA-00907: missing right parenthesis
ORA-02063: preceding line from db_link




1) Is there a way to get this working with ORDER BY?

2) Is the ORDER BY useful for loading the data into the database in a specific order, or is it really useless?



Tom Kyte
October 25, 2012 - 11:39 am UTC

that isn't a create table as select Sam. that is an insert.


and you probably know what I'm going to say next...

where is the example? where is the reproducible test case. That is the only way we'll be able to see your typo.



yes, the order by can be very useful. It clusters data together that is queried together when used appropriately, reducing the number of physical IO's you might have to perform to retrieve the data.

query

sam, October 25, 2012 - 11:44 am UTC

Tom:

Yes, it is an INSERT (not CTAS).

So you are saying it should work in SQL with ORDER BY, and there must be a typo.

It is a long query.

But if I have a typo, why does it work when I remove the ORDER BY clause?

Tom Kyte
October 28, 2012 - 10:12 pm UTC

Sam

example or it didn't happen. It is as simple as that. I have no idea what you actually did - I cannot see your terminal from over here and my crystal ball is in the shop. You say you just removed the order by clause. Maybe you did, maybe you didn't. You said you did a create table as select and then you post an insert. Who knows what you actually did in real life.

example or it didn't happen.

query

sam, October 25, 2012 - 2:18 pm UTC

Tom:

This seems to be related to the database link somehow.
When I test the insert locally, it works fine.

This is 9iR2 database.


SQL> desc test@db_link
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               VARCHAR2(10)
 COL2                                               DATE


SQL> insert into test@db_link (col1, col2)
      select '5', sysdate from dual
      union
     select '6', sysdate from dual
    order by 1

insert into test@db_link (col1, col2)
*
ERROR at line 1:
ORA-00907: missing right parenthesis
ORA-02063: preceding line from DB_LINK





If i do it in the local database it works fine.

SQL> desc test
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               VARCHAR2(10)
 COL2                                               DATE

SQL> insert into test (col1, col2)
  2  select '3', sysdate from dual
  3  union
  4  select '4', sysdate from dual 
  5  order by 1;

2 rows created.

Tom Kyte
October 28, 2012 - 10:56 pm UTC

works fine in current releases.

here is a workaround for you in that really old release:

insert into t@ora9ir2@loopback (col1, col2 )
select * from (
select '5', sysdate from dual
union
select '6', sysdate from dual
)
order by 1 ;


Function in Group By not working

Joe, November 13, 2012 - 9:50 am UTC

Tom,

I have a query that is returning an "ORA-00979: not a GROUP BY expression" error and I can't figure out why. I have simplified the actual query into something that you can run, so please keep in mind that this is NOT my real query, but it represents the query I'm trying to run.

select t.table_name,
       (select z.tablespace_name
          from user_tables z
         where z.table_name = t.table_name) tablespace_name
  from (select nvl(a.table_name, 'MISSING') table_name
          from user_tables a
         group by nvl(a.table_name, 'MISSING')) t


It appears the NVL function (or any other function) in the group by is causing the problem.

I was told that this query works on 11g (which I'm not able to confirm/deny) but does not work on 10g.

Thanks.

Tom Kyte
November 14, 2012 - 8:16 pm UTC

works in 10g for me

ops$tkyte%ORA10GR2> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Prod
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

select t.table_name,
       (select z.tablespace_name
          from user_tables z
         where z.table_name = t.table_name) tablespace_name
  from (select nvl(a.table_name, 'MISSING') table_name
          from user_tables a
  7           group by nvl(a.table_name, 'MISSING')) t;

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
T1_T2_MV                       USERS
RUPD$_EMP
DEPT                           USERS
MV                             USERS
TEST_3                         USERS
T2                             USERS

Joe, November 15, 2012 - 12:31 pm UTC

Thanks Tom. Unfortunately, we only have version 10.2.0.4. I will try to get the latest patch for 10g. Surprisingly, it also worked in 9.2.0.8.0.


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production

TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> 
SQL> select t.table_name,
  2         (select z.tablespace_name
  3            from user_tables z
  4           where z.table_name = t.table_name) tablespace_name
  5    from (select nvl(a.table_name, 'MISSING') table_name
  6            from user_tables a
  7           group by nvl(a.table_name, 'MISSING')) t;

select t.table_name,
       (select z.tablespace_name
          from user_tables z
         where z.table_name = t.table_name) tablespace_name
  from (select nvl(a.table_name, 'MISSING') table_name
          from user_tables a
         group by nvl(a.table_name, 'MISSING')) t

ORA-00979: not a GROUP BY expression

SQL> 

ora-30927

aliyar elyas, May 11, 2013 - 7:14 am UTC

Hi Tom,

Thanks for all the help you give to us.

We are running a query and it is failing with ORA-30927.

Exadata 8-node RAC active standby database; the query is running on the standby site.

11.2.0.3.

Intermittently the query fails with error ORA-30927. I searched Metalink and Google and did not find any useful link.

The query's basic shape is like below:

with
inline view ..
inline view
-
-
-
;

Is there an alternate workaround for this issue? How can it be fixed?

Can you please provide some guidelines?

Thanks
Aliyar

Tom Kyte
May 12, 2013 - 3:41 pm UTC

please open a service request with support for something like this.

Grouping Sets

Shimmy, May 15, 2013 - 5:47 pm UTC

CREATE TABLE SK_20130515
(ITEM    VARCHAR2(25) NOT NULL,
 LINK_ID NUMBER(10)   NOT NULL);
 
INSERT INTO SK_20130515
VALUES
('ITEM1', 1);
 
INSERT INTO SK_20130515
VALUES
('ITEM1', 2);

INSERT INTO SK_20130515
VALUES
('ITEM1', 3);

INSERT INTO SK_20130515
VALUES
('ITEM2', 1);

INSERT INTO SK_20130515
VALUES
('ITEM2', 5);

INSERT INTO SK_20130515
VALUES
('ITEM3', 2);

INSERT INTO SK_20130515
VALUES
('ITEM4', 6);

INSERT INTO SK_20130515
VALUES
('ITEM5', 6);

INSERT INTO SK_20130515
VALUES
('ITEM5', 7);

INSERT INTO SK_20130515
VALUES
('ITEM6', 8);


SELECT *
FROM SK_20130515
ORDER BY 1,2;

ITEM                         LINK_ID
------------------------- ----------
ITEM1                              1
ITEM1                              2
ITEM1                              3
ITEM2                              1
ITEM2                              5
ITEM3                              2
ITEM4                              6
ITEM5                              6
ITEM5                              7
ITEM6                              8

I want to group ITEMs that share LINK_IDs.
In the above example, ITEM1 and ITEM2 share LINK_ID 1, and ITEM2 and ITEM3 share LINK_ID 2, so ITEM1, ITEM2 and ITEM3 get grouped together and assigned GROUPING_NO 1.

Desired output
ITEM                       GROUPING_NO
-------------------------  -----------
ITEM1                                1 
ITEM2                                1 
ITEM3                                1
ITEM4                                2
ITEM5                                2 
ITEM6                                3

Is it possible to do this through SQL?

Thank you
Tom Kyte
May 16, 2013 - 8:40 am UTC

how many layers down does this have to go? is it just item1 to 2 to 3 and stop, or could it go infinitely deep?

Grouping Sets

Shimmy, May 16, 2013 - 1:08 pm UTC

It can have infinite layers.

Grouping Sets

Shimmy, May 16, 2013 - 1:15 pm UTC

By infinite layers I meant: ITEM1 may be linked to ITEM2, ITEM4 may be linked to ITEM2, ITEM100 may be linked to ITEM4. All of these should be grouped together and assigned one number, the next group should be assigned the next number, and so on...

I have a crude SQL using the MIN() analytic function, CONNECT BY and DENSE_RANK to do it, but I don't know if it's 100% foolproof or efficient.

WITH A AS
(SELECT link_id, item, (SELECT MIN(l2.link_id) 
                             FROM SK_20130515 l2
                             WHERE (l2.item = l1.item OR l2.link_id = l1.link_id)) min_link
 FROM SK_20130515 l1),
C AS
(SELECT DISTINCT link_id, item, MIN(min_link) OVER (PARTITION BY link_id) min_link
 FROM A),
B AS
(SELECT link_id, item, CONNECT_BY_ROOT min_link root_link
 FROM C
 CONNECT BY NOCYCLE min_link = PRIOR link_id),
D AS
(SELECT DISTINCT d1.link_id, d1.item, d2.root_link, 
        MIN(CASE WHEN d1.root_link < d2.root_link THEN d1.root_link ELSE d2.root_link END) 
           OVER (PARTITION BY d1.root_link) min_link_val
 FROM b d1, b d2
 WHERE d2.link_id = d1.root_link),
E AS
(SELECT DISTINCT link_id, item, MIN(min_link_val) OVER (PARTITION BY LINK_ID) min_link_val
 FROM D)
SELECT DISTINCT item, DENSE_RANK() OVER (ORDER BY min_link_val) grouping_no
FROM E
ORDER BY 1,2;  

Tom Kyte
May 18, 2013 - 6:44 am UTC

tell you what -

explain your approach - describe in pseudo code or text what you are doing - and then we can evaluate it.

Comparing string and display diffs

Rajeshwaran, Jeyabal, May 16, 2013 - 5:31 pm UTC

Tom,

How can I compare two string columns and display the diff?
drop table t purge;
create table t(x number,c1 varchar2(40), c2 varchar2(40));
insert into t values(1,'A,B,C','A,C');
insert into t values(2,'A,C','A,D,C');
commit;

Compare two columns c1 and c2 and display results in diff: (sample output below)
c1    c2    diff
A,B,C A,C   B
A,C   A,D,C D

Tom Kyte
May 21, 2013 - 1:55 pm UTC

look below, someone beat me to it.

MC, May 17, 2013 - 4:55 pm UTC

Rajeshwaran - this is a bit rough and ready, and I haven't tested it thoroughly, so I make no promises - but this might do the job:


select *
from (

    with ilv_main as (
            
          select x,
                 c1,
                 c2,
                 substr(c1_raw,rnum,1) c1_char,
                 substr(c2_raw,rnum,1) c2_char
          from   (select replace(c1,',') c1_raw,
                         replace(c2,',') c2_raw,
                         c1,
                         c2,
                         x
                  from t),
                
          
                (select rownum rnum
                 from dual
                 connect by level <= (select max(greatest(length(replace(c1,',')),
                                                          length(replace(c2,',')))) 
                                      from t))
                 
                )
            
    select ilv1.x,ilv1.c1,ilv1.c2,c1_char diff
    from   (select x,c1,c2,c1_char from ilv_main where c1_char is not null) ilv1,
           (select x,c1,c2,c2_char from ilv_main where c2_char is not null) ilv2
    where ilv1.c1_char = ilv2.c2_char(+)
    and   ilv1.x = ilv2.x(+)
    and   ilv2.c2_char is null
    
    union all
    
    select ilv4.x,ilv4.c1,ilv4.c2,c2_char
    from   (select x,c1,c2,c1_char from ilv_main where c1_char is not null) ilv3,
           (select x,c1,c2,c2_char from ilv_main where c2_char is not null) ilv4
    where ilv3.c1_char(+) = ilv4.c2_char
    and   ilv3.x(+) = ilv4.x
    and   ilv3.c1_char is null)

order by x,diff

It will:

- Strip out the commas from the text strings
- Pivot the resultant strings into two new 'columns'
- Perform an 'anti join' to find values in one column that
have no match in the other

It does this in both directions (i.e. compare c1 to c2 and then c2 to c1)
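For single-character elements like the sample data, the anti-join above boils down to: keep each character of one string that has no match in the other, in both directions. A minimal Python sketch of that logic (illustrative only - the SQL above is the actual solution):

```python
def char_anti_join(c1, c2):
    # Strip the commas, then keep characters with no match on the
    # other side - the same anti-join, done in both directions.
    a = c1.replace(',', '')
    b = c2.replace(',', '')
    return [ch for ch in a if ch not in b] + [ch for ch in b if ch not in a]

print(char_anti_join('A,B,C', 'A,C'))   # ['B']
print(char_anti_join('A,C', 'A,D,C'))   # ['D']
```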


MC, May 17, 2013 - 4:57 pm UTC

Might help if I included the results!!!



 X C1    C2    DIFF
-- ----- ----- -----
 1 A,B,C A,C   B
 2 A,C   A,D,C D



@MC

Rajeshwaran, Jeyabal, May 18, 2013 - 6:52 am UTC

Thanks for your help. I got there with a different approach.

rajesh@ORA10G> select x,c1,c2,
  2     trim( translate(
  3             case when length(c1) >= length(c2) then c1 else c2 end,
  4             case when length(c1) >= length(c2) then c2 else c1 end,
  5             rpad(' ',greatest(length(c1),length(c2)))
  6                      ) ) diff
  7  from t;

         X C1         C2         DIFF
---------- ---------- ---------- ----------
         1 A,B,C      A,C        B
         2 A,C        A,D,C      D

2 rows selected.

Elapsed: 00:00:00.01
rajesh@ORA10G>
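For readers following the TRANSLATE trick: it maps every character of the shorter value to a space inside the longer one, then trims. The same idea sketched in Python (sharing the Oracle version's limitations - single-character elements only, and interior spaces survive when the diff is not contiguous):

```python
def char_diff(c1, c2):
    longer, shorter = (c1, c2) if len(c1) >= len(c2) else (c2, c1)
    # Map every character of the shorter string to a space, like
    # TRANSLATE(longer, shorter, RPAD(' ', ...)), then trim the edges.
    table = {ord(ch): ' ' for ch in shorter}
    return longer.translate(table).strip()

print(char_diff('A,B,C', 'A,C'))   # B
print(char_diff('A,C', 'A,D,C'))   # D
```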

On "Grouping Sets" from Shimmy

Stew Ashton, May 19, 2013 - 2:38 pm UTC


Here is another approach to the problem. Apologies if I misunderstood.

I start by ordering the items with the same link_id, so that the first item has a prev_item of null and the next item has a prev_item equal to the preceding item.

Then I add rows that reverse the positions of item and prev_item, so the hierarchical query can go in both directions without getting into a loop too early.

Finally I do a hierarchical query, starting with the null prev_items. This will group everything together, but may create duplicate groups. So I group by item to eliminate the duplicates and use DENSE_RANK to number the groups in order.

Warning: performance can vary widely depending on the data.
WITH DATA AS (
  SELECT distinct item,
    lag(item) OVER(PARTITION BY link_id ORDER BY item) prev_item
    FROM (select distinct item, link_id from SK_20130515)
), filtered AS (
  SELECT prev_item, item FROM DATA
    WHERE prev_item IS NOT NULL 
    OR item not in (select item from data where prev_item is not null)
  UNION ALL
  SELECT item, prev_item FROM DATA
    WHERE prev_item IS NOT NULL
)
SELECT dense_rank() over(order by MIN(CONNECT_BY_ROOT(item))) grp, item
  FROM filtered
  START WITH prev_item IS NULL
  CONNECT BY NOCYCLE prev_item = PRIOR item
  GROUP BY item
  ORDER BY 1,2;

       GRP ITEM                    
---------- -------------------------
         1 ITEM1                     
         1 ITEM2                     
         1 ITEM3                     
         2 ITEM4                     
         2 ITEM5                     
         3 ITEM6
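What this query computes is the connected components of the graph whose edges join items to link_ids. As a cross-check of the grouping logic (a sketch outside SQL, not one of the thread's solutions), the same result falls out of a standard union-find, numbering groups by their smallest item:

```python
def group_items(pairs):
    """Connected components over (item, link_id) rows like SK_20130515.

    Returns {item: grouping_no}, with groups numbered 1, 2, ... in
    order of each group's smallest item name.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        for n in (a, b):
            parent.setdefault(n, n)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Tag nodes so an item can never collide with a link_id value.
    for item, link in pairs:
        union(('i', item), ('l', link))

    # Collect items per root, then number groups by smallest member.
    groups = {}
    for node in parent:
        if node[0] == 'i':
            groups.setdefault(find(node), []).append(node[1])
    result = {}
    for n, members in enumerate(sorted(groups.values(), key=min), start=1):
        for item in members:
            result[item] = n
    return result

pairs = [('ITEM1', 1), ('ITEM1', 2), ('ITEM1', 3), ('ITEM2', 1),
         ('ITEM2', 5), ('ITEM3', 2), ('ITEM4', 6), ('ITEM5', 6),
         ('ITEM5', 7), ('ITEM6', 8)]
print(group_items(pairs))
```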

translate solution

Mark, May 20, 2013 - 8:15 pm UTC

Rajesh, the use of translate is brilliant!

@Rajesh: nice one

A reader, May 21, 2013 - 4:45 pm UTC


Comparing columns of csv data

AndyP, May 31, 2013 - 11:43 am UTC

Re the problem of comparing columns containing comma-separated data

I thought it might be interesting to expand a bit on this to address multi-character data elements, and also to present the output so you know which column is deficient.

This use of some of Tom's methods kind of works, although it doesn't preserve the order of, or any duplicates in, the elements.


col diffs format a40

prompt This is an attempt to present differences between columns containing comma-separated data
prompt It does identify the differences (both ways) but
prompt it loses any ordering and condenses multiple occurrences

with t as
(
select 1 x,'fred,bob,mary,sue,joe' y,'joe,mary' z from dual
union all
select 2,'A,B,C','C,A' from dual
union all
select 3,'comma,stored as,a','a,longer,comma,separated,list,stored as,a,string' from dual
union all
select 4,'D,N,C,L,A,B,C,D,E','D,A' from dual
union all
select 5,'A,x,y,B,C','C,m,n,D' from dual
)
,ycol as
(
select x,column_value ytokens
  from (select x,y,','||y||',' txt from t) t
      ,table(cast(multiset(select trim(substr(txt,instr(txt,',',1,level) + 1,instr(txt,',',1,level+1) - instr(txt,',',1,level) -1)) token from dual connect by level<=length(y)-length(replace(y,',',''))+1) as sys.odcivarchar2list))
)
,zcol as
(
select x,column_value ztokens
  from (select x,z,','||z||',' txt from t) t
      ,table(cast(multiset(select trim(substr(txt,instr(txt,',',1,level) + 1,instr(txt,',',1,level+1) - instr(txt,',',1,level) -1)) token from dual connect by level<=length(z)-length(replace(z,',',''))+1) as sys.odcivarchar2list))
)
,yminusz as
(
select ycol.x,ycol.ytokens from ycol
minus
select zcol.x,zcol.ztokens from zcol
)
,zminusy as
(
select zcol.x,zcol.ztokens from zcol
minus
select ycol.x,ycol.ytokens from ycol
)
select x,y,z,which,listagg(ytokens,',') within group (order by which,x) diffs
  from
(
select t.x,t.y,t.z,ytokens,'YOnly' which from yminusz,t where yminusz.x=t.x
union all
select t.x,t.y,t.z,ztokens,'ZOnly' from zminusy,t where zminusy.x=t.x
)
group by which,x,y,z
order by which,x
/

SQL > @tok
This is an attempt to present differences between columns containing comma-separated data
It does identify the differences (both ways) but
it loses any ordering and condenses multiple occurrences
 X Y                     Z                                                WHICH DIFFS
-- --------------------- ------------------------------------------------ ----- ----------------------------
 1 fred,bob,mary,sue,joe joe,mary                                         YOnly bob,fred,sue
 2 A,B,C                 C,A                                              YOnly B
 4 D,N,C,L,A,B,C,D,E     D,A                                              YOnly B,C,E,L,N
 5 A,x,y,B,C             C,m,n,D                                          YOnly A,B,x,y
 3 comma,stored as,a     a,longer,comma,separated,list,stored as,a,string ZOnly list,longer,separated,string
 5 A,x,y,B,C             C,m,n,D                                          ZOnly D,m,n

6 rows selected.
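At heart the tokenize-and-MINUS approach is a two-way set difference, which is also why ordering and duplicates are lost. The same logic sketched in Python (illustrative only):

```python
def csv_diffs(y, z):
    # Tokenize both comma-separated values into sets, then take the
    # difference both ways - ordering and duplicates are lost, as above.
    ys = {tok.strip() for tok in y.split(',')}
    zs = {tok.strip() for tok in z.split(',')}
    return sorted(ys - zs), sorted(zs - ys)

print(csv_diffs('fred,bob,mary,sue,joe', 'joe,mary'))
# (['bob', 'fred', 'sue'], [])
```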


Need help on this query

Abhi, April 19, 2014 - 6:09 pm UTC

Hi Tom,

I have a query


CREATE TABLE "DEMO"
( "ACCOUNTID" NUMBER,
"EXPDATE" DATE,
"PRODLINE" VARCHAR2(100)
)
REM INSERTING into DEMO
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (1,to_date('01/01/2014','MM/DD/YYYY'),'PROD001');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (1,to_date('01/02/2014','MM/DD/YYYY'),'PROD002');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (1,to_date('01/02/2014','MM/DD/YYYY'),'PROD001');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (2,to_date('01/01/2014','MM/DD/YYYY'),'PROD002');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (2,to_date('01/02/2014','MM/DD/YYYY'),'PROD001');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (2,to_date('01/02/2014','MM/DD/YYYY'),'PROD002');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (3,to_date('01/01/2014','MM/DD/YYYY'),'PROD001');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (3,to_date('01/02/2014','MM/DD/YYYY'),'PROD003');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (3,to_date('01/03/2014','MM/DD/YYYY'),'PROD003');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (2,to_date('01/03/2014','MM/DD/YYYY'),'PROD001');
Insert into DEMO (ACCOUNTID,EXPDATE,PRODLINE) values (2,to_date('01/03/2014','MM/DD/YYYY'),'PROD003');

My expected result is:

For a given AccountID, consider the combination of AccountID + ExpDate. Take the combination with the maximum row count and select those rows. Where multiple sets tie for the maximum count, take the first one.

For AccountID 1 we have two sets of data (AccountID + ExpirationDate). Checking the count of each combination, I take 01/02/2014, since it has two prodlines, and discard 01/01/2014, which has only one.

For AccountID 2 the count is 2 for both 01/02/2014 and 01/03/2014, so we take 01/02/2014 and discard 01/03/2014.

For AccountID 3, since all rows have different expiration dates, for PROD003 we take 01/02/2014 and discard 01/03/2014.
1 01/02/2014 PROD001
1 01/02/2014 PROD002

2 01/02/2014 PROD001
2 01/02/2014 PROD002

3 01/01/2014 PROD001
3 01/02/2014 PROD003

Thanks a lot.

NESTED QUERY

KEDARNATH, July 06, 2014 - 1:53 pm UTC

SELECT * FROM EMP WHERE DEPTNO IN (SELECT DEPTNO FROM EMP WHERE JOB IN (SELECT JOB FROM EMP WHERE ENAME IN ('SCOTT','SMITH')));

What is missing in this nested subquery?

Sql Query - need help.

V i k a s, November 13, 2014 - 6:24 am UTC

Dear Mr. Kyte,

Greetings. Hope you are doing great.:-)

I have the following scenario where by I have-

1) A regular user-segments (system table)
2) A set of few tables which have a column by the name "When_created timestamp"

Requirement:
============
To generate a report that would provide -

Table_name (segment_name from user_segments).
Oldest Record (min(when_created) from the actual tables).
Size of Tables (bytes/1024/1024/1024 from user_segments).

Ex -
=====

TABLE NAME  TABLE SIZE (GB)  WHEN_CREATED (OLDEST)
==========  ===============  =========================
TABLE-1     8.61             27-SEP-11 22.15.03.000000
TABLE-2     16.25            07-JUN-12 21.53.52.000000
TABLE-3     2.06             27-SEP-11 22.15.03.000000


I was able to do this using PL/SQL, but I am just wondering if I could have done it using simple SQL?

I tried achieving the above desired output using various SQL queries, but couldn't get it right. Either I got the above details for just one table or the script just failed returning errors.


Need your Insight:
==================
Would you please let me know -

1) Is this doable using simple SQL at all?
2) If yes, then for my better understanding, can you share SQL code that shows how it can be achieved?
3) What tips would you give when met by such requirements - how can we speed up such queries?

Thanks & regards
Vikas.

VERY VERY COMPLEX QUERY

Praveen Ray, April 20, 2016 - 8:30 am UTC

Hi - the question is -

This is actually a very, very complex/challenging query, at least for me. If you look at the data, say 10 laptops were purchased on 15-Jan and another 10 on 25-Jan. On 15-Mar, 15 laptops were distributed among employees or sold. What my boss wants is the report given below. Since the first given-out number of laptops is 15, and they were bought in lots of 10, split those 15 into 10 and 5 so that he can calculate for how many days each lot was kept in custody (SOLD_DAYS).


create table t (item_code varchar2(10), in_out varchar2(1), purchase_date date, amt number);

insert into t values ('X', 'I', to_date('01/15/2016','MM/DD/yyyy'), 10);
insert into t values ('X', 'I', to_date('01/25/2016','MM/DD/yyyy'), 10);
insert into t values ('X', 'O', to_date('03/15/2016','MM/DD/yyyy'), 15);
insert into t values ('X', 'I', to_date('03/25/2016','MM/DD/yyyy'), 20);
insert into t values ('X', 'O', to_date('04/15/2016','MM/DD/yyyy'), 15);
insert into t values ('X', 'O', to_date('04/20/2016','MM/DD/yyyy'), 8);


CODE I IN_DATE    QTY O OUT_DATE   OUT_QTY QTY_LEFT SOLD_DAYS
---- - ---------- --- - ---------- ------- -------- ---------
X    I 01/15/2016 10  O 03/15/2016 10      0        60   -- the first OUT goes to the first IN; if OUT_QTY > QTY (stock)
X    I 01/25/2016 10  O 03/15/2016 5       5        50   -- then break OUT_QTY into two rows: 10 (= IN_QTY), 5 (remaining)
X    I 01/25/2016 5   O 04/15/2016 5       0        81
X    I 03/25/2016 20  O 04/15/2016 10      10       21
X    I 03/25/2016 10  O 04/20/2016 8       2        26
X    I 03/25/2016 2
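The splitting rule described above is classic FIFO lot allocation: consume the oldest open purchase lot first, splitting an out-quantity across lots when needed. A minimal Python sketch of that matching logic (illustrative only - the thread's solutions are in SQL):

```python
from collections import deque
from datetime import date

def fifo_allocate(rows):
    # rows: (in_out, date, qty) tuples in date order for one item.
    # Returns (in_date, out_date, matched_qty, days_held) rows,
    # splitting each 'O' quantity across 'I' lots oldest-first.
    lots = deque()                      # open lots: [in_date, qty_left]
    matches = []
    for in_out, d, qty in rows:
        if in_out == 'I':
            lots.append([d, qty])
        else:
            remaining = qty
            while remaining > 0 and lots:
                lot_date, lot_qty = lots[0]
                take = min(lot_qty, remaining)
                matches.append((lot_date, d, take, (d - lot_date).days))
                lots[0][1] -= take
                remaining -= take
                if lots[0][1] == 0:
                    lots.popleft()
    return matches

rows = [('I', date(2016, 1, 15), 10), ('I', date(2016, 1, 25), 10),
        ('O', date(2016, 3, 15), 15), ('I', date(2016, 3, 25), 20),
        ('O', date(2016, 4, 15), 15), ('O', date(2016, 4, 20), 8)]
for m in fifo_allocate(rows):
    print(m)   # qty/days match the report: 10/60, 5/50, 5/81, 10/21, 8/26
```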


about earlier query

Praveen Ray, April 22, 2016 - 5:44 am UTC

Hi - could you please give some ideas on my last query? I tried to develop a query, but it has become very large and cumbersome to look at, and also seems a performance killer. Any idea will help. Thanks.

complex query - done

Praven Ray, April 22, 2016 - 8:56 am UTC

I just did it. A couple of analytic functions and some conditions, and it turned out to be a quite simple calculation. I feel lucky ;)
Connor McDonald
April 22, 2016 - 9:31 am UTC

Glad to hear it. Could you share what you did so others can learn?

Complex Query - Solution

Praveen Ray, April 22, 2016 - 12:29 pm UTC

Hi... I just checked your response. Please take a look at my approach, and do suggest any modifications - this is a raw solution and I am still working on it. But it will solve my need for sure.

select tab_x.*
     , case when qty_hand = 0 then least(amt, s_amt)
            when qty_hand <> 0 and prev_qty is null then least(amt, s_amt)
            when qty_hand <> 0 and prev_qty <> 0 then least(abs(prev_qty), s_amt)
            when qty_hand <> 0 and nvl(prev_qty, 0) = 0 then least(amt, s_amt)
       end txn_qty
from (
      select a.item_code, a.item_name, a.purchase_date, a.in_out, a.amt
           , a.unit_start, a.unit_end
           , s.purchase_date s_purchase_date, s.in_out s_in_out, s.amt s_amt
           , s.unit_start s_unit_start, s.unit_end s_unit_end
           , nvl((a.unit_end - s.unit_end), a.amt) qty_hand
           , lag(a.unit_end - s.unit_end) over (partition by a.item_code order by a.purchase_date, s.purchase_date) prev_qty
      from (
            select a.*
                 , sum(amt) over (partition by item_code order by purchase_date range between unbounded preceding and 1 preceding) unit_start
                 , sum(amt) over (partition by item_code order by purchase_date) unit_end
            from (select * from x_tst where /*item_name = 'XX' and*/ in_out = 'I') a
           ) a
      left join (
            select s.*
                 , sum(amt) over (partition by item_code order by purchase_date range between unbounded preceding and 1 preceding) unit_start
                 , sum(amt) over (partition by item_code order by purchase_date) unit_end
            from (select * from x_tst where /*item_name = 'XX' and*/ in_out = 'O') s
           ) s
        on a.item_code = s.item_code and a.unit_end > nvl(s.unit_start, 0) and s.unit_end > nvl(a.unit_start, 0)
     ) tab_x
order by item_code, purchase_date, s_purchase_date;

Chris Saxon
April 24, 2016 - 6:19 am UTC

Thanks for sharing your code.

Martins owolabi adeniyi, January 28, 2020 - 3:36 pm UTC

Generate a report for shippers by specifying the shipper id, company name and phone of all shippers. Alias the table s and reference the columns in your select statement.
