SQL Query with dynamic number of columns

Question and Answer

Chris Saxon

Thanks for the question, Bhavani.

Asked: July 21, 2002 - 12:34 pm UTC

Last updated: January 22, 2021 - 10:50 am UTC

Version: 8.1.7

Viewed 100K+ times!

You Asked

Hi Tom,

I have a table like this

create table temp1
(f_name varchar2(100),
f_date date,
f_amount integer
);

Records in this table will be like this:

Ajay 15-JUL-02 500
Bhavani 15-JUL-02 700
Chakri 17-JUL-02 200
Ajay 17-JUL-02 100

Given two dates (say 15-JUL-02 and 18-JUL-02), I want to get output like this for the range between those two dates:

          15-JUL-02  16-JUL-02  17-JUL-02  18-JUL-02
Ajay            500          0        100          0
Bhavani         700          0          0          0
Chakri            0          0        200          0


I think I have a way to do this: creating a view dynamically using EXECUTE IMMEDIATE and getting the values from that view.

But I am wondering whether there is any better way to get data like this from a single query. Please advise.

Sincerely
Bhavani

and Tom said...

Since the number of columns is dynamic -- every time you run this "query" they could be different -- you'll use a stored procedure which returns a dynamically opened ref cursor (not a view -- a view would be useless here)

Here is an example of what I mean by that:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace package demo_pkg
  2  as
  3      type rc is ref cursor;
  4
  5      procedure get_query( p_cursor in out rc, p_start date, p_end date );
  6  end;
  7  /

Package created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace package body demo_pkg
  2  as
  3
  4  procedure get_query( p_cursor in out rc, p_start date, p_end date )
  5  is
  6      l_query long := 'select name ';
  7  begin
  8
  9      for i in 1 .. trunc(p_end)-trunc(p_start)+1
 10      loop
 11          l_query := l_query || ', sum( decode( trunc(d), ' ||
 12                     'to_date( ''' || to_char(p_start+i-1,'yyyymmdd') ||
 13                     ''', ''yyyymmdd'' ), amt, 0 )) "' ||
 14                     to_char(p_start+i-1) || '"';
 15      end loop;
 16      l_query := l_query || ' from t group by name';
 17      open p_cursor for l_query;
 18  end;
 19
 20  end;
 21  /

Package body created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autoprint on
ops$tkyte@ORA817DEV.US.ORACLE.COM> variable x refcursor
ops$tkyte@ORA817DEV.US.ORACLE.COM> exec demo_pkg.get_query( :x, '15-jul-2002', '18-jul-2002' );

PL/SQL procedure successfully completed.


NAME       15-JUL-02  16-JUL-02  17-JUL-02  18-JUL-02
---------- ---------- ---------- ---------- ----------
Ajay             500          0        100          0
Bhavani          700          0          0          0
Chakri             0          0        200          0

ops$tkyte@ORA817DEV.US.ORACLE.COM> exec demo_pkg.get_query( :x, '14-jul-2002', '18-jul-2002' );

PL/SQL procedure successfully completed.


NAME       14-JUL-02  15-JUL-02  16-JUL-02  17-JUL-02  18-JUL-02
---------- ---------- ---------- ---------- ---------- ----------
Ajay               0        500          0        100          0
Bhavani            0        700          0          0          0
Chakri             0          0          0        200          0

ops$tkyte@ORA817DEV.US.ORACLE.COM>




Comments

SQL query

munz, July 21, 2002 - 5:07 pm UTC

Tom:

Based on this, can you always define a variable, build a query string for it, and then open a cursor for that string?

Will this also work for an in-list search, like:

l_list := '(''va'',''ca'',''pa'')';
l_query := 'select * from my_table where col in ' || l_list;
open l_cursor for l_query;

Tom Kyte
July 21, 2002 - 11:56 pm UTC

yes it will, but it'll violate my "you had better use bind variables" rule, wouldn't it?

Make sure to put an

execute immediate 'alter session set cursor_sharing=force';
open ...
execute immediate 'alter session set cursor_sharing=exact';

around the dynamic open to lessen the impact. I would prefer NOT to use this approach at all.
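As an illustration, a minimal sketch of that wrapper, assuming 9i's SYS_REFCURSOR (on 8.1.7 you would use a package-defined ref cursor type instead); get_rows, some_table and col are hypothetical names standing in for your own:

create or replace procedure get_rows( p_list in varchar2, p_cursor in out sys_refcursor )
as
begin
    -- lessen the hard-parse impact of the literal in-list, as described above
    execute immediate 'alter session set cursor_sharing=force';
    open p_cursor for 'select * from some_table where col in ' || p_list;
    execute immediate 'alter session set cursor_sharing=exact';
end;
/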

Great

Bhavani, July 21, 2002 - 5:38 pm UTC

This solves a lot of my problems. Once again, you are the best.

Open cursor

munz, July 22, 2002 - 9:19 am UTC

Tom:

A followup to what you wrote:

execute immediate 'alter session set cursor_sharing=force';
execute immediate 'alter session set cursor_sharing=exact';

What does this do?

Thanks

Tom Kyte
July 22, 2002 - 10:03 am UTC

search this site for cursor_sharing and read all about it!

or read Chapter 10 of my book "Expert One-on-One Oracle"

Great!

A reader, July 22, 2002 - 11:16 am UTC

Simple, concise suggestion and explanation of a useful development principle!

Thanks Tom!

That's a crosstab query

Bhavesh Tailor, July 22, 2002 - 1:21 pm UTC

Hi Tom:

The above result can easily be obtained in MS Access using a crosstab query. Why does Oracle NOT provide a crosstab query?

Tom Kyte
July 22, 2002 - 6:23 pm UTC

MS Access, the GUI reporting tool, can do a cross tab (and heck, you can use that tool against Oracle if you like).


Oracle Reports -- the GUI reporting tool -- does crosstabs.
Oracle Discoverer -- the GUI ad-hoc query tool -- does crosstabs.

We have it. If you want to do it in PURE SQL in Access (eg: not using the GUI reporting tool), what then? Show me the code you would write in VB against Access directly that will do a crosstab. That is what we should be comparing here.

Jim, July 22, 2002 - 6:08 pm UTC

Although not recommended, I liked the example using the
cursor_sharing parameter.

Very Nice

R.Chakravarthi, August 29, 2003 - 12:33 am UTC

Dear Sir,
Can I have a query that selects rows from all tables in my
schema? Is a single SELECT possible? Please let me know if you
have a query like that.
Thanks.

Tom Kyte
August 29, 2003 - 9:06 am UTC

you could write one that union alls all of the tables.

you would have to make every table look structurally the same (same number of columns -- eg, select NULL to make up columns when a table doesn't have enough, same types of columns -- eg, use to_char on number/dates to make everything a string)

of what possible use this could ever be -- well, that I cannot fathom.
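To make that concrete, a minimal sketch of that shape using SCOTT's EMP and DEPT as stand-ins -- every branch of the UNION ALL must expose the same number of columns, padding with NULL and converting numbers/dates with to_char (the c1/c2/c3 aliases are arbitrary):

select to_char(empno) c1, ename c2, to_char(hiredate,'yyyymmdd') c3
  from emp
union all
select to_char(deptno), dname, null
  from dept;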

Nice

R.Chakravarthi, September 02, 2003 - 1:23 am UTC

Dear Sir,
Well, and wish the same for you. I have some questions related to queries.
1) Suppose I have a string like "Oracle". How do I write a query that prints each character on its own line?
2) Have a look at the following queries.
(i) select ename,deptno,sal from emp e1 where sal = (select max(sal) from emp e2 where e1.deptno = e2.deptno) order by deptno;
(ii) select ename,deptno,sal from emp where sal in (select
max(sal) from emp group by deptno);
My question is: is there any other way that this query can
be written?
3) How do I fix the "single-row subquery returns more than one
row" error?
Expecting your reply.
Thanks




Tom Kyte
September 02, 2003 - 7:26 am UTC

1) 


ops$tkyte@ORA920> variable x varchar2(20)
ops$tkyte@ORA920>
ops$tkyte@ORA920> exec :x := 'Oracle'
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select substr( :x, rownum, 1 )
  2    from all_objects
  3   where rownum <= length(:x);
 
S
-
O
r
a
c
l
e
 
6 rows selected.



2) given that you gave me 2 totally different queries that return 2 totally different answers -- I'd have to say "i must be able to give an infinite set of responses since you are just looking for random queries" :)


ops$tkyte@ORA920> drop table emp;
 
Table dropped.
 
ops$tkyte@ORA920> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA920> drop table dept;
 
Table dropped.
 
ops$tkyte@ORA920> create table dept as select * from scott.dept;
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> update emp set sal = 2000;
 
14 rows updated.
 
ops$tkyte@ORA920> update emp set sal = 3000 where rownum = 1;
 
1 row updated.
 
ops$tkyte@ORA920> commit;
 
Commit complete.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename,deptno,sal
  2    from emp e1
  3   where sal = (select max(sal)
  4                  from emp e2
  5                 where e1.deptno = e2.deptno)
  6   order by deptno;
 
ENAME          DEPTNO        SAL
---------- ---------- ----------
CLARK              10       2000
KING               10       2000
MILLER             10       2000
SMITH              20       3000
ALLEN              30       2000
TURNER             30       2000
JAMES              30       2000
WARD               30       2000
MARTIN             30       2000
BLAKE              30       2000
 
10 rows selected.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename,deptno,sal
  2    from emp
  3   where sal in (select max(sal)
  4                   from emp
  5                  group by deptno );
 
ENAME          DEPTNO        SAL
---------- ---------- ----------
ALLEN              30       2000
WARD               30       2000
JONES              20       2000
BLAKE              30       2000
SCOTT              20       2000
TURNER             30       2000
JAMES              30       2000
MILLER             10       2000
FORD               20       2000
ADAMS              20       2000
KING               10       2000
CLARK              10       2000
MARTIN             30       2000
SMITH              20       3000
 
14 rows selected.


See -- totally different answers to your query with the same data. This is why I have a chapter in my new book on "don't tune a query, tune a question".

I have to assume that you want the set of people, by department, that make the TOP salary in that dept...  queries such as:

select ename, deptno, sal
  from ( select ename, deptno, sal,
                rank() over ( partition by deptno order by sal desc nulls last ) r
           from emp )
 where r = 1;
                                                                                                                
select ename, deptno, sal
  from ( select ename, deptno, sal,
                max(sal) over ( partition by deptno ) max_sal
           from emp )
 where sal = max_sal
/
                                                                                                                
select ename, deptno, sal
  from ( select ename, deptno, sal,
                first_value(sal) over ( partition by deptno order by sal desc nulls last ) first_sal
           from emp )
 where sal = first_sal
/

will do that.


3) there is nothing to fix here.  

You have a logic/algorithm problem.

You are basically saying something like:


select * from t where x = ( subquery )

and subquery ISN'T a single value but a set.  what would you have us do?  you need to phrase your problem in english and then we might be able to tell you how to solve it.  

the error "single row subquery" isn't an "oracle" error -- it is a bug in your logic/code/data....



 

Nice

R.Chacravarthi, September 03, 2003 - 5:29 am UTC

Dear Sir,
Thanks for your reply. My second question was:
2) I posted a query which returns the highest-paid employees department-wise. Both queries return the same result set on my system. I use Oracle v8.0 Personal Edition. The queries and their result sets follow.
** What I asked you was "Is there any other way that this query can be expressed?" But you are agonising over my question. I don't know why.
Anyhow, thanks for spending your time. If you have any other way of expressing
the same query, please do write a followup.

SQL> select ename,sal,deptno from emp e1 where sal = (select max(sal) from
  2  emp e2 where e1.deptno = e2.deptno) order by deptno;

ENAME             SAL     DEPTNO                                                
---------- ---------- ----------                                                
KING             5000         10                                                
SCOTT            3000         20                                                
FORD             3000         20                                                
BLAKE            2850         30                                                

SQL> select ename,sal,deptno from emp where sal in(select max(sal) from emp
  2  group by deptno) order by deptno;

ENAME             SAL     DEPTNO                                                
---------- ---------- ----------                                                
KING             5000         10                                                
SCOTT            3000         20                                                
FORD             3000         20                                                
BLAKE            2850         30                                                

SQL> spool off
 

Tom Kyte
September 03, 2003 - 7:05 am UTC

your queries -- in general -- do NOT RETURN THE SAME data -- i proved that. so what if your data is set up such that they do. they are *not* the same query. they ask totally totally different questions. they are not even a little comparable.


that is why i "agonized" over this -- you are making a very very very false assumption. One cannot tell you a 3rd way to write a query given that the first two queries are not even remotely similar.

remember -- it only takes one counter case to prove something wrong. your queries are not similar.

no matter how many times you run them -- they are not similar. they are not the same.

I did try to answer your question, which I assume was "find the highest paid people by deptno". I gave you many examples. your first query does that. your second query DOES NOT.

your second query returns the set of people who make as much as the highest paid person IN ANY DEPTNO. not just theirs -- ANY deptno.

(i'm trying to teach you how to read the sql so you do not make errors like this, that is the "agonization" here)....



Nice

R.Chacravarthi, September 08, 2003 - 9:12 am UTC

Dear Sir,
Well, and wish the same for you.
1) I tried to write a query which groups employees under their managers, but it always returned errors. I tried a PL/SQL block which returned the result set I was looking for. Can this anonymous block be transformed into a SQL query? Could you please help? Please don't forget to specify the different formats of the same query.
Thanks in advance.
SQL> begin
  2   for x in (select distinct mgr from emp order by mgr) loop
  3     dbms_output.put_line('Manager:'||x.mgr);
  4    for y in (select ename from emp where mgr = x.mgr) loop
  5      dbms_output.put_line(y.ename);
  6    end loop;
  7    dbms_output.put_line(' ');
  8   end loop;
  9  end;
 10  /
Manager:7566                                                                    
SCOTT                                                                           
FORD                                                                            
Manager:7698                                                                    
ALLEN                                                                           
MARTIN                                                                          
TURNER                                                                          
JAMES                                                                           
Manager:7782                                                                    
MILLER                                                                          
Manager:7788                                                                    
ADAMS                                                                           
Manager:7839                                                                    
JONES                                                                           
BLAKE                                                                           
CLARK                                                                           
JOE                                                                             
Manager:7902                                                                    
SMITH                                                                           
Manager:                                                                        

PL/SQL procedure successfully completed.

SQL> spool off
2) Is there a query format like
   select 'select * from a' from b;
If yes, please provide some examples.
Thanks for your previous reply.

Tom Kyte
September 08, 2003 - 12:33 pm UTC

  1  select mgr, cursor( select ename from emp emp2 where emp2.mgr = emp.mgr )
  2* from ( select distinct mgr from emp ) emp
ops$tkyte@ORA920LAP> /

       MGR CURSOR(SELECTENAMEFR
---------- --------------------
      7566 CURSOR STATEMENT : 2

CURSOR STATEMENT : 2

ENAME
----------
SCOTT
FORD

      7698 CURSOR STATEMENT : 2

CURSOR STATEMENT : 2

ENAME
----------
ALLEN
WARD
MARTIN
TURNER
JAMES


is one way.  

ops$tkyte@ORA920LAP> break on mgr skip 1
ops$tkyte@ORA920LAP> select mgr, ename from emp order by mgr, ename;

       MGR ENAME
---------- ----------
      7566 FORD
           SCOTT

      7698 ALLEN
           JAMES
           MARTIN
           TURNER
           WARD

      7782 MILLER

      7788 ADAMS

      7839 BLAKE
           CLARK
           JONES

      7902 SMITH

           KING


14 rows selected.

is perhaps another. 

Sql Query

archary, September 09, 2003 - 11:14 am UTC

Hi Tom,
I need your help.
For example, I have tables and data as below.
I have the same empname in all deptnos, with differences in sal.
If there is no employee in a deptno, I still have to display the empname with sal = 0.
Please help. Thanks in advance.

SQL> desc tt
 Name                                      Null?    Type
 ----------------------------------------- -------- --------
 ENAME                                              VARCHAR2(10)
 SAL                                                NUMBER
 DNO                                                NUMBER

SQL> 
SQL> 
SQL> select * from tt;

ENAME             SAL        DNO
---------- ---------- ----------
A                 100         10
A                 200         10
A                 300         20

Elapsed: 00:00:00.66
SQL> desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- --------
 DNO                                                NUMBER

SQL> select * from t;

       DNO
----------
        10
        20
        30

I'd like to display output as below.

ENAME             SAL        DNO
---------- ---------- ----------
A                 100         10
A                 200         10
A                 300         20
A                 0           30

 

Tom Kyte
September 09, 2003 - 11:56 am UTC

where the heck did the ename come from?????? if there is no emp in dno=30, how do they have a name????


in general, this is an outer join

select ename, nvl(sal,0), dno
from tt, t
where tt.dno(+) = t.dno;


Sql Query

archary, September 10, 2003 - 3:56 am UTC

Hi,
Thank you very much.
My intention is this: in the year 2003 I have fixed customers for 12 months,
so I have year and month info in one table.
My scenario is that I haven't made any sales to my fixed customers in some months of the year.
This info is not logged in my transaction table,
which means I don't have any info about those customers for a few months.
So in this case I need to generate a report for all 12 months. If there are no sales in a month, I still have to show the customer name with a zero amount.

SQL> desc t_cust
 Name                                      Null?    Type
 ----------------------------------------- -------- -------
 CNO                                                NUMBER
 AMT                                                NUMBER
 MTH                                                NUMBER
 YR                                                 NUMBER

SQL> select * from t_cust;

       CNO        AMT        MTH         YR
---------- ---------- ---------- ----------
      1001        100          1       2003
      1001        200          2       2003
      1001          4          4       2003
      1001        500          2       2003
      1002       2000          1       2003
      1002       3000          2       2003
      1002       2000          3       2003
      1002       3000          3       2003
      1002       6000          6       2003
SQL> desc myr
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------
 MTH                                                NUMBER
 YR                                                 NUMBER

SQL> select * from myr;

       MTH         YR
---------- ----------
         1       2003
         2       2003
         3       2003
         4       2003
         5       2003
         6       2003
         7       2003
         8       2003
         9       2003
        10       2003
        11       2003
        12       2003
Please provide a solution. Thanks in advance.
I am able to get it for one customer:
select mth, yr, cno, sum(amt)
from
(
select a.mth mth, a.yr yr,
decode(a.cnt,0,(select unique cno from t_cust where cno = 1001),cno) cno, nvl(a.amt,0) amt, a.c
from
(
select cno cno, myr.mth mth, myr.yr yr, amt amt, count(cno) over ( partition by myr.mth ) cnt
from t_cust, myr
where t_cust.mth(+) = myr.mth
and t_cust.yr(+) = myr.yr
and t_cust.cno(+) = 1001
) a
)
group by mth, yr, cno


 

Tom Kyte
September 10, 2003 - 7:33 pm UTC

you need to join your list of distinct customer ids to the year/month table -- and then outer join to that.

outer join to:


select year, month, cust_id
from year_month_table, (select distinct cust_id from customer_table)



query

A reader, October 17, 2003 - 4:43 am UTC

Hi Tom,
I have data in two tables as given below.

Table..A
colA1 colA2
1 null
2 null
3 null
4 11
5 null
6 null

Table..B
colB1 colA2
1 11
2 12
3 13

I want to update column colA2 of Table A with colA2 of Table B
in a single statement. The final result of Table A should look like this:

Table..A
colA1 colA2
1 null
2 12
3 null
4 11
5 13
6 null

Here the value 11 already exists in Table A, so it need not be updated;
the remaining values 12 and 13 from Table B should be placed in Table A's null rows,
in any order.

Can you please give me the solution?



Tom Kyte
October 17, 2003 - 10:05 am UTC

search for

update join

on this site

Sql Query to cut string

ARC, October 23, 2003 - 8:28 am UTC

Hi Tom,
I need your help to write a query to cut a string into parts.
I have data as below.

Table : TB_ODW_GO

REF_NBR LOG_DESC
----------------------------------------------------------
0234567 -111: ORDER:10901 NOT FOUND-222 ZIPCODE: NOTFOUND
1274033 -111: ORDER:12343 NOT FOUND

I want to display output as below.

REF_NBR ERR_CD ERR_MSG
-----------------------------------------------------------
0234567 -111 ORDER:10901 NOT FOUND
0234567 -222 ZIPCODE: NOTFOUND
1274033 -111 ORDER:12343 NOT FOUND

Thanking you in advance.

ARC.


Tom Kyte
October 23, 2003 - 12:52 pm UTC

you want a single row as two? is that correct? is it "two" or "n"



Sql Query

ARC, October 28, 2003 - 12:47 am UTC

Hi Tom,
I need it for 'n' lines.

Thanks
ARC

Tom Kyte
October 28, 2003 - 7:55 am UTC

you'll be doing it procedurally -- in 9i, you can write a pipelined function to do it.

something like this.  I made the simplifying assumption that the string was formatted as

-<ERROR_CODE>: <MESSAGE>-<ERROR_CODE>: <MESSAGE>


that is, - precedes error code, error code terminated by :.  If that is not your case, you need to write the code to parse your string:


ops$tkyte@ORA920PC> create table t ( ref_nbr number, log_desc varchar2(80) );
 
Table created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into t values ( 0234567, '-111: ORDER:10901 NOT FOUND-222: ZIPCODE: NOTFOUND' );
 
1 row created.
 
ops$tkyte@ORA920PC> insert into t values ( 1274033, '-111: ORDER:12343 NOT FOUND' );
 
1 row created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace type myScalarType as object
  2  ( ref_number number, err_cd number, err_msg varchar2(40) )
  3  /
 
Type created.
 
ops$tkyte@ORA920PC> create or replace type myTableType as table of myScalarType
  2  /
 
Type created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace function parse( p_cur in sys_refcursor )
  2  return myTableType
  3  PIPELINED
  4  as
  5      type array is table of t%rowtype index by binary_integer;
  6      l_data array;
  7      l_rec  myScalarType := myScalarType(null,null,null);
  8      l_tmp  long;
  9      n      number;
 10  begin
 11      loop
 12          fetch p_cur bulk collect into l_data limit 100;
 13          for i in 1 .. l_data.count
 14          loop
 15              l_rec.ref_number := l_data(i).ref_nbr;
 16              l_data(i).log_desc := l_data(i).log_desc || '-';
 17
 18              loop
 19                  n := instr( l_data(i).log_desc, '-', 2 );
 20                  exit when (nvl(n,0)=0);
 21
 22                  l_tmp := substr( l_data(i).log_desc, 1, n-1);
 23                  l_data(i).log_desc := substr( l_data(i).log_desc, n );
 24
 25                  n := instr( l_tmp, ':' );
 26                  l_rec.err_cd  := substr(l_tmp, 1, n-1 );
 27                  l_rec.err_msg := trim( substr( l_tmp, n+1 ) );
 28                  pipe row(l_rec);
 29              end loop;
 30          end loop;
 31          exit when p_cur%notfound;
 32      end loop;
 33      close p_cur;
 34      return;
 35  end;
 36  /
 
Function created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select * from TABLE( parse( cursor(select * from t) ) );
 
REF_NUMBER     ERR_CD ERR_MSG
---------- ---------- ----------------------------------------
    234567       -111 ORDER:10901 NOT FOUND
    234567       -222 ZIPCODE: NOTFOUND
   1274033       -111 ORDER:12343 NOT FOUND
 
 

Sql Query cut string

ARC, October 30, 2003 - 12:56 am UTC

Tom,
Thanks for the solution. It is really very good, and I also came to know about "pipelined" functions through it, which I was not aware of before.
I achieved the same thing using PL/SQL records and tables in version 8.1.7.
Thanks again.

ARC


Sort Explicit Static Cursor at Runtime

robert, February 06, 2004 - 7:13 pm UTC

Tom,
Is this possible. ?
I want to web app user sorting option.

No error raised below but the ORDER BY didn't happen
------------------------
DECLARE
    l_sort INTEGER := 2;

    CURSOR empcur
    IS
    SELECT empno, ename FROM emp ORDER BY l_sort;

    CURSOR empcur2( psort IN INTEGER )
    IS
    SELECT empno, ename FROM emp ORDER BY psort;
BEGIN
    FOR i IN empcur
    LOOP
        dbms_output.put_line( i.ename );
    END LOOP;
    dbms_output.put_line( '-----------------' );

    FOR i IN empcur2( l_sort )
    LOOP
        dbms_output.put_line( i.ename );
    END LOOP;
END;

thanks

Tom Kyte
February 07, 2004 - 2:22 pm UTC

that is a lot like:

select * from emp order by '2';

it ordered by a CONSTANT, the constant value "2"

You can either:

a) use dynamic sql
b) use case or decode

eg:

select empno, ename from emp
order by decode( psort, 1, to_char(empno,'00000009'), 2, ename );


you have to to_char numbers and dates since you'll be ordering by a character string -- use YYYYMMDDHH24MISS on dates. If your numbers go negative -- you'll have to deal with that somehow as well.
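For illustration, a minimal sketch of option (a) -- dynamic SQL -- with the sortable columns whitelisted in a CASE so that no user input is ever concatenated into the statement (the procedure and parameter names are hypothetical, and SYS_REFCURSOR assumes 9i or later):

create or replace procedure get_emps( p_sort in integer, p_cursor in out sys_refcursor )
as
begin
    -- only column names from the CASE whitelist can ever reach the ORDER BY
    open p_cursor for
        'select empno, ename from emp order by ' ||
        case p_sort when 1 then 'empno' when 2 then 'ename' else 'empno' end;
end;
/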



Nice

Gerhard, February 09, 2004 - 10:03 am UTC

Dear Sir,
Well and wish the same from you.
Is there any other way to put this query?
sql> select ename,nvl(comm,'Commission is null') from emp
or
sql> select ename,decode(comm,null,'Commission is null',comm) from emp
It is just a matter of eagerness and curiosity. I think
*case* is possible but is there any other way?
Please do reply.
Bye!

Tom Kyte
February 09, 2004 - 10:10 am UTC

there are thousands of ways.


select ename, 'commission is null' from emp where comm is null
union all
select ename, comm from emp where comm is not null;

is yet another; case, nvl2, and user-defined functions would all work as well.


NVL() however is the "correct answer" -- but beware: comm is a number and 'Commission is null' is not a number -- beware IMPLICIT conversions. It is best to use to_char() on comm to make it a string (since 'Commission is null' cannot be made into a number).
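A one-line sketch of that point -- convert comm explicitly so both NVL arguments are strings:

select ename, nvl( to_char(comm), 'Commission is null' ) comm_text
  from emp;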

How to write this sql

Bole Taro Dol Baje, March 10, 2004 - 1:47 pm UTC

CREATE TABLE T ( GP VARCHAR2(10), GPP VARCHAR2(20))
/

Table created.

ALTER TABLE T ADD CONSTRAINT PK1 PRIMARY KEY ( GP, GPP);

/

INSERT INTO T VALUES('A', 'A');



INSERT INTO T VALUES('A', 'B');



INSERT INTO T VALUES('Z', 'A');


I want to find out all those GP values which have a common GPP value. How can I do it, and what are
the multiple ways I can do it? Please help.

Tom Kyte
March 10, 2004 - 3:39 pm UTC

ops$tkyte@ORA9IR2> select *
  2    from (
  3  select gp, gpp, count(*) over (partition by gpp) cnt
  4    from t
  5         )
  6   where cnt > 1;
 
GP         GPP                         CNT
---------- -------------------- ----------
A          A                             2
Z          A                             2


first one that popped into my head... 

Is this OK ?

A reader, March 10, 2004 - 5:39 pm UTC

select a.* from t a , t b where a.gpp = b.gpp and a.rowid <> b.rowid
GP         GPP
---------- --------------------
Z          A
A          A

2 rows selected.

Tom Kyte
March 10, 2004 - 6:37 pm UTC

there are potentially an infinite number of ways to do this, yes.  

That one would overdo it in the event there are three records:

ops$tkyte@ORA9IR2>  INSERT INTO T VALUES('x', 'A');
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from (
  3  select gp, gpp, count(*) over (partition by gpp) cnt
  4    from t
  5         )
  6   where cnt > 1;
 
GP         GPP                         CNT
---------- -------------------- ----------
A          A                             3
Z          A                             3
x          A                             3
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select       a.* from t a , t b where a.gpp = b.gpp and a.rowid <> b.rowid
  2  /
 
GP         GPP
---------- --------------------
Z          A
x          A
A          A
x          A
A          A
Z          A
 
6 rows selected.
 

Which of the above two queries will be more performant?

A reader, March 10, 2004 - 5:49 pm UTC

I think yours may be.

Tom Kyte
March 10, 2004 - 6:39 pm UTC

they give different answers :)

not fair to compare.

Ah missed the distinct

A reader, March 10, 2004 - 9:07 pm UTC

select distinct a.* from t a, t b where a.gpp = b.gpp and a.rowid <> b.rowid;

Now they give identical results. Do you agree?
Will the analytic solution outperform the distinct self-join solution?
Thanks


Tom Kyte
March 10, 2004 - 9:13 pm UTC

i would prefer analytics over a partial cartesian product

experts' suggestions

A reader, March 11, 2004 - 12:35 am UTC

Hi Tom and all learners,

I am a beginner with Oracle.
What would be the best suggestion to get a perfect hand at Oracle, other than time and practice?
I'd really appreciate suggestions, book names and sites from all learners/experts, sent to my email.
THANKS in advance to all.



Tom Kyte
March 11, 2004 - 8:28 am UTC

"other than time and practice" -- well, until we as a human race develop the ability to imprint knowledge directly on the brain.......


Read the concepts guide (available on otn for all releases) from cover to cover and if you remember 10% of it, you'll already know 90% more than most people do about Oracle.

A reader, March 11, 2004 - 12:40 am UTC

learnz2004@yahoo.com

Return true only if all deptnos exist in the child table (ver. 9.2)

A reader, March 12, 2004 - 5:50 pm UTC

The famous DEPT and EMP tables.
How to write an SQL which returns true only if all depts have employees.

My attempt works but I need tom's Brilliant and simple solution and other possible solutions.
1)
select count(*) from dual where exists(
select null from dept having count(*) = (select count(distinct deptno) from emp).
2)select count(*) from emp e,dept d where d.deptno = e.deptno(+) and e.deptno is null

The basic emp table has no emps for deptno = 40, once one is added the above query returns 1 else 0.
Can u please give atleast 3 or 4 or more alternatives ?
Thanx much

Tom Kyte
March 12, 2004 - 8:00 pm UTC

quiz time?


select decode( count(*), 0, 1, 0 )
from dept
where deptno not in ( select deptno from emp );


add where rownum = 1 (and you'll have 2 more, but that's cheating perhaps...)



select decode( count(*), 0, 1, 0 )
from dept
where NOT EXISTS ( select null from emp where emp.deptno = dept.deptno )
and rownum = 1;


select decode( count(*), 0, 1, 0 )
from ( select deptno from dept
MINUS
select deptno from emp );





You Are Great

A reader, March 12, 2004 - 8:13 pm UTC

Thanks for the quiz answers. You have a lot of grey cells.

Complicated query??

Thiru, May 04, 2004 - 4:12 pm UTC

Tom,


I am getting stuck with this query. Need your help.

create table t1 (c1 char(2),c2 number,c3 number,c4 date);
create table t2 (c1 char(2),c2 number,c3 number,c4 date);

insert into t1 values('a',100,0,'03-MAY-04');
insert into t1 values('b',100,0,'03-MAY-04');
insert into t1 values('a',0,100,'03-MAY-04');
insert into t1 values('b',0,100,'03-MAY-04');
insert into t1 values('a',150,0,'05-MAY-04');
insert into t1 values('a',0,175,'05-MAY-04');

insert into t2 values('a',100,100,'03-MAY-04');
insert into t2 values('b',100,100,'03-MAY-04');
insert into t2 values('a',150,175,'05-MAY-04');


table: t1
c1 c2 c3 C4

a 100 0 03-MAY-04
b 100 0 03-MAY-04
a 0 100 03-MAY-04
b 0 100 03-MAY-04
a 150 0 05-MAY-04
a 0 175 05-MAY-04


table t2:

c1 c2 c3 c4
a 100 100 03-MAY-04
b 100 100 03-MAY-04
a 150 175 05-MAY-04



How do I compare these two tables? The data in the two tables
is actually the same, except for the way it is spread. How do I write
a query to prove that they are the same? Also, how do I get the
table t1 data to look like the following, so that I can use the MINUS operator against
the two tables?

table t1:

c1 c2 c3 c4
a 100 100 03-MAY-04
b 100 100 03-MAY-04
a 150 175 05-MAY-04






Tom Kyte
May 04, 2004 - 7:36 pm UTC

what do you mean "it is the same but the spread"?

You seem to be assigning some relevance to the order of rows in a table, but of course without an order by -- there is no such concept as "order of rows".

I see nothing remotely similar between those two tables.

Think he means...

Gary, May 05, 2004 - 12:11 am UTC

I think he is looking for a simple :

select c1, sum(c2), sum(c3), c4
from t1
group by c1,c4
minus
select c1,c2,c3,c4
from t2

Thiru, May 05, 2004 - 9:19 am UTC

I think I used the incorrect word "spread". It is just that
the second table holds the same values as the first table but in
a different way. Also I am not looking at the Sum function there as suggested. My aim is to find out if there is any difference in the two tables. Just a MINUS wil show differences as the data in table1 exists in two different rows while the data in table2 has the same data in one row.

For eg: table1:

a 100 0 03-MAY-04
a 0 100 03-MAY-04

Table2:

a 100 100 03-MAY-04

In this case, the data is the same in both the tables, but how do I
do that through a query? If I can get the data in the table1 to look
like
a 100 100 03-MAY-04

then I can use the MINUS function to get the differences in the entire
table. Hope I am clear.

Thanks for all the help.


Tom Kyte
May 05, 2004 - 10:03 am UTC

if the answer wasn't the one given by gary above, then I'd be at a loss as to how to know those two rows should be combined.

so, if gary's answer is not accurate, you'll need to explain in lots more detail...

thiru, May 05, 2004 - 10:44 am UTC

Excuse me for not making things clear for you.
This is what the scenario looks like:

CREATE OR REPLACE TYPE my_obj AS OBJECT (
c1 VARCHAR2(6),
c2 VARCHAR2(6),
c3 NUMBER,
c4 DATE
);

/
CREATE OR REPLACE TYPE my_list AS TABLE OF my_OBJ;
/

My_pkg has the undernoted declaration:

TYPE my_list_pkg IS TABLE OF my_OBJ INDEX BY PLS_INTEGER;

My_pkg body has -- the actuals are a lot bigger; the relevant part is shown:

Procedure get_list(c1 in varchar2,c2 in varchar2,c3 in number,c4 in date, p_out out my_list)

as

l_listA My_pkg.my_list_pkg;
l_listB My_pkg.my_list_pkg;

----
I have two queries, the first doing a bulk collect into l_listA and second a bulk
collect into l_listB. And then doing the following :

l_size := l_listA.count;

p_out := my_list();


for cnt in 1..l_listB.count loop
l_listA(l_size + cnt) := l_listB(cnt);
end loop;

p_out.EXTEND(l_listA.COUNT);
for cnt in 1..l_listA.count loop
p_out(cnt) := l_listA(cnt);
end loop;
RETURN ;


When this list is inserted into table1 it goes in this fashion ( in two rows):

c1 c2 c3 c4

a 100 0 03-MAY-04
a 0 150 03-MAY-04

There is another table table2 that has the same data (in one row )but in this fashion:


c1 c2 c3 c4
a 100 150 03-MAY-04

The relationship is: where table1.c1=table2.c1 for all columns

My point is to compare the two tables to see if there is any difference. With this data,
the result should be that there is no difference. how should the query be written to
get the difference in the two tables?



Tom Kyte
May 05, 2004 - 2:34 pm UTC

i see no way, other than what gary above said -- to group by c1 and c4, to accomplish your goal
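For illustration, a sketch building on Gary's query: to show the two tables hold the same data you would typically take the difference in both directions -- an empty result means no differences (untested against the posted data):

select c1, sum(c2) c2, sum(c3) c3, c4 from t1 group by c1, c4
minus
select c1, c2, c3, c4 from t2
union all
( select c1, c2, c3, c4 from t2
  minus
  select c1, sum(c2) c2, sum(c3) c3, c4 from t1 group by c1, c4 );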

A reader, May 06, 2004 - 9:40 am UTC

Yes Tom, I got it with Gary's query. Thanks again.

Another one!!!!!!!!!!!!!!!!

A reader, May 06, 2004 - 12:45 pm UTC

hi tom,
i am executing this query (first with all_rows optimizer)
SELECT * FROM(  SELECT R.RECNUMBER, COUNT(R.RECNUMBER) APUESTAS
FROM STKTRANSACTION T, STKRECORD R
WHERE T.TRNSORTEODATE BETWEEN TO_DATE('01-01-2004','DD-MM-YYYY')
AND TO_DATE('31-01-2004','DD-MM-YYYY')
AND R.RECTRNID = T.TRNID
GROUP BY (R.RECNUMBER)  ORDER BY 2 DESC)
WHERE ROWNUM <6

Plan:

Elapsed: 00:00:21.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16195 Card=5 Bytes
          =227420)

   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=16195 Card=11371 Bytes=227420)
   3    2       SORT (ORDER BY STOPKEY) (Cost=16195 Card=11371 Bytes=2
          84275)

   4    3         SORT (GROUP BY) (Cost=16195 Card=11371 Bytes=284275)
   5    4           HASH JOIN (Cost=14991 Card=223186 Bytes=5579650)
   6    5             INDEX (FAST FULL SCAN) OF 'INDX_CMPID_SRTDATE' (
          NON-UNIQUE) (Cost=2918 Card=70301 Bytes=984214)

   7    5             INDEX (FAST FULL SCAN) OF 'INDX_FK_RECTRNID' (NO
          N-UNIQUE) (Cost=6801 Card=15004857 Bytes=165053427)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
    1132328  consistent gets
       3254  physical reads
          0  redo size
        522  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed


Now with the First_Rows Hint:

  1  SELECT * FROM(  SELECT /*+ FIRST_ROWS */R.RECNUMBER, COUNT(R.RECNUMBER) APUESTAS
  2  FROM STKTRANSACTION T, STKRECORD R
  3  WHERE T.TRNSORTEODATE BETWEEN TO_DATE('01-01-2004','DD-MM-YYYY')
  4  AND TO_DATE('31-01-2004','DD-MM-YYYY')
  5  AND R.RECTRNID = T.TRNID
  6  GROUP BY (R.RECNUMBER)  ORDER BY 2 DESC)
  7* WHERE ROWNUM <6
SQL> /

RECNUMBER    APUESTAS
---------- ----------
1362            10896
327              7966
315              6081
362              5282
321              5054

Elapsed: 00:00:20.05

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=16195 Card
          =5 Bytes=227420)

   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=16195 Card=11371 Bytes=227420)
   3    2       SORT (ORDER BY STOPKEY) (Cost=16195 Card=11371 Bytes=2
          84275)

   4    3         SORT (GROUP BY) (Cost=16195 Card=11371 Bytes=284275)
   5    4           HASH JOIN (Cost=14991 Card=223186 Bytes=5579650)
   6    5             INDEX (FAST FULL SCAN) OF 'INDX_CMPID_SRTDATE' (
          NON-UNIQUE) (Cost=2918 Card=70301 Bytes=984214)

   7    5             INDEX (FAST FULL SCAN) OF 'INDX_FK_RECTRNID' (NO
          N-UNIQUE) (Cost=6801 Card=15004857 Bytes=165053427)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
    1132335  consistent gets
          0  physical reads
          0  redo size
        522  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed

As you can see, the consistent gets are very high in both cases. But when using FIRST_ROWS the physical reads come down to 0. So are the consistent gets high because of the COUNT? The stktransaction table has 4,752,646 records and the stkrecord table has twice that. Is there a better way to do this query? We need to generate the top 5 only.
If any other data is required, I will be glad to provide it. Waiting for your response.

Tom Kyte
May 06, 2004 - 4:53 pm UTC

more likely it is because the first query read the data into the cache and the second query then ran against the cache (if not, use tkprof; it'll show us where the PIOs are occurring, and we'd need that to say anything more)

not knowing what indexes are on what tables.... or how many records would actually be returned by the variable predicates (eg: how many rows actually fall in that "between" range there)....


If this were a query I needed to run lots and lots and lots -- i might (MIGHT, really -- only MIGHT) look to changing the model a tad so that we aggregate on the insert/update of rectrnid/delete on R (maintaining a running total).

If this isn't something I needed to run lots and lots -- guess I'd be going with what you have (unless there is some relationship between rectrnid and trnsorteodate -- eg: bigger rectrnid implies bigger date or something like that).


Multiple sub query

Thiru, June 17, 2004 - 4:04 pm UTC

Tom,
I am getting lost with this update.
table a:

tid cash key

100 1999.00 U
100 -99.00 G


table b: ( id is unique)

id tid

200 100

table c:

id cash key
200 9.00 U
200 -1.00 G

I need to update table c with cash values from table a, using table b for getting
the id value.

Result expected:

table c:

id cash key

200 1999.00 U
200 -99.00 G


My trial: ( Looks horrible!)

update c set cash = ( select cash from a where a.tid =(select b.tid from b,c
where b.id = c.id and a.key=c.key) ;


Tom Kyte
June 17, 2004 - 6:29 pm UTC

no create tables.....
no insert into's.....

hmmmmmm

excuse me.. Here it is.

A reader, June 17, 2004 - 11:04 pm UTC

table a:

tid cash key

100 1999.00 U
100 -99.00 G


create table a (tid number,cash number(10,2),key char(1));

insert into a values(100,1999.00,'U');
insert into a values(100,-99.00,'G');
commit;

table b: ( id is unique)

id tid

200 100

Create table b (id number unique,tid number);
insert into b values(200,100);
commit;

table c:

id cash key
200 9.00 U
200 -1.00 G

create table c(id number,cash number(10,2),key char(1));
insert into c values(200,9.00,'U');
insert into c values(200,-1.00,'G');
commit;

I need to update table c with cash values from table a, using table b for getting
the id value.

Result expected:

table c:

id cash key

200 1999.00 U
200 -99.00 G




Tom Kyte
June 18, 2004 - 10:31 am UTC

sorry if that annoyed you, I'm just trying to figure out how to make this text more clear, it doesn't seem to be working.  It is the text you read before submitting comment:

Please limit this to a comment or question relevant to the other text on this page.  Please do not try to start a new thread of discussion here (I will ignore any followups that are not related to the discussion on this page). Also if your followup includes an example you want me to look at, I'll need it to have a create table, insert into statements and such that I can easily cut and paste into sqlplus myself (like I give you) in order to play with. I spent too many hours turning something like:

I have a table like:

scott@ORA9IR2> desc dept
 Name                                 Null?    Type
 ------------------------------------ -------- -------------------------
 DEPTNO                                        NUMBER(2)
 DNAME                                         VARCHAR2(14)
 LOC                                           VARCHAR2(13)

with the data:
 
scott@ORA9IR2> select * from dept;
 
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

I *need* to have

I have a table:

create table dept( deptno number(2) primary key, dname varchar2(14), loc varchar2(13) );

with this data:

insert into dept values ( 10, 'accounting', 'new york' );
....

and please -- NO tablespaces/storage clauses, etc. simple, concise examples!

And the problem was you gave me exactly what I said "don't do".  If anyone has ideas on better wording -- please feel free to contribute.


I'm looking for that sort of stuff (create table, inserts) before I even read the question anymore.



ops$tkyte@ORA9IR2> select * from c;
 
        ID       CASH K
---------- ---------- -
       200          9 U
       200         -1 G
 
ops$tkyte@ORA9IR2> update c
  2     set cash = ( select a.cash
  3                    from a,b
  4                   where b.id = c.id
  5                     and b.tid = a.tid
  6                     and a.key = c.key )
  7  /
 
2 rows updated.
 
ops$tkyte@ORA9IR2> select * from c;
 
        ID       CASH K
---------- ---------- -
       200       1999 U
       200        -99 G
 
ops$tkyte@ORA9IR2>

 

great

sapna chhetri, June 18, 2004 - 6:42 am UTC

The queries are resolved nicely, but I don't have much of an idea about PL/SQL. Kindly suggest how to learn it in a short period.

sapna

Tom Kyte
June 18, 2004 - 10:50 am UTC

read?
practice?

just like learning how to write C, or Java, or VB, or SQL, or <technology X>

SQL Query

Sreedhar, June 22, 2004 - 5:52 am UTC

Tom,

I have the script below, which gives the values for the different datebands entered in a master table. I am looking for output that can be produced with a SQL query alone, instead of going for a PL/SQL program.

Please advise.

-----------------------------------------------------------

SCRIPT :

prompt Created on 22 June 2004 by Sreedhar

set feedback off
set define off

create table CRP_T_PRICE
(
  DATEBAND_FROM          DATE,
  DATEBAND_TO            DATE,
  PRICE                 NUMBER
);

prompt Loading CRP_T_PRICE Table...

insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('01-03-2004', 'dd-mm-yyyy'), to_date('31-03-2004', 'dd-mm-yyyy'), 0);

insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('01-04-2004', 'dd-mm-yyyy'), to_date('11-05-2004', 'dd-mm-yyyy'), 0);

insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('12-05-2004', 'dd-mm-yyyy'), to_date('11-06-2004', 'dd-mm-yyyy'), 6522);

insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('12-06-2004', 'dd-mm-yyyy'), to_date('19-06-2004', 'dd-mm-yyyy'), 6522);

insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('20-06-2004', 'dd-mm-yyyy'), to_date('31-07-2004', 'dd-mm-yyyy'), 6885);

insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('01-08-2004', 'dd-mm-yyyy'), to_date('31-10-2004', 'dd-mm-yyyy'), 6522);

Commit;


set feedback on
set define on
prompt Done.

-----------------------------------------------------------


SQL> SELECT * FROM CRP_T_PRICE;


DATEBAND_FROM    DATEBAND_TO    PRICE
01/03/2004    31/03/2004    0
01/04/2004    11/05/2004    0
12/05/2004    11/06/2004    6522
12/06/2004    19/06/2004    6522
20/06/2004    31/07/2004    6885
01/08/2004    31/10/2004    6522


OUTPUT I AM LOOKING FOR IS ............


DATEBAND_FROM    DATEBAND_TO    PRICE
01/03/2004    11/05/2004    0
12/05/2004    19/06/2004    6522
20/06/2004    31/07/2004    6885
01/08/2004    31/10/2004    6522 

Tom Kyte
June 22, 2004 - 8:48 am UTC

ops$tkyte@ORA9IR2> select min(dateband_from), max(dateband_to), price
  2    from (
  3  select dateband_from, dateband_to, price,
  4         max(rn) over (order by dateband_from) grp
  5    from (
  6  select dateband_from, dateband_to, price,
  7         case when lag(price) over (order by dateband_from) <> price or
  8                   row_number() over (order by dateband_from) = 1
  9              then row_number() over (order by dateband_from)
 10          end rn
 11    from crp_t_price
 12         )
 13         )
 14   group by grp, price
 15   order by 1
 16  /
 
MIN(DATEB MAX(DATEB      PRICE
--------- --------- ----------
01-MAR-04 11-MAY-04          0
12-MAY-04 19-JUN-04       6522
20-JUN-04 31-JUL-04       6885
01-AUG-04 31-OCT-04       6522
 
ops$tkyte@ORA9IR2>


see
https://asktom.oracle.com/Misc/oramag/on-format-negation-and-sliding.html
"analytics to the rescue"

this is a technique i call "carry forward" -- the max() trick.  we simply mark the beginning of whatever defines our "group" with row_number() (and if we need to carry forward something else -- row_number() || something_else) and use max to carry it forward.

suggest you run the query bit by bit to see what it does (run the inline views one by one, building on each one -- that is how I develop them).
 

very neat!

A reader, June 22, 2004 - 8:50 pm UTC

I have a quick question. Given the data below:
----
scott@ORA10G> drop table t1;

Table dropped.

scott@ORA10G> create table t1
2 (
3 x number,
4 y number,
5 z number
6 );

Table created.

scott@ORA10G>
scott@ORA10G> insert into t1 values( 1, 2, 3 );

1 row created.

scott@ORA10G> insert into t1 values( 2, 2, 10 );

1 row created.

scott@ORA10G> insert into t1 values( 3, 5, 5 );

1 row created.

scott@ORA10G> commit;

Commit complete.

scott@ORA10G>
scott@ORA10G> select * from t1;

X Y Z
---- ---- ----
1 2 3
2 2 10
3 5 5
------

I want to get all the column values along with average
of each column for all rows. Since the average
will be the same in this case in all rows, is
it possible to not repeat the average values?
Apart from the network traffic saved (say from a
JDBC client query), do you see any other pros/cons
with this approach versus writing 2 different queries
(one for the column and one for the average values)?

I used your "carry forward" trick for this (see below)
but you may have a simpler solution in mind:)

---

scott@ORA10G> column x format 999
scott@ORA10G> column y format 999
scott@ORA10G> column z format 999
scott@ORA10G> column avg_x format 999
scott@ORA10G> column avg_y format 999
scott@ORA10G> column avg_z format 999
scott@ORA10G>
scott@ORA10G> select x, y, z, decode( rn, 1, avg_x, null) avg_x,
2 decode( rn, 1, avg_y, null) avg_y,
3 decode( rn, 1, avg_z, null) avg_z
4 from
5 (
6 select x, y, z, avg_x, avg_y, avg_z,
7 case when avg_x != lag_avg_x or
8 row_num = 1 then
9 row_number() over (order by x, y, z)
10 end rn
11 from
12 (
13 select x, y, z, avg_x, avg_y, avg_z,
14 lag( avg_x) over(order by x, y, z) lag_avg_x,
15 lag( avg_y) over(order by x, y, z) lag_avg_y,
16 lag( avg_z) over(order by x, y, z) lag_avg_z,
17 row_number() over ( order by x, y, z ) row_num
18 from
19 (
20 select x, y, z,
21 avg(x) over() avg_x,
22 avg(y) over() avg_y,
23 avg(z) over() avg_z
24 from t1
25 )
26 ))
27 ;

X Y Z AVG_X AVG_Y AVG_Z
---- ---- ---- ----- ----- -----
1 2 3 2 3 6
2 2 10
3 5 5




Tom Kyte
June 22, 2004 - 10:35 pm UTC

ops$tkyte@ORA10G> select avg(x), avg(y), avg(z),
  2         decode( grouping(x), 1, '<<<== avg' )
  3    from t1
  4   group by grouping sets((x,y,z),())
  5  /
 
    AVG(X)     AVG(Y)     AVG(Z) DECODE(GR
---------- ---------- ---------- ---------
         1          2          3
         2          2         10
         3          5          5
         2          3          6 <<<== avg
 

thanx!!

A reader, June 23, 2004 - 9:54 am UTC

something new to learn!:) yet again!:)

grouping set

A reader, June 23, 2004 - 10:34 am UTC

Hi tom
in the above select what does
"group by grouping sets((x,y,z),())"

mean? In particular, what does the "()" imply?
I notice that if you remove it we don't get the
last row (the one with the average).

Thanx!

Tom Kyte
June 23, 2004 - 11:15 am UTC

getting the average of x,y,z by x,y,z is the meaning of (x,y,z) (eg: no grouping really, the "details")

getting the average of x,y,z by "nothing" is the meaning of () (eg: like no group by at all).

scott@ORA9IR2> select avg(sal) from emp group by sal;

AVG(SAL)
----------
4050
4809.38
5568.75
6328.13
6581.25
7593.75
8100
12403.13
14428.13
15060.95
15187.5
25312.5

12 rows selected.

scott@ORA9IR2> select avg(sal) from emp /* group by 'nothing' */;

AVG(SAL)
----------
10495.65


it just did both of those.
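Roughly, that GROUPING SETS query computes the same thing as two plain aggregations glued together (a sketch only -- the real GROUPING SETS plan makes a single pass over t1 and also lets you use GROUPING() to label the total row):

select avg(x), avg(y), avg(z) from t1 group by x, y, z
union all
select avg(x), avg(y), avg(z) from t1;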

ok

A reader, June 23, 2004 - 10:36 am UTC

I guess "group by grouping sets((x,y,z),())"

means "union the result of
group by x, y, z" with that obtained by no group by at all.
Is that correct?

thanx!


Tom Kyte
June 23, 2004 - 11:20 am UTC

see above

thanx!

A reader, June 23, 2004 - 11:58 am UTC

that makes it clear!

another similar question

A reader, June 23, 2004 - 7:56 pm UTC

hi tom
another question with a suggested solution - let me know
if you have an alternative solution.
same schema as above -this time I want to calculate
the result of a function that takes the minimum of
all x, y, z values and max of all x, y, z values and
then select this result along with all x, y, z values.

Following is my code with the schema and a solution..
thanx!
---
scott@ORA10G> set echo on
scott@ORA10G> set head on
scott@ORA10G> drop table t1;

Table dropped.

scott@ORA10G> create table t1
2 (
3 x number,
4 y number,
5 z number
6 );

Table created.

scott@ORA10G>
scott@ORA10G> insert into t1 values( 1, 2, 3 );

1 row created.

scott@ORA10G> insert into t1 values( 2, 2, 10 );

1 row created.

scott@ORA10G> insert into t1 values( 2, 2, 9 );

1 row created.

scott@ORA10G> insert into t1 values( 3, 5, 5 );

1 row created.

scott@ORA10G> commit;

Commit complete.

scott@ORA10G>
scott@ORA10G> select * from t1;

X Y Z
---- ---- ----
1 2 3
2 2 10
2 2 9
3 5 5

scott@ORA10G>
scott@ORA10G> column x format 999
scott@ORA10G> column y format 999
scott@ORA10G> column z format 999
scott@ORA10G> column max_x format 999.99
scott@ORA10G> column max_y format 999.99
scott@ORA10G> column max_z format 999.99
scott@ORA10G> column max format 999.99
scott@ORA10G> column details format 999.99
scott@ORA10G>
scott@ORA10G> select x, y, z
2 from t1;

X Y Z
---- ---- ----
1 2 3
2 2 10
2 2 9
3 5 5

scott@ORA10G>
scott@ORA10G> /* a function that calculates something based on the
scott@ORA10G> the max and min of all x,y,z each.
scott@ORA10G>
scott@ORA10G> */
scott@ORA10G> create or replace function f( max_x_y_z number, min_x_y_z number )
2 return varchar2
3 is
4 begin
5 -- some computation
6 return 'TESTING';
7 end;
8 /

Function created.

scott@ORA10G> show errors;
No errors.
scott@ORA10G> column f_result format a10
scott@ORA10G> /* we need the result of x, y, z, rows alongwith the
scott@ORA10G> result of the above function f()
scott@ORA10G> */
scott@ORA10G>
scott@ORA10G> select x, y, z, f( max_all, min_all ) f_result
2 from
3 (
4 select x, y, z, greatest( max_x, max_y, max_z) max_all,
5 least ( min_x, min_y, min_z ) min_all
6 from
7 (
8 select x, y, z,
9 max(x) over() max_x,
10 max(y) over() max_y,
11 max(z) over() max_z,
12 min(x) over() min_x,
13 min(y) over() min_y,
14 min(z) over() min_z
15 from t1
16 )
17 );

X Y Z F_RESULT
---- ---- ---- ----------
1 2 3 TESTING
2 2 10 TESTING
2 2 9 TESTING
3 5 5 TESTING

scott@ORA10G> spool off

Tom Kyte
June 24, 2004 - 9:12 am UTC

we can certainly simplify it a bit and given that the f(a,b) is constant for all rows in the query -- optimize it a bit as well:

ops$tkyte@ORA9IR2> create or replace function f(a in number, b in number) return varchar2
  2  as
  3  begin
  4      dbms_application_info.set_client_info( userenv('client_info')+1 );
  5      return 'testing';
  6  end;
  7  /
 
Function created.

that'll let us count how often the function is called:

 
ops$tkyte@ORA9IR2> column f format a10
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_application_info.set_client_info( 0 )
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select x, y, z, f( min_xyz, max_xyz ) f
  2    from (
  3  select x, y, z,
  4         min(least(x,y,z)) over () min_xyz,
  5         max(greatest(x,y,z)) over () max_xyz
  6    from t1
  7         )
  8  /
 
         X          Y          Z F
---------- ---------- ---------- ----------
         1          2          3 testing
         2          2         10 testing
         2          2          9 testing
         3          5          5 testing
 
ops$tkyte@ORA9IR2> select userenv('client_info') from dual;
 
USERENV('CLIENT_INFO')
----------------------------------------------------------------
4


That shows the simplification, but also shows that the function was called 4 times. We can use a scalar subquery here to reduce that to one in this case (since the inputs to the function are invariant).

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_application_info.set_client_info( 0 )
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select x, y, z, (select f( min_xyz, max_xyz )  from dual) f
  2    from (
  3  select x, y, z,
  4         min(least(x,y,z)) over () min_xyz,
  5         max(greatest(x,y,z)) over () max_xyz
  6    from t1
  7         )
  8  /
 
         X          Y          Z F
---------- ---------- ---------- ----------
         1          2          3 testing
         2          2         10 testing
         2          2          9 testing
         3          5          5 testing
 
ops$tkyte@ORA9IR2> select userenv('client_info') from dual;
 
USERENV('CLIENT_INFO')
----------------------------------------------------------------
1
 
 

awesome!!

A reader, June 24, 2004 - 10:45 am UTC

Thank you!! I liked the simplification a lot and
also the scalar subquery idea!!


one question

A reader, June 24, 2004 - 10:50 am UTC

Hi Tom
Could you kindly explain why the scalar subquery
trick reduces the function calls from 4 to 1?

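In short: Oracle caches scalar subquery results keyed on the subquery's inputs, so when every row supplies the same (min_xyz, max_xyz) pair the subquery -- and therefore f() -- only has to run once. A small sketch to see the effect, reusing f() and the client_info counter from above (two distinct inputs, so typically two calls):

exec dbms_application_info.set_client_info( 0 )

select x, (select f( x, x ) from dual) f
  from ( select mod(rownum,2) x from all_objects where rownum <= 10 );

select userenv('client_info') from dual;   -- typically shows 2
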

thank you!!!

A reader, June 24, 2004 - 2:54 pm UTC


A reader, June 30, 2004 - 6:20 am UTC

Tom,

Thanks, very useful to me.

I have one doubt: how do I limit the result that you get with analytics to the first record, or to the first 10 records, the way we would say rownum < 2 or < 10?

i tried this ..

SQL> SELECT month_num,year_num,
  2  MIN(month_num) OVER(PARTITION BY year_num ORDER BY month_num) as p_cmin
  3  --RANGE UNBOUNDED PRECEDING) as p_cmin
  4  FROM tb_etr003_geci_etmon_financial a
  5  where a.type_nam='Project Space'
  6                     AND object_nam = 'Development of DEQ coil manuf'
  7                     AND revision_nam='2910885710716221'
  8                     AND relobj_type_nam='Benefit Details';

 MONTH_NUM   YEAR_NUM     P_CMIN
---------- ---------- ----------
        10       2004         10
        10       2004         10
        10       2004         10
        10       2004         10
        11       2004         10
        11       2004         10
        11       2004         10
        11       2004         10
        12       2004         10
        12       2004         10
        12       2004         10

 MONTH_NUM   YEAR_NUM     P_CMIN
---------- ---------- ----------
        12       2004         10
         1       2005          1
         1       2005          1
         1       2005          1
         1       2005          1

I have multiple values for 10 2004 (the first 4 records). How do I get only the first one, and how do I get the first 4 (here, since they have the same value)?

RANGE UNBOUNDED PRECEDING -- what does it mean? Where can I see this explained, or can you explain it in a few quick words?

thanks
 

Tom Kyte
June 30, 2004 - 10:17 am UTC

select * from (
SELECT month_num,year_num,
row_number() over (partition by year_num order by month_num) RN
FROM tb_etr003_geci_etmon_financial a
where a.type_nam='Project Space'
AND object_nam = 'Development of DEQ coil manuf'
AND revision_nam='2910885710716221'
AND relobj_type_nam='Benefit Details'
)
where rn = 1;


you can either read the chapter on analytics in my book "Expert One on One Oracle" if you have it, or check out the Data Warehousing Guide (freely available on otn.oracle.com) to find out what analytics are all about.

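If the goal is to keep every row tied for the first month (the "first 4" case above), a sketch of one common variation -- not from the original answer -- is to use rank() instead of row_number(), so that ties share the same number:

select *
  from (
SELECT month_num, year_num,
       rank() over (partition by year_num order by month_num) RNK
  FROM tb_etr003_geci_etmon_financial a
 where a.type_nam='Project Space'
   AND object_nam = 'Development of DEQ coil manuf'
   AND revision_nam='2910885710716221'
   AND relobj_type_nam='Benefit Details'
       )
 where rnk = 1;
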


simple question

A reader, June 30, 2004 - 1:40 pm UTC

Hi Tom
Please consider the following schema:
---
scott@ORA92I> drop table t;

Table dropped.

scott@ORA92I> create table t
2 ( x number,
3 y date
4 );

Table created.

scott@ORA92I>
scott@ORA92I> insert into t
2 select rownum x, sysdate
3 from all_objects
4 where rownum <= 10;

10 rows created.

scott@ORA92I>
scott@ORA92I> -- want to set the value of y to descending order of
scott@ORA92I> -- date based on the order of x values - e.g. the largest
scott@ORA92I> -- value of x should correspond to today, the second
scott@ORA92I> -- largest to yesterday and so on.
scott@ORA92I> update ( select x, y, rownum from t order by x )
2 set y = sysdate - rownum;
update ( select x, y, rownum from t order by x )
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

----

How can I update the table t so that the largest value of
x has y as sysdate, the second largest has y as sysdate-1
and so on... This would be done through rownum, but I am not sure
how.

Thanx for a great site!!

Tom Kyte
June 30, 2004 - 2:03 pm UTC

ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t
  2  ( x number PRIMARY KEY,
  3    y date
  4  );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t
  2  select rownum x, sysdate
  3  from all_objects
  4  where rownum <= 10;
 
10 rows created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> merge into t a
  2  using ( select x, rownum-1 r
  3            from ( select *
  4                     from t
  5                    order by x) ) b
  6  on (a.x = b.x)
  7  when matched then update set y = sysdate-r
  8  when not matched then insert (x) values(null)
  9  /
 
10 rows merged.


the when not matched will never happen of course. 

thank you!!

A reader, June 30, 2004 - 2:28 pm UTC


sql query question

A reader, July 02, 2004 - 1:13 pm UTC

Consider the following schema..
----
scott@ORA92I> set head on
scott@ORA92I> set echo on
scott@ORA92I> drop table t;

Table dropped.

scott@ORA92I> create table t
2 (
3 x number,
4 y number,
5 dt date
6 );

Table created.

scott@ORA92I>
scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/01/2004', 'mm/dd/yyyy') );

1 row created.

scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/02/2004', 'mm/dd/yyyy') );

1 row created.

scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/03/2004', 'mm/dd/yyyy') );

1 row created.

scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/05/2004', 'mm/dd/yyyy') );

1 row created.

scott@ORA92I>
scott@ORA92I> commit;

Commit complete.

scott@ORA92I>
scott@ORA92I> column x format 99999
scott@ORA92I> column y format 99999
scott@ORA92I> alter session set nls_date_format='mm/dd/yyyy';

Session altered.

scott@ORA92I>
scott@ORA92I> select * from t;

X Y DT
------ ------ ----------
100 200 06/01/2004
100 200 06/02/2004
100 200 06/03/2004
100 200 06/05/2004

I want to get an output of
X Y DT
------ ------ ----------
100 200 06/01/2004
100 200 06/02/2004
100 200 06/03/2004
0 0 06/04/2004 <--- extra row for the missing day
100 200 06/05/2004

Is it possible?

Thanx!

Tom Kyte
July 02, 2004 - 2:34 pm UTC

you need to generate a set of those dates:

(
select to_date( '....starting date....' ) + rownum-1 DT
from all_objects
where rownum <= NUMBER_OF_DAYS_YOU_ARE_INTERESTED_IN
) dates

and outer join your query to that

where dates.dt = t.dt(+)


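Putting those pieces together against the table t above -- a minimal sketch, assuming a fixed five-day window starting 06/01/2004:

select nvl(t.x,0) x, nvl(t.y,0) y, dates.dt
  from ( select to_date('06/01/2004','mm/dd/yyyy')+rownum-1 dt
           from all_objects
          where rownum <= 5 ) dates,
       t
 where dates.dt = t.dt(+)
 order by dates.dt;

The outer join supplies a row for every generated date; nvl() turns the missing measures into zeros.
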

thanx!

A reader, July 02, 2004 - 3:41 pm UTC


sql query

A reader, July 26, 2004 - 4:14 pm UTC

Hi tom
Following is the schema:
scott@ORA10G>
scott@ORA10G> drop table t1;

Table dropped.

scott@ORA10G> create table t1
2 (
3 name varchar2(30),
4 type varchar2(30),
5 value number
6 );

Table created.

scott@ORA10G>
scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type1', 1 );

1 row created.

scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type1', 2 );

1 row created.

scott@ORA10G> insert into t1( name, type, value )
2 values( 'name2', 'type1', 3 );

1 row created.

scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type1', 5 );

1 row created.

scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type2', 5 );

1 row created.

scott@ORA10G> commit;

Commit complete.

scott@ORA10G>
scott@ORA10G> column name format a15
scott@ORA10G> column type format a10
scott@ORA10G> column value format 999
scott@ORA10G> column shared_count format 999
scott@ORA10G> select name, type, value
2 from t1;

NAME TYPE VALUE
--------------- ---------- -----
name1 type1 1
name1 type1 2
name2 type1 3
name1 type1 5
name1 type2 5

----------------------------------
Now the required result is as shown below..
I need to find the count of cases where columns
name and type have same value and then display
them along side the above results as:
scott@ORA10G> NAME TYPE VALUE SHARED_COUNT
scott@ORA10G> --------------- ---------- -----
scott@ORA10G> name1 type1 1 3
scott@ORA10G> name1 type1 2 3
scott@ORA10G> name2 type1 3 1
scott@ORA10G> name1 type1 5 3
scott@ORA10G> name1 type2 5 1

------------

My answer to this is a little convoluted - I thought
you may be able to give a better answer...
Let me know..
Following is the way I did it..
-----
scott@ORA10G> select name, type, value,
2 max( shared_count ) over( partition by name, type order by name, type)
3 shared_count
4 from
5 (
6 select name, type, value,
7 row_number()
8 over( partition by name, type order by name, type) shared_count
9 from t1
10 order by name, type
11 );

NAME TYPE VALUE SHARED_COUNT
--------------- ---------- ----- ------------
name1 type1 1 3
name1 type1 2 3
name1 type1 5 3
name1 type2 5 1
name2 type1 3 1

------------

Thank you!
--


Tom Kyte
July 26, 2004 - 5:21 pm UTC

ops$tkyte@ORA9IR2> l
  1  select name, type, value, count(*) over (partition by name, type) cnt
  2    from t1
  3* order by name, type
ops$tkyte@ORA9IR2> /
 
NAME   TYPE        VALUE        CNT
------ ------ ---------- ----------
name1  type1           1          3
name1  type1           2          3
name1  type1           5          3
name1  type2           5          1
name2  type1           3          1
 
 

thanx!

A reader, July 26, 2004 - 9:40 pm UTC


SQL Query

A reader, July 27, 2004 - 1:50 am UTC

Hi Tom,

In order to get the nth maximum salary, which approach is good in terms of performance and efficiency?

1. SELECT MIN(sal)
FROM ( SELECT sal
FROM ( SELECT DISTINCT sal
FROM emp
ORDER BY sal
DESC )
WHERE ROWNUM <= &nthmaxsalary );

2. SELECT MAX(sal)
FROM emp
WHERE LEVEL = &nthmaxsalary
CONNECT BY PRIOR sal > sal;

3. SELECT DISTINCT t.sal
FROM ( SELECT sal, DENSE_RANK()
OVER (ORDER BY sal DESC) salrank
FROM emp)t
WHERE t.salrank = &nthmaxsalary ;


Thxs in advance

Tom Kyte
July 27, 2004 - 7:16 am UTC

#3 would be my first choice.

another sql query

A reader, July 27, 2004 - 1:02 pm UTC

hi tom
slight change in my requirements:
----
schema:
scott@ORA10G> drop table t1;

Table dropped.

scott@ORA10G> create table t1
2 (
3 name varchar2(30),
4 type varchar2(30),
5 id number,
6 value number
7 );

Table created.

scott@ORA10G>
scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name1', 'type1', 1, 1 );

1 row created.

scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name1', 'type1', 2, 2 );

1 row created.

scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name2', 'type1', 3, 3 );

1 row created.

scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name1', 'type2', 4, 5 );

1 row created.

scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name2', 'type1', 1, 5 );

1 row created.

scott@ORA10G> commit;

Commit complete.

scott@ORA10G>
scott@ORA10G> column name format a15
scott@ORA10G> column type format a10
scott@ORA10G> column value format 999
scott@ORA10G> column id format 999
scott@ORA10G> column shared_count format 999
scott@ORA10G> set head on
scott@ORA10G> select name, type, id, value
2 from t1;

NAME TYPE ID VALUE
--------------- ---------- ---- -----
name1 type1 1 1
name1 type1 2 2
name2 type1 3 3
name1 type2 4 5
name2 type1 1 5

----

The required results:
---
scott@ORA10G> NAME TYPE ID VALUE SHARED_COUNT
scott@ORA10G> --------------- ---------- ---- ----- -----------
scott@ORA10G> name1 type1 1 1 1
scott@ORA10G> name1 type1 2 2 2
scott@ORA10G> name2 type1 3 3 1
scott@ORA10G> name1 type2 4 5 1
scott@ORA10G> name2 type1 1 5 2
scott@ORA10G>
scott@ORA10G>

---
Basically
Count rows such that different names of the same type have the same ID. For example, in the above case, count for row # 2 and 5 is 2 because for the same type type1, there are two different names - name1 and name2 that share the same ID of 1.

Thank you!


ok - i think i got it..

A reader, July 27, 2004 - 1:05 pm UTC

scott@ORA10G> select name, type, id, value,
2 count(*) over( partition by type, id order by type, id ) shared_count
3 from t1;

NAME TYPE ID VALUE SHARED_COUNT
--------------- ---------- ---- ----- ------------
name1 type1 1 1 2
name2 type1 1 5 2
name1 type1 2 2 1
name2 type1 3 3 1
name1 type2 4 5 1
----
lemme know if this is what you would have suggested..
thanx!


oops - spoke too soon!

A reader, July 27, 2004 - 1:09 pm UTC

The above solution does not take care of
the case when you have another row
that has name1, type1 and the same global id :(

Tom Kyte
July 27, 2004 - 2:08 pm UTC

a verbatim reading of:

Count rows such that different names of the same type have the same ID. 

results in this query:

ops$tkyte@ORA9IR2> select name, type, id, value, count(distinct name) over (partition by type, id) cnt
  2    from t1;
 
NAME                           TYPE             ID      VALUE        CNT
------------------------------ -------- ---------- ---------- ----------
name1                          type1             1          1          2
name2                          type1             1          5          2
name1                          type1             2          2          1
name2                          type1             3          3          1
name1                          type2             4          5          1
 

thanx !!!

A reader, July 27, 2004 - 2:40 pm UTC

that works perfectly!:)

Have a great day!:)

Thiru, August 05, 2004 - 11:46 am UTC

stuck with this query:

create table tt1( a number,b varchar2(10));
create table tt2( a number,c varchar2(10));

insert into tt1 values(100,'abc');
insert into tt1 values(200,'abc');
insert into tt1 values(200,'abc');
insert into tt1 values(200,'bcd');

insert into tt2 values(100,'abc');
insert into tt2 values(100,'bcd');


output reqd: ( combining the values from both tables)

t1.a t1.b
-----------

600 abc
300 bcd

select tt1.b,sum(tt1.a+tt2.a)
from tt1,tt2
where tt1.b=tt2.c
group by tt1.b;

gives

B SUM(TT1.A+TT2.A)
---------- ----------------
abc 800
bcd

Tom Kyte
August 05, 2004 - 1:12 pm UTC

ops$tkyte@ORA9IR2> select sum(a), x
  2    from (select sum(a) a, b x from tt1 group by b
  3          union all
  4          select sum(a) a, c x from tt2 group by c)
  5   group by x
  6  /
 
    SUM(A) X
---------- ----------
       600 abc
       300 bcd
 

A reader, August 05, 2004 - 12:55 pm UTC

stuck with this query:

create table tt1(id number,sdate date,edate date,amt number,flag char(1));

create table tt2(id number,sdate date,edate date,amt number);


insert into tt1 values(1,'01-AUG-04','31-AUG-04',100,NULL);
insert into tt1 values(2,'01-AUG-04','31-AUG-04',100,NULL);
insert into tt1 values(3,'02-AUG-04','31-AUG-04',100,'Y');
insert into tt1 values(4,'02-AUG-04','31-AUG-04',100,'Y');
insert into tt1 values(5,'03-AUG-04','31-AUG-04',100,NULL);

insert into tt2 values(3,'02-AUG-04','31-AUG-04',200);
insert into tt2 values(4,'02-AUG-04','31-AUG-04',200);


SHOULD BE GROUPED BY TT1.SDATE,TT1.EDATE AND
(IF TT1.FLAG='Y' THEN TAKE THE CORRESPONDING AMT FROM TT2 WHERE TT1.ID=TT2.ID AND TT1.SDATE=TT2.SDATE
AND TT1.EDATE = TT2.EDATE)

THE OUTPUT TO BE:


TT1.SDATE TT1.EDATE AMT

01-AUG-04 31-AUG-04 200
02-AUG-04 31-AUG-04 400
03-AUG-04 31-AUG-04 100


Tom Kyte
August 05, 2004 - 1:16 pm UTC

see above and see if you can adapt it to your needs..

Answer Using Scalar Subquery

Logan Palanisamy, August 06, 2004 - 8:07 pm UTC

  1  select sdate, edate,
  2  sum(decode(flag, null, amt, (select amt from tt2 where tt1.id = tt2.id))) total
  3  from tt1
  4* group by sdate, edate
SQL> /

SDATE     EDATE          TOTAL
--------- --------- ----------
01-AUG-04 31-AUG-04        200
02-AUG-04 31-AUG-04        400
03-AUG-04 31-AUG-04        100 

q on a scenario

A reader, August 09, 2004 - 6:09 pm UTC

hi tom
I have a scenario where I am generating a report based
on a period. The numbers involved are rolled up - so
we have a table where we store hourly data points,
another table where we store daily data points, etc.

I am writing a query where, if the data is in the daily points table, I pick the data from there - otherwise, I pick
it from the hourly data points table. I am writing
it as two queries. The first selects data from the
daily points table. If there are no records in it,
I select data from the hourly points table. Is this the
right approach or would you have done it
differently?

Thank you!

Tom Kyte
August 09, 2004 - 8:49 pm UTC

if you used a materialized view to roll the hourly up to daily -- you could write all of your queries against the details -- and when appropriate, the database would rewrite them against the aggregated daily data.......
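
Sketched very roughly (the table and column names here are assumed, not taken from the question), a daily materialized view with query rewrite enabled lets queries written against the hourly detail be answered from the daily aggregate automatically:

create materialized view daily_points_mv
  build immediate
  refresh complete on demand
  enable query rewrite
as
select trunc(sample_time) sample_day, metric_name,
       min(value) min_value, max(value) max_value,
       sum(value) sum_value, count(value) cnt
  from hourly_points
 group by trunc(sample_time), metric_name;

With query_rewrite_enabled set to true, a query such as "select trunc(sample_time), max(value) from hourly_points group by trunc(sample_time)" can be rewritten to read daily_points_mv instead of the detail table.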



thanx tom!

A reader, August 10, 2004 - 11:54 am UTC

unfortunately we are not in a position to use MVs,
as the infrastructure is already in place... Given that
is the reality, how would you approach this issue?

btw, how would the MVs know whether to go to the daily
data or, failing that, to the hourly data - is there a mechanism that
helps you with this in MVs?

Thanx!

Tom Kyte
August 10, 2004 - 3:42 pm UTC

in reality (this will shock you) i would use mv's -- as the in place infrastructure could just be "erased"


Yes, MV's are built to do this sort of stuff, check out the data warehousing guide. MV's are like the "indexes of your data warehouse". they are used to pre-aggregate, etl, pre-join, whatever data -- so you don't have to do that over and over in response to every single query.

thanx!

A reader, August 10, 2004 - 4:20 pm UTC

one reason I believe we dont have MVs is that
we need history of rollup points...


Tom Kyte
August 10, 2004 - 4:27 pm UTC

so? if you have a history of hourly points, you have a history of daily points in your mv's

I see...

A reader, August 10, 2004 - 5:14 pm UTC

good point :-) - so the way you would have done it is:
store history of points at the lowest granular level
(hourly)

1. The daily, weekly, monthly, quarterly, yearly
etc. would come from materialized views built on
top of the hourly table, correct?
2. I suppose stats like min, max, avg etc would
also come from MVs?

3. Assuming I have an MV based solution,
can you briefly explain how MVs can automatically
go to hourly if data does not exist for daily? Is
this one of the functionalities of MV?

thanx!

Tom Kyte
August 10, 2004 - 7:31 pm UTC

1) actually, daily on hourly, weekly on daily, monthly on weekly, and so on (you can have mv's of mv's in 9i)

2) absolutely

3) absolutely - you want to read the data warehousing guide (one chapter -- really, the one on MV's). much will be gleaned from that.

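For point (1), a rough sketch of an MV built on another MV (names again assumed, continuing the earlier sketch), aggregating the daily MV up to months:

create materialized view monthly_points_mv
  enable query rewrite
as
select trunc(sample_day,'mm') sample_month, metric_name,
       min(min_value) min_value, max(max_value) max_value,
       sum(sum_value) sum_value, sum(cnt) cnt
  from daily_points_mv
 group by trunc(sample_day,'mm'), metric_name;
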
OK

James, August 11, 2004 - 10:31 am UTC

Dear Tom,
Is there any other way to put this query?

sql> select * from t where x is not null and y is not null;

Please do reply.


Tom Kyte
August 11, 2004 - 1:12 pm UTC

sure, where y is not null and x is not null

:)

nvl
decode
case

come to mind immediately

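For instance, equivalent sketches of the decode and case variants (no better than the original predicate, just different ways to say it):

select * from t where decode(x,null,0,1) = 1 and decode(y,null,0,1) = 1;

select * from t
 where case when x is not null and y is not null then 1 else 0 end = 1;
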
thnax for answering my mv related questions above!:)

A reader, August 11, 2004 - 1:23 pm UTC


full outer join

thiru, August 11, 2004 - 2:17 pm UTC

Full outer join: Is this the right way? Looks like
I am missing something here. Please have a look.

create table p1 ( sdate date,ccy char(3),amt number);
create table p2 as select * from p1;
create table p3 (sdate date,ccy char(3),amt_p1 number,amt_p2 number);

insert into p1 values('16-AUG-04','BCD',100);
insert into p1 values('16-AUG-04','CDE',200);
insert into p1 values('16-AUG-04','ABC',300);
insert into p1 values('16-AUG-04','XYZ',300);

insert into p2 values('16-AUG-04','EFG',100);
insert into p2 values('16-AUG-04','CDE',200);
insert into p2 values('16-AUG-04','ABC',300);
insert into p2 values('16-AUG-04','BCD',100);

table p3 to be combined from p1 and p2: (output required)

sdate ccy amt_p1 amt_p2


16-AUG-04 ABC 300 300
16-AUG-04 BCD 100 100
16-AUG-04 CDE 200 200
16-AUG-04 EFG 0 100
16-AUG-04 XYZ 300 0

I used this query but the sdate and ccy are missing for one row ie., 'EFG' for p2.

SELECT P1.SDATE,P1.CCY,nvl(P1.AMT,0),nvl(P2.AMT,0)
FROM P2 FULL OUTER JOIN P1
ON P1.SDATE = P2.SDATE
AND P1.CCY = P2.CCY;

SDATE CCY AMT AMT
--------- --- ---------- ----------
16-AUG-04 ABC 300 300
16-AUG-04 BCD 100 100
16-AUG-04 CDE 200 200
0 100
16-AUG-04 XYZ 100 0

Tom Kyte
August 12, 2004 - 7:32 am UTC

well, if p1 is "made up" - its values are going to be null.

so, perhaps you want

nvl(p1.sdate,p2.sdate), nvl(p1.ccy,p2.ccy), ....

in the select list.

weeks do not always roll-up

Gabe, August 11, 2004 - 2:20 pm UTC

<quote>actuallhy, daily on hourly, weekly on daily, monthly on weekly, and so on</quote>

monthly on weekly? ... hmmm ... weeks do not always fit into months, or years for that matter. The concept of the "week" is quite murky ... use 'IW' or 'WW' format date masks and one gets different definitions ... with 'W' and 'WW' weeks don't always have 7 days ... it is true though that 'W'-weeks roll-up into months (and hence years) and 'WW'-weeks roll-up into years (but not to months).

The hierarchies are:
Hour >-- Day >-- Month >-- Quarter >-- Year
Hour >-- Day >-- Week ('IW' ISO)
Hour >-- Day >-- Week ('W') >-- Month >-- Quarter >-- Year
Hour >-- Day >-- Week ('WW') >-- Year


Tom Kyte
August 12, 2004 - 7:33 am UTC

ok ok, daily into monthly ;)

it was the fact that a mv can roll into an mv can roll into an mv i was trying to get across. you can aggregate the aggregates as needed.

"Full outerjoin" resolved for thiru

Logan Palanisamy, August 11, 2004 - 6:51 pm UTC

Thiru,

You need to NVL the first two columns also. 

 1  SELECT nvl(P1.SDATE, p2.sdate), nvl(P1.CCY, p2.ccy), nvl(P1.AMT,0),nvl(P2.AMT,0)
  2  FROM P2 FULL OUTER JOIN P1
  3  ON P1.SDATE  = P2.SDATE
  4* AND P1.CCY = P2.CCY
SQL> /

NVL(P1.SD NVL NVL(P1.AMT,0) NVL(P2.AMT,0)
--------- --- ------------- -------------
16-AUG-04 BCD           100           100
16-AUG-04 CDE           200           200
16-AUG-04 ABC           300           300
16-AUG-04 EFG             0           100
16-AUG-04 XYZ           300             0 

thiru, August 11, 2004 - 10:47 pm UTC

Thanks a lot.

great thread!

A reader, August 12, 2004 - 11:32 am UTC

"it was the fact that a mv can roll into an mv can roll into an mv i was trying
to get across. you can aggregate the aggregates as needed. "

In our case the hourly data is kept for 2 weeks, the
daily data for a month and the monthly for a year.
Does this have a bearing on an MV based solution
since a query will depend on the real data underlying
in the tables. In other words, how can we have mvs on
mvs with different "purging" periods?


Tom Kyte
August 12, 2004 - 11:40 am UTC

oh, well, that'll not work -- you are back into DIY land and you have to train your users to query the right data at the right time. MV's cannot have different "purge periods"

ok - thanx!

A reader, August 12, 2004 - 11:52 am UTC

I guess in cases where you have to do this (due to sheer volume of data) mvs are not possible. So in such a case
if we have, say hourly data to yearly data and if we want
an mv based solution, we will have to keep the hourly
data for a year for it to work, correct?

thank you!!

Tom Kyte
August 12, 2004 - 12:29 pm UTC

correct, you have the details, and summations of the details

thanx!

A reader, August 12, 2004 - 12:31 pm UTC


query

A reader, August 13, 2004 - 2:48 pm UTC

First the schema:

scott@ORA92I> drop table t1;

Table dropped.

scott@ORA92I> create table t1
2 (
3 x varchar2(10),
4 y number
5 );

Table created.

scott@ORA92I>
scott@ORA92I> insert into t1( x, y ) values ( 'A', 2 );

1 row created.

scott@ORA92I> insert into t1( x, y ) values ( 'A', 3 );

1 row created.

scott@ORA92I> insert into t1( x, y ) values ( 'B', 3 );

1 row created.

scott@ORA92I> --insert into t1( x, y ) values ( 0, 0 );
scott@ORA92I> --insert into t1( x, y ) values ( 0, 0 );
scott@ORA92I>
scott@ORA92I> select * from t1;

X Y
---------- ----------
A 2
A 3
B 3
------------------------------------
requirement is to get a flag indicating if
at least one record exists for x = 'A'
and so on. Possible values of x are
'A', 'B', 'C'; for the above data we
would get

A B C
- - -
Y Y N

my solution is:

scott@ORA92I> select max(decode( x, 'A', 'Y', 'N')) A_count,
2 max(decode( x, 'B', 'Y', 'N')) B_count,
3 max(decode( x, 'C', 'Y', 'N')) C_count
4 from t1;

A B C
- - -
Y Y N

the problem is I don't want to count beyond the first
record... Do you have a better solution?

thanx!

Tom Kyte
August 13, 2004 - 6:16 pm UTC

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select nvl((select 'Y' from dual where exists(select null from t1 where x='A')),'N') A,
  2         nvl((select 'Y' from dual where exists(select null from t1 where x='B')),'N') B,
  3         nvl((select 'Y' from dual where exists(select null from t1 where x='C')),'N') C
  4    from dual;
 
A B C
- - -
Y Y N
 
 

Thank you!

A reader, August 13, 2004 - 6:38 pm UTC


q on a query

A reader, August 30, 2004 - 7:15 pm UTC

First the schema and data
----
scott@ORA92I> set echo on
scott@ORA92I> set head on
scott@ORA92I> alter session set nls_date_format = 'YYYY-MM-DD HH:MI:SS';

Session altered.

scott@ORA92I> --schema
scott@ORA92I>
scott@ORA92I> drop table curr_t1;

Table dropped.

scott@ORA92I> create table curr_t1
2 (
3 a varchar2(10),
4 value number,
5 time_of_entry date
6 );

Table created.

scott@ORA92I>
scott@ORA92I> drop table daily_rollup_t1;

Table dropped.

scott@ORA92I> create table daily_rollup_t1
2 (
3 a varchar2(10),
4 maximum number,
5 minimum number,
6 average number,
7 sample_count number,
8 time_rolled_up date
9 );

Table created.

scott@ORA92I>
scott@ORA92I> insert into curr_t1( a, value, time_of_entry )
2 values( 'a', 1, sysdate );

1 row created.

scott@ORA92I> insert into curr_t1( a, value, time_of_entry )
2 values( 'b', 7, sysdate );

1 row created.

scott@ORA92I>
scott@ORA92I> commit;

Commit complete.

scott@ORA92I> select * from curr_t1;

A VALUE TIME_OF_ENTRY
---------- ---------- -------------------
a 1 2004-08-30 03:50:09
b 7 2004-08-30 03:50:09

scott@ORA92I>
scott@ORA92I> insert into daily_rollup_t1( a, maximum, minimum, average, sample_count,
2 time_rolled_up )
3 values( 'a', 10, 3, 5, 2, sysdate-1 );

1 row created.

scott@ORA92I> commit;

Commit complete.

scott@ORA92I>
scott@ORA92I> select *
2 from daily_rollup_t1;

A MAXIMUM MINIMUM AVERAGE SAMPLE_COUNT TIME_ROLLED_UP
---------- ---------- ---------- ---------- ------------ -------------------
a 10 3 5 2 2004-08-29 03:50:09

----------------



The first table, curr_t1, contains the current values corresponding to the column a (one row per unique value of a). The second table, daily_rollup_t1, contains the rolled-up values of the a column.

-- Now I want to get the average, max, min of all daily
-- rollup values including the value in curr_t1. The degenerate case would be when there are no records in daily_rollup_t1, in which case I will get the value in curr_t1 as the average, max, min, etc. There will be only one value of the "a" column, with the latest
date, in the table curr_t1.
In the above data, my output should be:
A AVERAGE MAXIMUM MINIMUM LATEST_VALUE
----------- ------- ------- ------- ------------
a 3.67 10 1 1
b 7 7 7 7

My attempt is given below - do you have any other better
alternatives in mind?

scott@ORA92I> select a,
2 sum_value/sum_sample_count as average,
3 maximum,
4 minimum,
5 last_known_value
6 from
7 (
8 select a,
9 sum( value ) sum_value,
10 sum( sample_count ) sum_sample_count,
11 max( maximum ) maximum,
12 min( minimum ) minimum,
13 max( last_known_value ) as last_known_value
14 from
15 (
16 select a, value as maximum,
17 value as minimum,
18 value as last_known_value,
19 1 as sample_count,
20 value as value
21 from curr_t1
22 union
23 select a, max( maximum), min(minimum),
24 max(0) as last_known_value,
25 sum(sample_count),
26 sum(average*sample_count) as value
27 from daily_rollup_t1
28 group by a
29 )
30 group by a
31 );

A AVERAGE MAXIMUM MINIMUM LAST_KNOWN_VALUE
---------- ---------- ---------- ---------- ----------------
a 3.66666667 10 1 1
b 7 7 7 7




Tom Kyte
August 30, 2004 - 7:35 pm UTC

using column "a" and value "a" and "b" in column "a" on a really long followup -- that we have to page up and down to read -- is really confusing :)

i don't see why you need to query anything other than the daily rollup? why not just "refresh" the daily rollup and query it directly?

sorry Tom!

A reader, August 31, 2004 - 11:52 am UTC

I am posting the same schema with more meaningful
schema names. If you did not intend me to post
this again, please accept my apologies and ignore this
query.
Also, the reason we need to query the current value
is that the daily rollup happens once a day, and this value is useful until the rollup has happened.
We don't use MVs because we need history and can't afford
(space-wise) to store all the data at the lowest level.
----
scott@ORA92I> set head on
scott@ORA92I> alter session set nls_date_format = 'YYYY-MM-DD HH:MI:SS';

Session altered.

scott@ORA92I> --schema
scott@ORA92I>
scott@ORA92I> drop table curr_t1;

Table dropped.

scott@ORA92I> create table curr_t1
2 (
3 info_type varchar2(10),
4 value number,
5 time_of_entry date
6 );

Table created.

scott@ORA92I>
scott@ORA92I> drop table daily_rollup_t1;

Table dropped.

scott@ORA92I> create table daily_rollup_t1
2 (
3 info_type varchar2(10),
4 maximum number,
5 minimum number,
6 average number,
7 sample_count number,
8 time_rolled_up date
9 );

Table created.

scott@ORA92I>
scott@ORA92I> insert into curr_t1( info_type, value, time_of_entry )
2 values( 'a', 1, sysdate );

1 row created.

scott@ORA92I> insert into curr_t1( info_type, value, time_of_entry )
2 values( 'b', 7, sysdate );

1 row created.

scott@ORA92I>
scott@ORA92I> commit;

Commit complete.

scott@ORA92I> select * from curr_t1;

INFO_TYPE VALUE TIME_OF_ENTRY
---------- ---------- -------------------
a 1 2004-08-31 08:33:04
b 7 2004-08-31 08:33:04

scott@ORA92I>
scott@ORA92I> insert into daily_rollup_t1( info_type, maximum, minimum, average, sample_count,
2 time_rolled_up )
3 values( 'a', 10, 3, 5, 2, sysdate-1 );

1 row created.

scott@ORA92I> commit;

Commit complete.

scott@ORA92I>
scott@ORA92I> select *
2 from daily_rollup_t1;

INFO_TYPE MAXIMUM MINIMUM AVERAGE SAMPLE_COUNT TIME_ROLLED_UP
---------- ---------- ---------- ---------- ------------ -------------------
a 10 3 5 2 2004-08-30 08:33:04


The first table, curr_t1, contains the current values corresponding
to the column info_type (one row per unique value of info_type).
The second table, daily_rollup_t1, contains the rolled-up values for each
info_type.
Now I want to get the average, max, min of all daily
rollup values including the value in curr_t1. The degenerate
case would be when there are no records in daily_rollup_t1,
in which case I will get the value in curr_t1 as the average, max,
min, etc. There will be one row per "info_type" value, with the latest
date, in the table curr_t1.
In the above data, my output should be:

INFO_TYPE AVERAGE MAXIMUM MINIMUM LATEST_VALUE
----------- ------- ------- ------- ------------
a 3.67 10 1 1
b 7 7 7 7

scott@ORA92I> -- my attempt
scott@ORA92I>
scott@ORA92I> select info_type,
2 sum_value/sum_sample_count as average,
3 maximum,
4 minimum,
5 last_known_value
6 from
7 (
8 select info_type,
9 sum( value ) sum_value,
10 sum( sample_count ) sum_sample_count,
11 max( maximum ) maximum,
12 min( minimum ) minimum,
13 max( last_known_value ) as last_known_value
14 from
15 (
16 select info_type, value as maximum,
17 value as minimum,
18 value as last_known_value,
19 1 as sample_count,
20 value as value
21 from curr_t1
22 union
23 select info_type, max( maximum), min(minimum),
24 max(0) as last_known_value,
25 sum(sample_count),
26 sum(average*sample_count) as value
27 from daily_rollup_t1
28 group by info_type
29 )
30 group by info_type
31 );

INFO_TYPE AVERAGE MAXIMUM MINIMUM LAST_KNOWN_VALUE
---------- ---------- ---------- ---------- ----------------
a 3.66666667 10 1 1
b 7 7 7 7


Tom Kyte
August 31, 2004 - 1:31 pm UTC

looks ok to me then, use union all instead of union if you can (understand the fundamental differences between union and union all first!)

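The short version of that difference: union all simply appends the two row sets, while union also sorts and removes duplicate rows across them -- extra work, and it can silently collapse rows you meant to keep. A tiny illustration:

select 1 x from dual union all select 1 x from dual;   -- 2 rows
select 1 x from dual union     select 1 x from dual;   -- 1 row
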
thanx!

A reader, August 31, 2004 - 1:59 pm UTC


cumulative resultset

Thiru, September 09, 2004 - 10:45 pm UTC

Hi Tom,

How do we get a cumulative resultset for a column. To be more clear, if I have:

create table t (dt date,id varchar2(6),amt number);
insert into t values('09-SEP-04','ABC',1000);
insert into t values('10-SEP-04','ABC',1000);
insert into t values('13-SEP-04','ABC',1000);

How would we get a result wherein the amt column is summed up for each day and the days in the month do not have gaps (assuming we have another table, if required, for the days of the month)?

So the result would be:

dt id cumulative_amt

09-SEP-04 ABC 1000
10-SEP-04 ABC 2000
11-SEP-04 ABC 2000
12-SEP-04 ABC 2000
13-SEP-04 ABC 3000

Do we have to use analytic function for this query?

Thanks a million.


Tom Kyte
September 10, 2004 - 8:04 am UTC

you need analytics for the running total.

getting the "gap free" stuff will depend on "more information"

a) do we need to partition by ID -- that is, if we insert 'DEF' for sept 12th, are there
- 5 records, if so, then I'm confused
- 10 records

b) if we partition by id do you have version 10g? that has partitioned outer joins

c) if 9i and before, we have to cartesian product and outer join to that.


see:

</code> https://www.oracle.com/technetwork/issue-archive/2014/14-jan/o14asktom-2079690.html <code>

"partitioned outer joins". it demonstrates what you would have to do in 9i and 10g.


select dt, id, sum(amt) over (partition by id order by dt) amt
from t;

is the inline view you would outer join to.

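In 9i, a sketch of the full "cartesian product, outer join, running total" approach for the table t above, assuming a fixed window of 09-SEP-2004 through 13-SEP-2004:

select x.dt, x.id,
       nvl( sum(t.amt) over (partition by x.id order by x.dt), 0 ) cumulative_amt
  from ( select d.dt, i.id
           from ( select to_date('09-sep-2004','dd-mon-yyyy')+rownum-1 dt
                    from all_objects
                   where rownum <= 5 ) d,
                ( select distinct id from t ) i ) x,
       t
 where x.dt = t.dt(+)
   and x.id = t.id(+)
 order by x.id, x.dt;

The running sum ignores the NULL amt rows produced by the outer join, so the previous day's cumulative amount is carried across the gap days.
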
Thiru, September 10, 2004 - 10:12 am UTC

If the query is for 12-SEP-04, then it should return 4 rows with the same cumulative amount as the 10th, since there are no records for the 11th and 12th. If I have a table with all dates for the month/year, then a join with this second cal_days table produces the required result. I am running on 9.2 and your query worked fine.
SQL> select dt, id, sum(amt) over (partition by id order by dt) amt
  2    from t;

DT        ID            AMT
--------- ------ ----------
09-SEP-04 ABC          1000
10-SEP-04 ABC          2000
13-SEP-04 ABC          3000 

Thiru, September 12, 2004 - 9:58 pm UTC

If I have a table with calendar days, can I join to this table to fill the gaps in days and carry the previous day's cumulative amount over?

Thanks

Tom Kyte
September 13, 2004 - 7:33 am UTC

maybe, given the level of detail specified, I suppose anything is possible -- or not.

but in any case -- what does this have even remotely to do with "estimate the size of an IOT"???? (meaning, don't try to clarify the question here -- it isn't even remotely related to the original question)

A reader, September 13, 2004 - 11:42 am UTC

I am sorry but my question did not mention a word about "estimate the size of an IOT".

Tom Kyte
September 13, 2004 - 2:00 pm UTC

sorry that was my mistake. was looking at the wrong subject when I got that one.

SQL Query, Oracle 9i

Jaravam, November 09, 2004 - 2:29 pm UTC

Hi tom

I am using Oracle 9i. I have a table with columns Dept and Empnum. This table will have multiple Empnums for a Dept.

For Example:

Dept Empnum

1 A1
1 A2
1 A3
2 B1
2 B2
3 C1

I need a single query which gives me the output as


Dept Empnum(s)
1 A1,A2,A3
2 B1,B2
3 C1

Can you please help me in this.

Thanks
Jaravam

Tom Kyte
November 09, 2004 - 3:26 pm UTC

search this site for

stragg

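For what it's worth, in 11g Release 2 and later the built-in listagg does this directly; a sketch against the Dept/Empnum data described above (the table name dept_emp is assumed):

select dept,
       listagg(empnum, ',') within group (order by empnum) empnums
  from dept_emp
 group by dept;
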


count the number of rows in the table.

Sean, November 10, 2004 - 9:49 am UTC

Hi Tom,

There are three ways to count the number of rows in a table. It seems that there are no performance differences even if it is a million-row table with 400 columns. Is that right?

select count(*) from t1;
select count(1) from t1;
select count(primary_key_column) from t1;


Oracle 9204. Solaris 5.9

Thanks so much for your help.



Tom Kyte
November 10, 2004 - 12:10 pm UTC

there is one correct way

select count(*) from t1;


select count(1) is optimized to be that already


select count(pk) would either be the same or less efficient, not worth testing -- count(*) is right.

Outer Join

A reader, November 15, 2004 - 12:43 pm UTC

Can you please explain diff between following queries?

SELECT c.* FROM scott.cust c, scott.consult co
WHERE c.customer_no = co.customer_no(+) and co.customer_no is null


SELECT c.* FROM scott.cust c, scott.consult co
WHERE c.customer_no = co.customer_no(+)

Tom Kyte
November 15, 2004 - 8:56 pm UTC

first one is known as an "anti join", probably better written as :

select * from cust
where customer_no not in ( select customer_no from consult );

it shows all rows in cust not in consult.

the other shows all rows in cust and if they have a match in consult, show that too.

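One caveat on the NOT IN rewrite: the two forms agree only when consult.customer_no contains no NULLs. NOT IN returns no rows at all if the subquery produces even a single NULL, whereas the outer-join/IS NULL form still returns the unmatched rows. A tiny sketch of that behaviour:

select * from dual where 1 not in ( select null from dual );           -- 0 rows
select * from dual where 1 not in ( select 1 from dual where 1=0 );    -- 1 row
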
Anti Join

A reader, November 16, 2004 - 8:25 am UTC

I executed following query

select * from cust where customer_no not in
(select customer_no from consult)

No rows returned

SELECT c.* FROM scott.cust c, scott.consult co
WHERE c.customer_no = co.customer_no(+) and co.customer_no is null

100000+ rows returned

Shouldn't they return the same number of rows?


OK

Esther, November 16, 2004 - 12:27 pm UTC

Hello Tom,
I need a query which calculates the avg(sal) for every three rows. Is there any straight SQL approach available or do we have to go for analytic functions? Please do reply.
Bye

Tom Kyte
November 16, 2004 - 1:05 pm UTC

scott@ORA9IR2> select *
2 from ( select trunc(rownum/3-0.1) r, sal
3 from (select sal from emp order by empno)
4 )
5 /

R SAL
---------- ----------
0 800
0 1600
0 1250
1 2975
1 1250
1 2850
2 2450
2 3000
2 5000
3 1500
3 1100
3 950
4 3000
4 1300

14 rows selected.

scott@ORA9IR2>
scott@ORA9IR2> select r, count(*), avg(sal)
2 from ( select trunc(rownum/3-0.1) r, sal
3 from (select sal from emp order by empno)
4 )
5 group by r
6 /

R COUNT(*) AVG(SAL)
---------- ---------- ----------
0 3 1216.66667
1 3 2358.33333
2 3 3483.33333
3 3 1183.33333
4 2 2150

scott@ORA9IR2>


Smart

A reader, November 16, 2004 - 2:27 pm UTC


Thanks

Esther, November 17, 2004 - 7:25 am UTC

Hello Tom,
Thanks for your reply. I used a query like
sql>select avg(sal) from emp
group by mod(rownum,3)
having mod(rownum,3) = 0;
But this results in only one row. Can this query be modified
to achieve the result as you have done?
Please reply.
Bye!

Tom Kyte
November 17, 2004 - 10:53 am UTC

but that is not even remotely similar to what I have. redo it the way I did it please.

mod(rownum,3) returns only 0, 1, 2

it will not group 3 records at a time, it'll create 3 groups, period.

A reader, December 01, 2004 - 5:52 pm UTC

So far so good. But if I use a ref cursor in Java or Oracle Forms, how would I display the result set? Is it something that the GUI tools have to deal with?

Thanks

Tom Kyte
December 01, 2004 - 7:59 pm UTC

to java -- it is not any different from a result set. there is nothing to compare it to in java -- it is just a result set and that is what java uses.

in developer (forms), you can build blocks on stored procedures that return result sets as well -- using ref cursors.

Another query

Scott, January 17, 2005 - 2:53 am UTC

Hello Tom,
I have some questions for you. I am planning to delete some records from one of my tables.
The table structure is:

DESC T

Name Null? Type
---------------- ------- --------
CHANGE_ID NOT NULL VARCHAR2(10)
CLAIM_CLAIM_ID NOT NULL VARCHAR2(10)
TABLE_NAME NOT NULL VARCHAR2(2)
FIELD_NAME NOT NULL VARCHAR2(2)
ACTION NOT NULL VARCHAR2(2)
USER_USER_ID NOT NULL VARCHAR2(10)
CHANGED_DATE NOT NULL DATE
OLD_VALUE VARCHAR2(40)
NEW_VALUE VARCHAR2(40)

I am trying to delete records on the basis of the CHANGED_DATE column. Due to policy we have to keep records for 7 years, so to delete data older than 7 years I wrote the following query, but I think I am doing something wrong here.
The query which I used for counting the rows to be deleted was:

select count(*) from t where changed_date<(select (sysdate-(365*7)) from dual);


COUNT(*)
----------
16330

Now when I was cross-checking the result I came to know that I have done something wrong. I selected min(changed_date) and the result was as under.

select min(to_date(changed_date,'MM-DD-YYYY')) from t;

MIN(TO_DAT
----------
06-12-0001

So I guess that my table has data which starts from 12 July 2001, so I can't delete any data as per our company's data retention policy of 7 years.

So please help me clear my doubt: in which step did I make a mistake? I mean, did I do anything wrong while selecting data older than 7 years from the table? If so, please give me the correct SQL to fetch data older than 7 years.

Thanks,
Scott




Tom Kyte
January 17, 2005 - 8:28 am UTC

beware implicit conversions.

changed_date is a date.

your default date format probably has YY or RR in it.

your to_date( changed_date, .... ) implicitly converts changed_date into a string, for to_date takes a STRING.

Consider:

ops$tkyte@ORA9IR2> create table t ( dt date );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( sysdate );
 
1 row created.
 
ops$tkyte@ORA9IR2> select to_char(dt,'dd-mon-yyyy'),
  2         to_date(dt,'dd-mon-yyyy'),
  3         to_CHAR( to_date(dt,'dd-mon-yyyy') , 'dd-mon-yyyy' )
  4    from t;
 
TO_CHAR(DT, TO_DATE(D TO_CHAR(TO_
----------- --------- -----------
17-jan-2005 17-JAN-05 17-jan-0005



the DT is really 2005, to_date( dt, .... ) converted it using my default date mask (dd-mon-yy) and then we applied YYYY to it -- that is the year "5"


So the DT data is OK, your implicit conversions -- NOT..


Try this:

select to_CHAR( min(changed_date), 'MM-DD-YYYY' ) from t;


and also -- there are not always 365 days in a year; there are, however, always 12 months.  You might consider:

select count(*) 
  from t 
 where changed_date < ADD_MONTHS( sysdate, -7*12 );


instead. 

another query

A reader, January 17, 2005 - 4:27 am UTC

Hello Tom,
It's me again. Sorry, I thought I didn't provide enough information. I am on 8.1.7, and in my above question:

select sysdate from dual;

sysdate
--------
17-JAN-2005


Thanks,
Scott

Attn :

Parag Jayant Patankar, January 17, 2005 - 9:42 am UTC

Hi Reader,

Regarding your question, Tom is suggesting this error is because of your date format mask (not the data). Ideally you should use the "RRRR" format instead of the "YY" format. It would be better if you posted the same test case here, including the table creation script, the insert statements, and your deletion script.

So it will be easy for everybody.


regards & thanks
pjp

Tom Kyte
January 17, 2005 - 10:06 am UTC

it is better to avoid implicit conversions altogether! period.

Thanks Tom

Scott, January 17, 2005 - 3:52 pm UTC

Thanks for your help Tom. It was very helpful.

Thanks,
Scott

tricky query can you help

Esg, January 18, 2005 - 6:46 pm UTC

our reports insert a row into the below table before running and after running.

CREATE TABLE testlog (
REPORTNAME CHAR (50) NOT NULL,
STARTEND CHAR (1) NOT NULL,
PARMSTRING CHAR (255) NOT NULL,
TIMESTAMP DATE NOT NULL,
ID CHAR (50) NOT NULL,
SOURCE CHAR (10) )
/

Before the report runs

insert into testlog values( 'Repname1', 'S' , 'name=ken, phone=234434', sysdate, user, 'US')

After the report runs

insert into testlog values( 'Repname1', 'E' , 'name=ken, phone=234434', sysdate, user, 'US')

We have 100's of reports run by 100's of users.

Now we want to find out those reports which are MOST run simultaneously. Our timestamp column

even has the time portion stored in the database.

Now the challenge is, in the existing data, there is no way we can tie a before-report-run inserted row to an after-report-run inserted row.

A combination of id and timestamp helps in identifying a before-report insert row uniquely, but there is no way we can tie it to the after-report-run row belonging to it.

ID above is not a unique identifier nor the primary key, but just the user name.

Now what is the best possible way to find out those reports which are MOST run simultaneously, i.e. those reports which are most often found running simultaneously because multiple users are running them at the same time?



Tom Kyte
January 19, 2005 - 10:14 am UTC

<quote>
Now the challenge is , in the existing data, there is no way we can tie a before
report run inserted row to a after report inserted row.
</quote>

sorry -- not too much anyone could do then.....

should have been:

insert
run report
UPDATE....

that is the way I do it myself.


if you cannot tie the start/stop records together, you cannot get to a "start time", "run time" single record -- hence you cannot get the answer to your question.

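For reference, a sketch of that insert-then-update pattern (the table, sequence and bind variable names here are assumed, not from the original post):

create sequence report_log_seq;

create table report_log
( run_id       number primary key,
  reportname   varchar2(50),
  parmstring   varchar2(255),
  userid       varchar2(50),
  start_time   date,
  end_time     date );

-- before the report runs:
insert into report_log( run_id, reportname, parmstring, userid, start_time )
values ( report_log_seq.nextval, 'Repname1', 'name=ken, phone=234434', user, sysdate );

-- after the report finishes, using the run_id captured at insert time:
update report_log set end_time = sysdate where run_id = :run_id;

With a single row per run carrying both start_time and end_time, counting overlapping runs per report becomes a straightforward query.
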
query

Menon, January 20, 2005 - 4:23 pm UTC

--first the schema

scott@ORA10GR1> drop table child_table;

Table dropped.

scott@ORA10GR1> drop table parent_table;

Table dropped.

scott@ORA10GR1> drop table data_table;

Table dropped.

scott@ORA10GR1> create table parent_table ( parent_name varchar2(10) primary key );

Table created.

scott@ORA10GR1> create table child_table ( parent_name references parent_table,
2 child_name varchar2(10) );

Table created.

scott@ORA10GR1> create table data_table( parent_or_child_name varchar(10),
2 data1 number,
3 data2 number,
4 data3 number
5 );

Table created.

scott@ORA10GR1>
scott@ORA10GR1> insert into parent_table( parent_name) values( 'p1' );

1 row created.

scott@ORA10GR1> insert into parent_table( parent_name) values( 'p2' );

1 row created.

scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p1', 'c11' );

1 row created.

scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p1', 'c12' );

1 row created.

scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p2', 'c21' );

1 row created.

scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p2', 'c22' );

1 row created.

scott@ORA10GR1>
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'p1', 1, 2, 3);

1 row created.

scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c11', 1, 2, 3);

1 row created.

scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c12', 1, 4, 3);

1 row created.

scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'p2', 1, 2, 3);

1 row created.

scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c21', 1, 2, 3);

1 row created.

scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c22', 2, 2, 3);

1 row created.

scott@ORA10GR1> commit;

Commit complete.

scott@ORA10GR1> select * from parent_table;

PARENT_NAM
----------
p1
p2

scott@ORA10GR1> select * from child_table;

PARENT_NAM CHILD_NAME
---------- ----------
p1 c11
p1 c12
p2 c21
p2 c22

scott@ORA10GR1> select * from data_table;

PARENT_OR_ DATA1 DATA2 DATA3
---------- ---------- ---------- ----------
p1 1 2 3
c11 1 2 3
c12 1 4 3
p2 1 2 3
c21 1 2 3
c22 2 2 3

6 rows selected.

parent_table contains parent records.
child_table contains child records.
data_table contains data values for either parent or
children. Each row belongs either to the parent or
to one of the children...

The data should be such that for each parent and
child, all data columns (data1, data2 etc.) should
contain the same value. The requirement is to create a report of cases that violate this - e.g. in the above case the
report looks like:

Following data columns don't match for parent and child
values:

parent name: p1, child name: c12 mismatch columns: data2
child name: c21, child name: c22 mismatch columns: data1

Formatting the report as above is not important as
long as we get the answer...

I wrote the following to find out those
records in which the parent doesn't match the child
data value... before realizing that each child
also has to be matched against the other child records
for the same parent...

scott@ORA10GR1> select parent_name, child_name,
2 max( decode( parent_data1, child_data1, null, 'data1') ) data1_compare,
3 max( decode( parent_data2, child_data2, null, 'data2')) data2_compare,
4 max( decode( parent_data3, child_data3, null, 'data3')) data3_compare
5 from
6 (
7 select p.parent_name, p.parent_data1, p.parent_data2, p.parent_data3,
8 c.child_name, c.child_data1, c.child_data2, c.child_data3
9 from
10 (
11 select d.parent_or_child_name parent_name,
12 d.data1 parent_data1,
13 d.data2 parent_data2,
14 d.data3 parent_data3
15 from data_table d, parent_table p
16 where d.parent_or_child_name = p.parent_name
17 ) p,
18 (
19 select d.parent_or_child_name child_name,
20 c.parent_name,
21 d.data1 child_data1,
22 d.data2 child_data2,
23 d.data3 child_data3
24 from data_table d, child_table c
25 where d.parent_or_child_name = c.child_name
26 ) c
27 where c.parent_name(+)= p.parent_name
28 )
29 group by parent_name, child_name;

PARENT_NAM CHILD_NAME DATA1 DATA2 DATA3
---------- ---------- ----- ----- -----
p1 c11
p1 c12 data2
p2 c21
p2 c22 data1


Procedural solution should be simple... Is there
a SQL solution you can think of?

Thanx.


Tom Kyte
January 20, 2005 - 7:37 pm UTC

how does this relate to the original thread?

does not - never mind

Menon, January 20, 2005 - 8:36 pm UTC

thought the original thread was about a query...

OK- I tried it out myself

Menon, January 24, 2005 - 3:07 pm UTC

Used lag() to solve it.
Posting it just in case someone else is interested...

Added parent_name column to the data_table above..
So the three tables now have following data
---
scott@ORA10GR1.US.ORACLE.COM> select * from parent_table;

PARENT_NAM
----------
p1
p2

scott@ORA10GR1.US.ORACLE.COM> select * from child_table;

PARENT_NAM CHILD_NAME
---------- ----------
p1 c11
p1 c12
p2 c21
p2 c22

scott@ORA10GR1.US.ORACLE.COM> select * from data_table;

PARENT_OR_ PARENT_NAM DATA1 DATA2 DATA3
---------- ---------- ---------- ---------- ----------
p1 p1 1 3 3
c11 p1 1 2 3
c12 p1 1 4 3
p2 p2 1 2 3
c21 p2 1 2 3
c22 p2 2 2 3

6 rows selected.

----

Note the new column parent_name in the data_table.

And the select that seems to work is:
scott@ORA10GR1.US.ORACLE.COM> select d.parent_name, d.data1, d.data2, d.data3,
2 case
3 when parent_name = lag_parent_name then
4 decode( data1, lag_data1, null, 'data1' )
5 end data1_diff,
6 case
7 when parent_name = lag_parent_name then
8 decode( data2, lag_data2, null, 'data2' )
9 end data2_diff,
10 case
11 when parent_name = lag_parent_name then
12 decode( data3, lag_data3, null, 'data3' )
13 end data3_diff
14 from
15 (
16 select d.*,
17 lag( parent_name ) over( order by parent_name) lag_parent_name,
18 lag( data1 ) over( order by parent_name) lag_data1,
19 lag( data2 ) over( order by parent_name) lag_data2,
20 lag( data3 ) over( order by parent_name) lag_data3
21 from
22 (
23 select *
24 from data_table
25 ) d
26 ) d;

PARENT_NAM DATA1 DATA2 DATA3 DATA1_DIFF DATA2_DIFF DATA3_DIFF
---------- ---------- ---------- ---------- ---------- ---------- ----------
p1 1 3 3
p1 1 2 3 data2
p1 1 4 3 data2
p2 1 2 3
p2 1 2 3
p2 2 2 3 data1

----------

Once again - sorry to have posted it on a wrong
thread earlier. Sometimes it can get confusing (e.g. many
of the follow ups in the above thread did not
*seem* related to the original thread when I glanced
through them earlier.)
Of course, in many cases you answer stuff just
because it is easy to do that instead of saying
anything else:) And of course I appreciate
your site regardless anyways...



how to write this in single query

A reader, January 25, 2005 - 5:12 pm UTC

Hi

I have a data like this

C1 D1
---- ----
1 1
1 23
2 1
3 23

I need to write a query which returns C1 with D1 values 1 and 23; D1 can only return 23 if it also has the value 1, so the query run against the above data should return

C1 D1
---- ----
1 1
1 23
3 23

It's like the opposite of

select * from X b
where D1 = 1
and not exists (select null from X a where a.c1 = b.c1 and a.d1 = 23)

How can you write that in single query?

Thank you

Tom Kyte
January 25, 2005 - 7:18 pm UTC

that doesn't make sense, I've no idea why 2, 1 didn't show up?

or if i can figure out why 2,1 didn't show up, then I cannot figure out how 3,23 did.

got it

A reader, January 25, 2005 - 5:19 pm UTC

Hi I got it

select * from X b
where D1 in (1, 23)
and exists (select null from X a where a.c1 = b.c1 and a.d1 = 23)

cheers

using inline views

Thiru, January 28, 2005 - 4:56 pm UTC

Tom,

Trying to get to a query for the following using inline views. Shall appreciate your help.

drop table test_a;
drop table test_b;
drop table test_c;
create table test_a (snum number, status varchar2(20));
create table test_b(snum number, value number,details varchar2(20));
create table test_c( snum number);
insert into test_b values(100,1000,'testing100');
insert into test_b values(200,2000,'testing200');
insert into test_b values(300,3000,'testing300');
insert into test_b values(400,4000,'testing400');
insert into test_b values(500,5000,'testing500');

insert into test_c values(100);
insert into test_c values(200);
insert into test_c values(300);
insert into test_a values(100,'err100');
insert into test_a values(999,'err999');


select * from test_a;

SNUM STATUS
---------- --------------------
100 err100
999 err999


select * from test_b;

SNUM VALUE DETAILS
---------- ---------- --------------------
100 1000 testing100
200 2000 testing200
300 3000 testing300
400 4000 testing400
500 5000 testing500

select * from test_c;

SNUM
----------
100
200
300

how to write a query for the following required result, with the
condition: the snum column equates in all the three tables.
For the above data:

cnt(from test_b)  cnt(from test_a)  sum(test_b.value)
       3                 1                6000


Tom Kyte
January 28, 2005 - 7:24 pm UTC

not sure what you mean. "snum column equates in all the three tables"

given that -- I cannot rectify "count in test_b = 3".


It *appears* that you want to take the values in test_c, count the rows in test_a that have that snum, count the rows in test_b that have that snum, and sum up the value column in test_b for those rows.


select *
from (select count(*) from test_a where snum in (select * from test_c)),
(select count(*),sum(value) from test_b where snum in (select * from test_c))



sorry missed an imp point

A reader, January 28, 2005 - 5:17 pm UTC

Here is what is required:
drop table test_a;
drop table test_b;
drop table test_c;
create table test_a (id number,snum number, status varchar2(20));
create table test_b(id number,snum number, value number,details varchar2(20));
create table test_c( id number,snum number);
insert into test_b values(1,100,1000,'testing100');
insert into test_b values(1,200,2000,'testing200');
insert into test_b values(2,300,3000,'testing300');
insert into test_b values(2,400,4000,'testing400');
insert into test_b values(3,500,5000,'testing500');

insert into test_c values(1,100);
insert into test_c values(2,200);
insert into test_c values(2,300);
insert into test_a values(1,100,'err100');
insert into test_a values(4,999,'err999');

SQL> select * from test_a;

        ID       SNUM STATUS
---------- ---------- --------------------
         1        100 err100
         4        999 err999

SQL> select * from test_b;

        ID       SNUM      VALUE DETAILS
---------- ---------- ---------- --------------------
         1        100       1000 testing100
         1        200       2000 testing200
         2        300       3000 testing300
         2        400       4000 testing400
         3        500       5000 testing500

SQL> select * from test_c;

        ID       SNUM
---------- ----------
         1        100
         2        200
         2        300

how to write a query for the following required result, with the
condition: the snum and id columns equate in all the three tables.
For the above data (with id and snum found in test_c):

id  cnt(from test_b)  cnt(from test_a)  sum(test_b.value)
 1         1                 1                1000
 2         2                 0                7000

Tom Kyte
January 28, 2005 - 7:26 pm UTC

well, given my small example above, can you extend it to do this..... you can join those two inline views together (i did, a cartesian join...)

add the relevant columns to the inline views and join them.

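One way that extension might look -- a sketch only, matching on both id and snum as stated in the condition and driving from the ids present in test_c:

select c.id,
       nvl( b.cnt, 0 )   cnt_from_test_b,
       nvl( a.cnt, 0 )   cnt_from_test_a,
       nvl( b.total, 0 ) sum_test_b_value
  from ( select distinct id from test_c ) c,
       ( select id, count(*) cnt
           from test_a
          where (id, snum) in ( select id, snum from test_c )
          group by id ) a,
       ( select id, count(*) cnt, sum(value) total
           from test_b
          where (id, snum) in ( select id, snum from test_c )
          group by id ) b
 where c.id = a.id(+)
   and c.id = b.id(+);
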
OK

James, January 29, 2005 - 12:52 am UTC

Hi Tom,
Does any direct SQL statement exist to retrieve data from
a table with the sys.anydata type?
Also, please provide useful inputs on using sys.anydataset.

SQL> create table t(x sys.anydata)
  2  /

Table created.

SQL> insert into t values(sys.anydata.convertNumber(100))
  2  /

1 row created.

SQL> insert into t values(sys.anydata.convertVarchar2('Hello World'))
  2  /

1 row created.

SQL> insert into t values(sys.anydata.ConvertDate(sysdate))
  2  /

1 row created.

SQL> commit;

Commit complete.

SQL> select * from t
  2  /

X()                                                                             
--------------------------------------------------------------------------------
ANYDATA()                                                                       
ANYDATA()                                                                       
ANYDATA()                                                                       
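
A sketch of one possible approach -- branch on the type name and call the matching accessor member function. This assumes the ANYDATA accessNumber/accessVarchar2/accessDate member functions are callable from SQL in this release, and that getTypeName() returns SYS.NUMBER, SYS.VARCHAR2 and SYS.DATE for the built-in types:

select t.x.gettypename() type_name,
       case t.x.gettypename()
         when 'SYS.NUMBER'   then to_char( t.x.accessnumber() )
         when 'SYS.VARCHAR2' then t.x.accessvarchar2()
         when 'SYS.DATE'     then to_char( t.x.accessdate() )
       end val
  from t t     -- the table alias is required to invoke the member functions
/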


 

How do i make up records, using previous record details

Ali, January 29, 2005 - 3:59 am UTC

Database version: 9.2

CREATE TABLE t_history(seq_no number, item_code varchar2(10), doc_date date, doc_no number, qty_bal number, rate number);

INSERT INTO t_history values(101, 'A', to_date('24-DEC-04'), 90, 5, 10.25);
INSERT INTO t_history values(201, 'A', to_date('04-JAN-05'), 101, 10, 10.25);
INSERT INTO t_history values(202, 'A', to_date('14-JAN-05'), 102, 12, 11.5);
INSERT INTO t_history values(203, 'A', to_date('24-JAN-05'), 103, 13, 11.25);
INSERT INTO t_history values(204, 'A', to_date('24-JAN-05'), 104, 11, 11.25);

INSERT INTO t_history values(111, 'B', to_date('30-DEC-04'), 97, 12, 1.75);

INSERT INTO t_history values(221, 'C', to_date('25-JAN-05'), 111, 1, 127);
INSERT INTO t_history values(222, 'C', to_date('27-JAN-05'), 112, 2, 130);
INSERT INTO t_history values(223, 'C', to_date('28-JAN-05'), 113, 1, 130);


DESIRED OUTPUT (start date = 01-Nov-04, end date = current date)
========================================================================
ITEM DATE QTY RATE
-------------------------------------
A 24-DEC-04 5 10.25
A 25-DEC-04 5 10.25
.....................................
.....................................
A 04-JAN-05 10 10.25
A 05-JAN-05 10 10.25
.....................................
.....................................
A 24-JAN-05 11 11.25
.....................................
.....................................
A 29-JAN-05 11 11.25

B 30-DEC-04 12 1.75
B 31-DEC-04 12 1.75
.....................................
.....................................
B 29-JAN-05 12 1.75

C 25-JAN-05 1 127
.....................................
.....................................
C 29-JAN-05 1 130


Thanks for your time.

Tom Kyte
January 29, 2005 - 8:51 am UTC

You need a result set with all of the dates...

You need a result set with all of the item_codes....

We need to cartesian product them together.... (get all dt/item codes)

then we can outer join with that to pick up the observed qty/rates...


then we can carry down.  In 9iR2 this would look like this:


ops$tkyte@ORA9IR2> with dates
  2  as
  3  (
  4  select to_date('01-nov-2004','dd-mon-yyyy')+rownum-1 dt
  5    from all_objects
  6   where rownum <= (sysdate-to_date('01-nov-2004','dd-mon-yyyy'))+1
  7  ),
  8  items
  9  as
 10  (
 11  select distinct item_code
 12    from t_history
 13  ),
 14  items_dates
 15  as
 16  (select * from dates, items )
 17  select *
 18    from (
 19  select dt, item_code,
 20         to_number( substr( max( rn || qty_bal ) over 
               (partition by item_code order by dt),11)) qty_bal,
 21         to_number( substr( max( rn || rate    ) 
               over (partition by item_code order by dt),11)) rate
 22    from (
 23  select a.*, b.qty_bal, b.rate,
 24         case when b.qty_bal is not null
 25                  then to_char(row_number() 
                  over (partition by a.item_code order by a.dt),'fm0000000009')
 26                  end rn
 27    from items_dates a left join t_history b 
                 on (a.dt = b.doc_date and a.item_code = b.item_code)
 28         )
 29         )
 30   where qty_bal is not null or rate is not null
 31   order by item_code, dt
 32  /
 
DT        ITEM_CODE     QTY_BAL       RATE
--------- ---------- ---------- ----------
24-DEC-04 A                   5      10.25
25-DEC-04 A                   5      10.25
26-DEC-04 A                   5      10.25
27-DEC-04 A                   5      10.25
28-DEC-04 A                   5      10.25
29-DEC-04 A                   5      10.25
30-DEC-04 A                   5      10.25
31-DEC-04 A                   5      10.25
01-JAN-05 A                   5      10.25
.......


In 10g, this would be simplified to:

ops$tkyte@ORA10G> with dates
  2  as
  3  (
  4  select to_date('01-nov-2004','dd-mon-yyyy')+rownum-1 dt
  5    from all_objects
  6   where rownum <= (sysdate-to_date('01-nov-2004','dd-mon-yyyy'))+1
  7  )
  8  select *
  9    from (
 10  select b.dt, a.item_code,
 11         last_value( a.qty_bal ignore nulls) over (partition by item_code order by dt) qty,
 12         last_value( a.rate    ignore nulls) over (partition by item_code order by dt) rate
 13    from t_history a partition by (item_code) right join dates b on (b.dt = a.doc_date)
 14         )
 15   where qty is not null or rate is not null
 16   order by item_code, dt
 17  /
 
DT        ITEM_CODE         QTY       RATE
--------- ---------- ---------- ----------
24-DEC-04 A                   5      10.25
25-DEC-04 A                   5      10.25
26-DEC-04 A                   5      10.25
27-DEC-04 A                   5      10.25
28-DEC-04 A                   5      10.25
29-DEC-04 A                   5      10.25
30-DEC-04 A                   5      10.25
31-DEC-04 A                   5      10.25
01-JAN-05 A                   5      10.25
02-JAN-05 A                   5      10.25
03-JAN-05 A                   5      10.25
04-JAN-05 A                  10      10.25
05-JAN-05 A                  10      10.25
06-JAN-05 A                  10      10.25
07-JAN-05 A                  10      10.25
08-JAN-05 A                  10      10.25
....

due to the addition of

a) partitioned outer joins
b) ignore nulls in analytics.

 

Thanks Tom, but ...

Ali, January 30, 2005 - 8:03 am UTC

If more than 1 record exists for a day then instead of using:
....
from items_dates a left join t_history b
on (a.dt = b.doc_date and a.item_code = b.item_code)
....

I have to use:
......
from items_dates a left join (
SELECT * FROM (
SELECT seq_no, item_code, doc_date, qty_bal, rate,
row_number() over (partition by item_code, doc_date order by seq_no desc) rr
FROM t_history
) WHERE rr = 1
)
b
on (a.dt = b.doc_date and a.item_code = b.item_code)
......

right? Or is there a better way?

Ali


Tom Kyte
January 30, 2005 - 9:39 am UTC

no, why? why wouldn't you report on all records? is there more information missing from the question?

if you want just the "highest sequence by day", what you have there is just fine though (sort of a missing requirement in the original question)

query for the inline view issue above

Thiru, January 31, 2005 - 10:36 am UTC

Tom,

With the help of the inline view example, I wrote this query.
select b.id,b.cnt,b.sum,a.cnt from
(select id,count(*) cnt, sum(value) sum
from test_b where id||snum in (select id||snum from test_c) group by id)b
left outer join
(select id,count(*) cnt from test_a where id||snum in (select id||snum from test_c) group by id)a on( a.id=b.id );

        ID        CNT        SUM        CNT
---------- ---------- ---------- ----------
         1          1       1000          1
         2          2       7000

The result looks good. Is id||snum an appropriate thing to use in the where clause, or is it better to compare the two columns separately? Is there any other way of writing this query? Thanks for the time.



Tom Kyte
January 31, 2005 - 11:33 am UTC

where (id, snum) in ( select id, snum from ....


think about id=5, snum = 55 and id = 55, snum = 5 for example.....

just use a column list.
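
applied to the query above, that would be something like this (a sketch against the test_a/test_b/test_c tables from the earlier followup):

select b.id, b.cnt, b.sum_value, a.cnt
  from (select id, count(*) cnt, sum(value) sum_value
          from test_b
         where (id, snum) in (select id, snum from test_c)   -- column list, not id||snum
         group by id) b
  left outer join
       (select id, count(*) cnt
          from test_a
         where (id, snum) in (select id, snum from test_c)
         group by id) a
    on (a.id = b.id);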

Thiru, January 31, 2005 - 4:05 pm UTC

Tom,

The query below has a defect (actually related to the queries I thought had solved the issue). The second inline view actually returns only a count of 1 for the condition, but in the cartesian product it shows up in all the rows. How do I avoid this?

select a.id,a.cnt,a.sum,b.cnt from
(select id, count(*) cnt ,sum(value) sum from test_b where(id,snum) in
(select id,snum from test_c) group by id) a,
(select count(*) cnt from test_a where(id,snum) in
(select id,snum from test_c)) b;

        ID        CNT        SUM        CNT
---------- ---------- ---------- ----------
         1          1       1000          1
         2          2       7000          1

Thanks

Tom Kyte
January 31, 2005 - 4:15 pm UTC

sorry, i don't know what the "question" is really.

You have this query:

(select id, count(*) cnt ,sum(value) sum
from test_b where(id,snum) in (select id,snum from test_c)
group by id) a


counts and sums by ID

You have this one row query:

(select count(*) cnt
from test_a
where(id,snum) in (select id,snum from test_c)) b


it is just a count, it will be assigned to every row.


I don't know what else you would "do" with it other than give it to every row, it doesn't "belong" to any single row (no keys)

Thiru, January 31, 2005 - 4:55 pm UTC

So with the given conditions, the following query was written to get the result. Will this turn out to be very complicated (provided it is the only way to get the result) if the number of rows in test_c is huge (in the millions)?


select c.id,sum(c.cnt1),sum(c.sum),sum(c.cnt) from
(
select a.id id,a.cnt cnt1 ,a.sum sum ,b.cnt cnt from
(select id, count(*) cnt ,sum(value) sum from test_b where(id,snum) in
(select id,snum from test_c) group by id) a
left outer join
(select id,count(*) cnt from test_a where(id,snum) in
(select id,snum from test_c) group by id) b
on b.id=a.id) c
group by c.id ;

        ID SUM(C.CNT1) SUM(C.SUM) SUM(C.CNT)
---------- ----------- ---------- ----------
         1           1       1000          1
         2           2       7000

Tom Kyte
January 31, 2005 - 5:55 pm UTC


millions are ok, it'll do the right speedy thing. big bulk hash joins.

Cursor

Anil, February 02, 2005 - 11:13 am UTC

Hi Tom

Regarding the cursor query you mentioned in the thread  

 select mgr, cursor( select ename from emp emp2 where emp2.mgr = emp.mgr )
  2* from ( select distinct mgr from emp ) emp


I thought this query would be good for fetching records from a master and its child records when executed from a middle tier, in terms of the total number of bytes transferred and the number of round trips compared to a normal join.


But the result was exactly the opposite.


@NGDEV1-SQL> select a.atp_id,AC_TYP_INHOUSE,AC_TYP_DESC,AC_TYP,
  2  cursor(select deck,deck_desc from MST_AC_TYP_DECK b where b.atp_id=a.atp_id) Deck_Info,
  3  cursor(select period_frm,period_to,brd_pnt,off_pnt from MST_AC_TYP_EXCP c where c.atp_id=a.atp_id) Exception
  4  from mst_ac_typ a  where a.atp_id=3
  5  /

    ATP_ID AC_TY AC_TYP_DESC                                        AC_TYP     DECK_INFO            EXCEPTION
---------- ----- -------------------------------------------------- ---------- -------------------- --------------------
         3 33H   AIRBUS                                             A330-300   CURSOR STATEMENT : 5 CURSOR STATEMENT : 6

CURSOR STATEMENT : 5

DECK       DECK_DESC
---------- --------------------------------------------------
LOWERDECK
LD         TEST
MD         MIDDLE DECK


CURSOR STATEMENT : 6

PERIOD_FR PERIOD_TO BRD_P OFF_P
--------- --------- ----- -----
26-JAN-05 27-JAN-05



Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=17)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_DECK' (TABLE) (Cost=1 Card=2 Bytes=36)
   2    1     INDEX (RANGE SCAN) OF 'ATD_ATP_FK_I' (INDEX) (Cost=1 Card=2)
   3    0   TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_EXCP' (TABLE) (Cost=1 Card=1 Bytes=26)
   4    3     INDEX (RANGE SCAN) OF 'ATE_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
   5    0   TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP' (TABLE) (Cost=1 Card=1 Bytes=17)
   6    5     INDEX (UNIQUE SCAN) OF 'ATP_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)




Statistics
----------------------------------------------------------
         12  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1826  bytes sent via SQL*Net to client
       1211  bytes received via SQL*Net from client
          6  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

NGCS_DEV@NGDEV1-SQL> select a.atp_id,AC_TYP_INHOUSE,AC_TYP_DESC,AC_TYP,
 deck,deck_desc,period_frm,period_to,brd_pnt,off_pnt from MST_AC_TYP_DECK b, MST_AC_TYP_EXCP c,mst_ac_typ a
where b.atp_id=a.atp_id
and  c.atp_id=a.atp_id
and  a.atp_id=3;  2    3    4    5

    ATP_ID AC_TY AC_TYP_DESC                                        AC_TYP     DECK
---------- ----- -------------------------------------------------- ---------- ----------
DECK_DESC                                          PERIOD_FR PERIOD_TO BRD_P OFF_P
-------------------------------------------------- --------- --------- ----- -----
         3 33H   AIRBUS                                             A330-300   LOWERDECK
                                                   26-JAN-05 27-JAN-05

         3 33H   AIRBUS                                             A330-300   LD
TEST                                               26-JAN-05 27-JAN-05

         3 33H   AIRBUS                                             A330-300   MD
MIDDLE DECK                                        26-JAN-05 27-JAN-05



Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=2 Bytes=122)
   1    0   MERGE JOIN (CARTESIAN) (Cost=3 Card=2 Bytes=122)
   2    1     NESTED LOOPS (Cost=2 Card=1 Bytes=43)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP' (TABLE) (Cost=1 Card=1 Bytes=17)
   4    3         INDEX (UNIQUE SCAN) OF 'ATP_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
   5    2       TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_EXCP' (TABLE) (Cost=1 Card=1 Bytes=26)
   6    5         INDEX (RANGE SCAN) OF 'ATE_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
   7    1     BUFFER (SORT) (Cost=2 Card=2 Bytes=36)
   8    7       TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_DECK' (TABLE) (Cost=1 Card=2 Bytes=36)
   9    8         INDEX (RANGE SCAN) OF 'ATD_ATP_FK_I' (INDEX) (Cost=0 Card=2)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
       1093  bytes sent via SQL*Net to client
        511  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed

NGCS_DEV@NGDEV1-SQL> /


In the above example my expectation was fewer bytes transferred, since the master data is not repeated. The additional recursive calls are understandable, since Oracle has to do something to produce the output in object form.

Is this not a good way to execute queries?

Would you please explain?

Rgds
Anil  
 

Tom Kyte
February 03, 2005 - 12:49 am UTC

your thought process was a little off. First, JOINS ARE NOT EVIL -- in an rdbms, in fact, JOINS ARE WHAT we do!


second, you'll increase the round trips -- each cursor will "go back", each row with a cursor will "go back"

each cursor will have "some overhead"

joining is "good", just join



efficient way to get counts

Thriu, February 02, 2005 - 3:57 pm UTC

Hi Tom,

I had submitted this earlier, but it looks like it did not go through (some problem with my browser), so I am submitting it again.

What is an efficient way to get the counts from a table (around 10 million rows) based on varied conditions?

The actual table has around 50 columns.
drop table temp_del;
create table temp_del (c1 varchar2(3),c2 varchar2(3),c3 varchar2(3),c4 varchar2(3),flag number);
insert into temp_del values('abc','bcd','cde','def',0);
insert into temp_del values('abc','bcd','cde','def',1);
insert into temp_del values('abc','bcd','cde','def',2);

insert into temp_del values('bcd','cde','def','efg',0);
insert into temp_del values('bcd','cde','def','efg',1);
insert into temp_del values('bcd','cde','def','efg',2);

insert into temp_del values('cde','def','efg','fgh',0);
insert into temp_del values('cde','def','efg','fgh',1);
insert into temp_del values('cde','def','efg','fgh',2);
commit;

select count(*) from temp_del where c1='abc' and c2='bcd' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and flag=1;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and flag=1;
select count(*) from temp_del where c1='bcd' and c2='cde' and c3='def' and flag=2;
and so many other combinations similar to this..

Is there a way the table can be accessed once and get the varied counts like above?

Count in one Query

Anil Shafeeque, February 02, 2005 - 11:04 pm UTC

Hi

Will this query do that job,



select sum(case when carr_code = 'PP' and vol_unit = 'LTRE' then 1 else 0 end) LTRCOUNT,
       sum(case when carr_code = 'PP' and vol_unit is null  then 1 else 0 end) NONLTRCOUNT,
       sum(case vol_unit when 'LTRE' then 1 else 0 end) C2,
       count(*) TOT
  from TEST_TABLE;

  LTRCOUNT NONLTRCOUNT         C2        TOT
---------- ----------- ---------- ----------
         3           4          3          7


Rgds
Anil

What about this query?

Totu, February 03, 2005 - 1:25 am UTC

I have table x and y.
Rows in x:
x y
----------- -----------
8 3
45 2
10 3
43 1

Row in y:
x y
----------- -----------
8 3
10 2
43 3


I want a query that returns records from table x where the x column values differ between the two tables but the y column values are the same:

select x.* from x, y
where x.x <> y.x and x.y = y.y

It returns:
X Y
---------- ----------
45 2
10 3
8 3
10 3

I expected the query to return the second row from table x.
Thanks in advance.

Tom Kyte
February 03, 2005 - 1:51 am UTC


if only i had a create table and some insert intos, i could play with this one....

Difference

Vikas Khanna, February 03, 2005 - 8:41 am UTC

Hi Tom,

Is there any difference between these two predicates:

WHERE A IS NOT NULL vs. NOT A IS NULL, from a performance point of view?

Regards,







Tom Kyte
February 03, 2005 - 2:17 pm UTC

easy enough to test (so i wonder why after seeing lots of tests here, people don't set them up?)

short answer, no.  

test below....

ops$tkyte@ORA9IR2> create table t as select decode(rownum,0,0,null) x, rownum y, decode(mod(rownum,2), 1, rownum,  null ) z from big_table.big_table;
 
Table created.
 
ops$tkyte@ORA9IR2> @trace
 
Session altered.
 
ops$tkyte@ORA9IR2> select count(*) from t where x is not null;
 
  COUNT(*)
----------
         0
 
ops$tkyte@ORA9IR2> select count(*) from t where NOT x is null;
 
  COUNT(*)
----------
         0
 
ops$tkyte@ORA9IR2> select count(*) from t where y is not null;
 
  COUNT(*)
----------
   9000000
 
ops$tkyte@ORA9IR2> select count(*) from t where NOT y is null;
 
  COUNT(*)
----------
   9000000
 
ops$tkyte@ORA9IR2> select count(*) from t where z is not null;
 
  COUNT(*)
----------
   4500000
 
ops$tkyte@ORA9IR2> select count(*) from t where NOT z is null;
 
  COUNT(*)
----------
   4500000
 
ops$tkyte@ORA9IR2>


select count(*) from t where x is not null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.14       1.14      18466      18477          0           1

select count(*) from t where NOT x is null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.13       1.16      18466      18477          0           1

select count(*) from t where y is not null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     13.99      13.69      18466      18477          0           1

select count(*) from t where NOT y is null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     13.89      13.60      18466      18477          0           1

select count(*) from t where z is not null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      7.65       7.51      18466      18477          0           1

select count(*) from t where NOT z is null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      7.63       7.68      18466      18477          0           1
 

in response to Totu

A reader, February 03, 2005 - 10:02 am UTC

Those results are what I would expect.

Change your query to:

select x.*,y.*
from x, y
where x.y = y.y
order by 1,3

and you will see how the joins are being done with x.y = y.y, then it will be quite obvious with the added condition of x.x <> y.x that you get the results you did.

I think you are overlooking the fact that you are joining multiple rows in X to multiple rows in Y.

Cursor vs Join

Anil, February 04, 2005 - 1:33 am UTC

Hi Tom


<<first JOINS ARE NOT EVIL, in a rdbms in
fact, JOINS ARE WHAT we do!
>>

I have always been convinced that joins are not evil. Here I was thinking of the amount of data transferred to the Java client; I thought this would be reduced since the master table data is not repeated. It also reduces the coding on the Java side, since they get the data as objects. But now I am convinced that they can do the extra coding to get the performance, so we are going for a plain join.

Rgds
Anil

Tom Kyte
February 04, 2005 - 11:34 am UTC

but unless you are pulling TONS of data with each cursor, the overhead of being a cursor and the roundtrips involved in being a cursor are going to overwhelm any reduction.

here the volumes of data are just "too small", we are down in small "bytes", nothing to optimize there.

Efficient way to get counts

Thiru, February 04, 2005 - 11:24 am UTC

Hi Tom,

In my earlier issue with the same title, I had given a case for varied count conditions from a single table, and a response was given by Anil. My questions are:
a) Is each row read once for each case statement, or will all case statements be evaluated for each row to grab the count? So if there are, say, 10 case statements, then hopefully Oracle evaluates all these cases for every row of the table in a single full table scan (or according to the conditions). Thanks for clarifying.

b) Is that query the only way to write this with one single access to the table data?

Thanks again.

Tom Kyte
February 04, 2005 - 12:03 pm UTC

select sum(case when carr_code = 'PP' and vol_unit = 'LTRE' then 1 else 0 end) LTRCOUNT,
       sum(case when carr_code = 'PP' and vol_unit is null  then 1 else 0 end) NONLTRCOUNT,
       sum(case vol_unit when 'LTRE' then 1 else 0 end) C2,
       count(*) TOT
  from TEST_TABLE

that'll read each row in test_table once and sum up the results of the case statements applied to that row.



you could use decode if you like.
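
for example, something like this (a sketch; it relies on decode treating null as matching null, which handles the "vol_unit is null" bucket):

select sum( decode( carr_code, 'PP', decode( vol_unit, 'LTRE', 1, 0 ), 0 ) ) ltrcount,
       sum( decode( carr_code, 'PP', decode( vol_unit, NULL,   1, 0 ), 0 ) ) nonltrcount,
       sum( decode( vol_unit, 'LTRE', 1, 0 ) ) c2,
       count(*) tot
  from test_table;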

Thanks so much..

A reader, February 04, 2005 - 12:17 pm UTC


demo_pkg.get_query

A reader, February 11, 2005 - 3:37 am UTC

I used the package and got the output I wanted, thanks for that. I tried to alter the column headings to just DD-Mon, but my version of the altered package compiles with errors. Also, can I extend the where clause to filter some data here?

15-JUL 16-JUL 17-JUL 18-JUL
Ajay 500 0 100 0
Bhavani 700 0 0 0
Chakri 0 0 200 0

Also, can we get data like, for example, all Tuesdays of the two months?
thanks

Tom Kyte
February 11, 2005 - 8:05 pm UTC

don't know what to say here -- other than "yes, you can filter"

you can make this code do whatever you want.

but I cannot help you with a compilation error, without actually seeing anything

demo_pkg.get_query

A reader, February 12, 2005 - 12:56 am UTC

u r rt tom,
i should have give the code for you to see the full picture
the output i expected is something like

nameid cfg cabtype 01-Jan 02-Jan 03-Jan
---------------------------------------
abc 2/2 h1 20 19.56 22
xbc 1/2 h2 21 17.46 12
ybc 4/2 h1 22 14.66 22
zbc 2/1 h1 16 17.76 23

1 create or replace package crstb_tch
2 as
3 type rc is ref cursor;
4 procedure get_query( p_cursor in out rc, p_start date, p_end date );
5 end;
6 /
7 create or replace package body crstb_tch
8 as
9 procedure get_query( p_cursor in out rc, p_start date, p_end date )
10 is
11 l_query long := 'select Nameid, cfg, cabtype ';
12 begin
13 for i in 1 .. trunc(p_end)-trunc(p_start)+1
14 loop
15 l_query := l_query || ', min( decode( trunc(rd_dt), ' ||
16 'to_date( ''' || to_char(p_start+i-1,'dd-mon') ||
17 ''', ''dd-mon'' ), tch, 0 )) "' || to_char(p_start+i-1,'dd-mon') || '"';
18 end loop;
19 l_query := l_query || ' from jttbl ' ||
20 'where tch > 10 and bsc like '%Bangalore%' ||
21 'group by nameid, cfg, cabtype ' ||
22 'order by nameid' ;
23 open p_cursor for l_query;
24 end;
25* end;
26 /

Warning: Package created with compilation errors.

Also as a further extension how to modify the code to as for only tuesdays (or wednesdays) of a month (or two months.)?

thanks in advance

Tom Kyte
February 12, 2005 - 12:37 pm UTC

"u r rt tom,"

w h a t t h e h e c k i s t h a t


5 end;
6 /
7 create or replace package body crstb_tch
8 as

s h o u l d b e a r e d f l a g t o y o u.

it put the spec and body together there, seems that / is not submitting the code, fix that.

you probably did a "get" which puts a file into the buffer and then ran it. Use "start filename" or @filename, not get....


a n d p l e a s e, u s e r e a l w o r d s. I a m p r e t t y s u r e t h i s h a r d t o r e a d f o r y o u.



demo_pkg_get_query

A Reader,, February 17, 2005 - 12:09 am UTC

Hi TOM,

what the heck is wrong with this

"u r rt tom,"

you are having a complex. period so you cannot read it.,

if some thing was a red flag or green or whatever color i would not be visiting this page.

your answer did not leave me any wiser in oracle or in english



Tom Kyte
February 17, 2005 - 7:51 am UTC

sorry, I could not read this followup at all. It is gibberish as well.

your keyboard must be in a state of failure, maybe that is why the "/" went wrong (with lots of other letters)?


sorry if you cannot take a joke and get the point that in professional communication you should actually use full words. Do u use ths spk in ur resume? (I certainly hope not)

I did point out what you did wrong -- and:

5 end;
6 /
7 create or replace package body crstb_tch
8 as

should have been what we call a "red flag" -- sqlplus was treating the spec and body as ONE piece of code, and I guessed it was most likely due to "using get" instead of the standard @file or "start file"

and if you are not any wiser in Oracle -- that is not my failing here. I pointed out your issue (you used get) -- without even SEEING your command. I told you how to do it right (use @file or "start file"). I did that in normal text, nothing relevant to answering the question was obscured using techniques you used to write to me in the first place.

And this isn't "about english", this would apply in ANY language. In your native language, do you take proper representations of words and make up new ones to communicate to your neighbors?


on a instant message session, perhaps (even then, 1/2 the times I have to ask "what does FDAGFA mean exactly?")....

on a phone SMS message to a friend perhaps.....

but in written professional communication, nope, unacceptable (not to mention hard to read isn't it)

Answer To the impolite "Reader"

A reader, February 17, 2005 - 4:40 pm UTC

I would not be what I am, technically, without this site. And I would not mind paying even if Tom were to charge me by the page. It is that valuable. But it is all free.

Tom spends most of his weekends answering our questions. Let's try to be decent and polite, even if not thankful.



Tom Kyte
February 17, 2005 - 8:17 pm UTC

It isn't about being polite to me really here. I do this mostly because I like to. Be rude if you want, but expect me to make fun of "im speak" -- why? because it is something to be made fun of.

To me, it is much more fundamental. It is about communication. It is about taking enough time to talk to the person on the other end of the wire.

We are in a technical profession (last time I looked). Preciseness is key -- I have enough challenges trying to figure out what the question is sometimes, let alone translating "im speak" into real words. The ability to phrase a complete question so that someone who hasn't actually been consumed by the problem for the last day or so can understand it is hard enough -- but adding in the "static" (i don't know how else to describe it, it is like static on the radio, the sound drops out and I miss something) of "im speak" makes it unbearable.

To that end, I've modified my review banner -- I give fair warning, you use "im speak" and I will make fun of you and call it out. period.

Call it a crusade if you will, I'm still looking for that "u" person -- no clue what gender they are, where they live, what they do -- but people keep asking for them over and over.



Related to MAX salary in each dept

Ravi Kumar, February 18, 2005 - 3:27 am UTC

Hi Tom,
I am a big fan of yours :)

I was reading the follow-up to one of the questions related to getting the maximum salary in each department.

I can see that every query you suggested for this includes a partition by clause. What about this one?

select ename, deptno, sal
  from emp
 where (deptno, sal) in (select deptno, max(sal)
                           from emp
                          group by deptno)

Will it be slower than using partition by?

Thanks & Regards
Ravi

Tom Kyte
February 18, 2005 - 8:09 am UTC

on large sets, you will basically be scanning the table twice.

analytics -- once.
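
the analytic version is along these lines (a sketch using the standard EMP columns):

select ename, deptno, sal
  from (select ename, deptno, sal,
               max(sal) over (partition by deptno) max_sal   -- computed in a single pass
          from emp)
 where sal = max_sal;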

Thanks Tom

A reader, February 18, 2005 - 8:52 am UTC

I learnt a lot from this site. This is by far the best Oracle forum I have ever been to. I learn something new every day. We should really appreciate the fact that Tom is taking the time to answer so many questions (for God's sake, he is a Vice President). I hope you continue your service to the Oracle world.
Tom..You Rock

A reader, February 22, 2005 - 9:53 am UTC

Hi Tom,

I read the follow-up and got very disappointed by some people. But please just ignore them and continue helping the Oracle world. I learned a lot from your great site.

I am using one of your magic SQL statements for parsing one column and splitting it into different parts. The query works fine. The problem I have is that I have to do it 1000 records at a time. If the query selects more than that, e.g. 10,000 records to process, the session stays active forever and I have to kill the session and start it again with the limit of 1000. Could you please tell me what could be wrong?

SQL> desc nitstoryn
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 SRL                                       NOT NULL NUMBER(12)
 STORYFILE_NME                             NOT NULL VARCHAR2(1000)
 AUTHOR                                             VARCHAR2(42)
 STORY_ORIGIN                                       VARCHAR2(10)
 CATEGORY                                           VARCHAR2(30)
 HEADLINE                                           VARCHAR2(200)
 SUBHEADLINE                                        VARCHAR2(200)
 KEYWORDS                                           VARCHAR2(1000)
 HISTORY                                            VARCHAR2(200)
 DATELINE                                           VARCHAR2(1000)
 HIGHLIGHT                                          VARCHAR2(4000)
 CREATION_TIME                                      VARCHAR2(100)
 LAST_MODIFIED_TIME                                 VARCHAR2(100)
 TIMEZONE                                           VARCHAR2(25)
 FOOTNOTES                                          CLOB
 BODY                                               CLOB
 CREATION_TIME_DATE                                 DATE

SQL> desc nitielinkn
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------
 SRL                                       NOT NULL NUMBER(12)
 NITSTORY_SRL                                       NUMBER(12)
 LINK_TYP                                           VARCHAR2(1)
 LINK                                               VARCHAR2(4000)
 VALID_FLG                                          VARCHAR2(1)
 LAST_VALIDATED_DATE                                DATE


INSERT INTO nitielinkn(srl,nitstory_srl,link_typ,link)
SELECT nitsielink.nextval, srl story_srl, 'E',
       SUBSTR(to_char(body),INSTR(lower(to_char(body)),'href',1,rr.r),
       INSTR(body,'"',INSTR(body,'href',1,rr.r),2)-INSTR(body,'href',1,rr.r)+1) link
FROM
 (SELECT srl, ','||body||',' body, (LENGTH(body)-LENGTH(replace(body,'href','')))/4 cnt
  FROM nitstoryn WHERE INSTR(body,'href',1,1) > 0 ) nitstory,
 (SELECT rownum r
  FROM all_objects
  WHERE rownum <= 100) rr
WHERE rr.r <= nitstory.cnt
and srl >  11000
and srl <= 12000 "This is the part that I have to use to limit it to 1000"
and INSTR(lower(to_char(body)),'href',1,1) > 0
/

Thanks,
- Arash
 

Tom Kyte
February 22, 2005 - 10:01 am UTC

trace it with 1000, 2000, 5000, 10000 see what is happening.

I don't see any reasons why it would "stop", get slower maybe as it pages to temp -- sure.

OK

Catherine, February 22, 2005 - 12:01 pm UTC

Hi Tom,
Is there a better way to write this query?

SQL> select deptno,sal,count(*) from emp
  2  group by deptno,sal
  3  having sal = any(select max(sal) from emp group by deptno)
  4  /

    DEPTNO        SAL   COUNT(*)
---------- ---------- ----------
        10       5000          1
        20       3000          2
        30       2850          1

Please do reply.
Bye!
 

Tom Kyte
February 22, 2005 - 12:41 pm UTC

there are many ways to write that query.

drop the "any" and it'll work -- there is a "better" way. done.


search around for "top-n"

Sql Query

Ratnam, March 02, 2005 - 9:52 am UTC

Hi Tom,
Please help me write a SQL query to list all records that were inserted between 8am today and 8am the previous day.
example:
Between 8am 2-Mar-2005 and 8am 1-Mar-2005.

Thanks a lot in advance.

-Ratnam



Tom Kyte
March 02, 2005 - 10:12 am UTC

trunc(sysdate)+8/24 is 8am today.
trunc(sysdate-1)+8/24 is 8am yesterday.

between those two are the records of interest.
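
for example, assuming the insert time is kept in a date column -- insert_date and your_table here are just made-up names:

select *
  from your_table
 where insert_date between trunc(sysdate-1)+8/24   -- 8am yesterday
                        and trunc(sysdate)+8/24;   -- 8am today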

What commands are in @trace file ?

Parag Jayant Patankar, March 02, 2005 - 10:18 am UTC

Hi tom,

While replying to one of the questions you used
ops$tkyte@ORA9IR2> @trace ( trace.sql ). Can you tell us what the content of trace.sql is?

regards & thanks
pjp


Tom Kyte
March 02, 2005 - 10:57 am UTC

alter session set events '10046 trace name context forever, level 12';


Nice

Raju, March 02, 2005 - 10:32 am UTC

Hello Tom,
I use Oracle 9i Release 2. I would like to insert a single space

between each character of the string "Hello World".

SQL> select 'Hello World' from dual
  2  /

'HELLOWORLD
-----------
Hello World

I would like to get the output as
 
  H e l l o W o r l d

Is it possible in Oracle 8i or 9i?


   

Tom Kyte
March 02, 2005 - 11:07 am UTC

it would be hard using the builtins.

in 10g, easy


  1* select regexp_replace( 'hello world', '(.)', '\1 ' ) from dual
ops$tkyte@ORA10G> /
 
REGEXP_REPLACE('HELLOW
----------------------
h e l l o   w o r l d


in 8i/9i, either have the client do it or write a small plsql function to do it. 
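
such a small function might look like this (a sketch; spread_out is just an example name):

create or replace function spread_out( p_str in varchar2 ) return varchar2
as
    l_result  varchar2(4000);
begin
    for i in 1 .. length(p_str)
    loop
        -- append each character followed by a single space
        l_result := l_result || substr( p_str, i, 1 ) || ' ';
    end loop;
    return rtrim( l_result );
end;
/

select spread_out( 'Hello World' ) from dual;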

Oracle consultant

Dawar, March 03, 2005 - 3:01 pm UTC

Tom,

I need to write a query in the following form:
select * from table where table.field IN f(:bind-variable)
or perhaps
select * from table where table.field IN (select field from f(:bind-variable))
where f(:bind-variable) is a function or stored procedure that returns a rowset, ref cursor, table or whatever.

Regards,
Dawar

Tom Kyte
March 03, 2005 - 5:37 pm UTC

search for str2tbl on this site
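
the general idea is a collection type plus a function that splits a comma-delimited string, which you can then use with TABLE() -- a sketch of that approach (the type name vc_tab is just an example):

create or replace type vc_tab as table of varchar2(4000)
/

create or replace function str2tbl( p_str in varchar2 ) return vc_tab
as
    l_str   long := p_str || ',';
    l_n     number;
    l_data  vc_tab := vc_tab();
begin
    loop
        l_n := instr( l_str, ',' );
        exit when nvl(l_n,0) = 0;
        l_data.extend;
        l_data( l_data.count ) := ltrim( rtrim( substr( l_str, 1, l_n-1 ) ) );
        l_str := substr( l_str, l_n+1 );
    end loop;
    return l_data;
end;
/

-- then the query becomes:
select *
  from t
 where t.field in ( select column_value
                      from table( str2tbl( :bind_variable ) ) );

-- on older releases you may need table( cast( str2tbl(:bind_variable) as vc_tab ) )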

'h e l l o w o r l d'

ant, March 07, 2005 - 12:37 pm UTC

if you're on 9i, pull the string apart then use sys_connect_by_path to put it together again:

SQL> select trim(replace(sys_connect_by_path(c,'_ '),'_')) msg
  2    from (
  3  select iter.pos, msg, substr(d.msg,iter.pos,1) as c
  4    from (select 'hello world' as msg from dual) d,
  5         (select rownum pos from emp) iter
  6   where iter.pos <= length(d.msg)
  7         )
  8   where level = length(msg)
  9   start with pos=1
 10   connect by prior pos=pos-1;
 
MSG
----------------------
h e l l o   w o r l d

still might be better off writing some plsql, but this works in a pinch. amazing how much nicer regex is for doing things like this -- 10 lines of sql code compared to 1 using regex.

SQL Query

RS, March 18, 2005 - 3:28 am UTC

Hi Tom,

I have a table like this :
s_name varchar2,
s_type varchar2,
s_num1 number,
s_num2 number

Values :
'A','X',100,200
'A','Y',,100
'B','X',50,
'B','Z',,75

Output I need:

      X1    X2    Y1    Y2    Z1    Z2
A    100   200     0   100     0     0
B     50     0     0     0     0    75

Can you please help by suggesting a simpler way
to achieve this?

Thanks and regards,
RS

Tom Kyte
March 18, 2005 - 7:03 am UTC

<quote from the page you had to use to put this here>
If your followup requires a response that might include a query, you had better supply very very simple create tables and insert statements. I cannot create a table and populate it for each and every question. The SMALLEST create table possible (no tablespaces, no schema names, just like I do in my examples for you)
</quote>


but -- unless you KNOW beforehand the max number of X's and Y's and Z's (for you see, a sql query has a fixed number of columns), you'll need to read the chapter on dynamic SQL in Expert One-on-One Oracle, where I demonstrate how to run a query to build a query with the right number of columns.

A reader, March 23, 2005 - 12:18 am UTC


Column Format

A reader, April 14, 2005 - 11:51 am UTC

Tom,

I know we can do this: COLUMN column_name FORMAT model
Is there a way to do that for all columns in the select statement?

Thanks.

Tom Kyte
April 14, 2005 - 11:56 am UTC

only for numbers

set numformat ....

but otherwise, you need to do it column by column pretty much.

A reader, April 14, 2005 - 12:00 pm UTC

Thanks. I was hoping that you had some tricks.

A reader, April 15, 2005 - 5:35 pm UTC

Hi Tom,

I have table like

create table t ( id number,
col1 varchar2(1),
col2 varchar2(1),
col3 varchar2(1)
);

insert into t values(1, 'a', 'b', null);
insert into t values(2, null, 'b', 'c');
insert into t values(3, 'a', null, 'c');
insert into t values(4, 'd', 'e', null);
insert into t values(5, 'd', 'f', null);

What I want to do is if any columns ( col1, 2, 3 ) have same value, then the records will be in one group, so records 1,2,3 will be in one group, records 4, 5 will be in other group.

How to do this with sql?

Thanks

Tom Kyte
April 15, 2005 - 6:21 pm UTC

select min(rowid) over ( partition by col1, col2, col3 ) grp,
id, col1, col2, col3
from t
/


grp is your "grouping" column.

A reader, April 15, 2005 - 9:02 pm UTC

Hi Tom,

Maybe I didn't describe it clearly. Actually, I want the result to look like

        ID C C C        GRP
---------- - - - ----------
         1 a b            1
         2   b c          1
         3 a   c          1
         4 d e            2
         5 d f            2

Because records 1 and 3 have the same value of col1 ('a'), 1 and 3 are in the same group; records 1 and 2 have the same value of col2 ('b'), so 1 and 2 are in the same group; records 2 and 3 have the same value of col3 ('c'), so 2 and 3 are in the same group; therefore 1, 2, 3 should all be in the same group. The same applies to records 4 and 5.

The grp number doesn't need to be contiguous; gaps between grp numbers are OK.

How can I do this?

Thanks

Tom Kyte
April 16, 2005 - 8:46 am UTC

and what if record 1 was

a b f

what "group" did it belong to then. I don't see this happening in general. What about



How to sort...

A reader, April 16, 2005 - 6:25 am UTC

Hi Tom,

I was pretty impressed by the way you answer SQL queries. I would appreciate it if you could also look into this.

I have to write a SQL query that retrieves all the columns of the table TEST1 in the given sorting order.

TABLE : TEST1

The raw data is as follows

SQL> Select dispnum, airline, org, des, mtype from TEST1;

DISPNUM        AIRLINE     ORG    DES    MTYPE
-------        -------     ---    ---    -----
    113          CX        HKG    DXB    CP
     13          AH        ALG    JED    EMS
   4146          CX        CAN    DXB    EMS
     50          BI        BWN    DXB    LC
      8          AZ        MXP    AKL    CP
     74          BG        DAC    DXB    LC
      1          LH        FRA    DXB    LC
    BAH          EK        BAH    KWI    CP
     SN                    DXB    LON    CP
     SN          SV        RUH    EWR    CP
   7193                    DXB    CMN    EMS

Sorting is to be done on the columns (airline, dispnum, org, des, mtype) in ascending order. Ensure that dispnum values starting with a character come first in the list.

Your assistance will be highly appreciated.

Rgrds 

Tom Kyte
April 16, 2005 - 8:58 am UTC

no create tables...
no insert into's....
no way for me to test....


but sounds like you might want to order by

order by airline,
case when upper(substr(dispnum,1,1)) between 'A' and 'Z'
then 1
else 2
end,
dispnum, org, des, mtype



A reader, April 16, 2005 - 8:56 am UTC

If record 1 was

a b f

it should belong to group 1. The grouping criterion is: if any column (col1, 2, 3) has the same value, then the records should belong to the same group.

Tom Kyte
April 16, 2005 - 9:07 am UTC

and why not group 2 then? A makes it belong to group 1, F to group 2.

I don't see this happening in SQL efficiently (or even inefficiently maybe..)

A reader, April 16, 2005 - 9:22 am UTC

F doesn't put the first record into group 2, because col3 in group 2 is null.

Tom Kyte
April 16, 2005 - 9:36 am UTC

<quote>
What I want to do is if any columns ( col1, 2, 3 ) have same value, then the
records will be in one group, so records 1,2,3 will be in one group, records 4,
5 will be in other group.
</quote>

so, column 1 in record 2 was null, why does it make it into group 1 (first mention of null)

I don't see this happening in sql, sorry.....

A reader

A, April 17, 2005 - 3:02 am UTC

I have a table called a which has fields like (rid, bname, city, rdate, share).
I have another table called b which has (bname, city, rdate, amount).

I want to find the average share (of the robbers participating in robberies) during the most recent year with any robberies, and the year in which the total amount robbed was the highest.


Tom Kyte
April 17, 2005 - 8:41 am UTC

cool, too bad I see no fields named "robbers" or how A relates to B or anything.

no creates
no inserts
insufficient detail to say anything.

I want to ask a question.

md. oli ullah., April 17, 2005 - 9:27 am UTC

I have installed Oracle database server 9i, but when I opened SQL it required a user name and password. In the user name field I wrote SYSTEM/SCOTT and in the password field I wrote MANAGER/TIGER. Neither of them works, and I get a TNS protocol error message.
How can I get into SQL? Please send a user name and password to my email.
with thanks
md.oli ullah.

Tom Kyte
April 17, 2005 - 9:54 am UTC

when you installed the product, you setup the accounts and passwords (it was part of the installation, you had to supply a sys and system password).


as the Oracle software owner, log into the OS and you should be able to:

connect / as sysdba


the superuser, you can use this account to set scott's password, unlock that account, create a new account (that is the suggested thing to do). After you create that account, log out (you don't use sys or system to test or play with ever), and log back in as your newly created account.
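
a minimal sketch of those steps (demo_user and the passwords are just example names):

connect / as sysdba

alter user scott identified by tiger account unlock;

create user demo_user identified by demo_pass;
grant create session, create table to demo_user;

connect demo_user/demo_pass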

A reader

A, April 17, 2005 - 12:31 pm UTC

Here you go
create table robbery(bankname varchar(100) not null,city varchar(50) not null,rdate date not null,amount integer not null,primary key (bankname,city,rdate));

This table stores information about robberies of banks that the gang has already performed,including how much was stolen.

create table accomplices(robberid integer ,bankname varchar(100) not null,city varchar(50) not null,rdate date not null,share integer not null,foreign key(bankname,city,rdate) references robbery(bankname,city,rdate) on update cascade,primary key(robberid,bankname,city,rdate));

This table stores information about which gang members participated in each bank robbery and what share of the money they got.

select * from robbery;

bankname city rdate amount
N Bank | xxx | 1999-01-08 | 32
Lk Bank | yyy | 1995-02-28 | 19
Lk Bank | xxx | 1995-03-30 | 21


select * from accomplices;
robberid bankname city rdate share

1 | LK Bank | xxx | 1995-03-30 | 42
1 | LK Bank | yyy | 1995-02-28 | 49
1 | N Bank | xxx | 1999-01-08 | 64

And the question remains the same:
write a SQL statement that shows the average robber's share (of the robbers participating in robberies) during the most recent year with any robberies, and the year in which the total amount robbed was the highest. In order to write this query, I don't know if both of these tables have to be used.


Hope this time it's clear.

Cheers

Tom Kyte
April 17, 2005 - 3:26 pm UTC

"on update cascade"

it is clear this isn't Oracle ;)

so, I hesitate to give you sql, since you are not using this database and hence it might not work.


no create tables
no insert intos
no looking by me....

(the foreign keys don't even match up)


but it sounds like simple SQL homework.

Avg share group by year, order by year desc, take the first row...

select * from (select avg(share), whatever_function_YOUR_db_uses(rdate) year
from robbery
group by that_function_whatever_it_is
order by 2 desc )
where rownum = 1 <<=== you'll find that probably doesn't work for you :)

(in fact, these sound exactly like the quiz problems I give to anyone that has SQL on their resume..)

use the same technique on the other table


A reader, April 22, 2005 - 4:36 pm UTC

Hi Tom,

I have a table T

create table test
(
id number,
t number,
t1 number,
t2 number
);

This table has values like the following :

insert into test values (1,1,0,0);
insert into test values (2,0,1,0);
insert into test values (3,0,1,0);
insert into test values (4,0,0,1);
commit;

I want to get a result as follows:

t t1 t2
1 2 1

That is, I want to see the count of all 1s for each of the columns t, t1 and t2 as a single row.

Thanks.


Tom Kyte
April 22, 2005 - 4:42 pm UTC

select count(decode(t,1,1)), count(decode(t1,1,1)), count(decode(t2,1,1)) from t;

small, tiny correction

Menon, April 22, 2005 - 4:56 pm UTC

"select count(decode(t,1,1)), count(decode(t1,1,1)), count(decode(t2,1,1)) from
t; "

should be "from test" instead of "from t" :)

Tom Kyte
April 22, 2005 - 5:03 pm UTC

I read "I have a table T"

:)

Thank You Tom just what I wanted.

A reader, April 22, 2005 - 5:02 pm UTC


query tuning

sreenivasa rao, April 23, 2005 - 2:23 am UTC

Dear Tom,
I have an OLTP database with the following setup:

hash_join_enabled=true
optimizer_feature_enable=9.2.0
optimizer_index_caching=90
optimizer_index_cost_adj=25
optimizer_max_permutations=2000
query_rewrite_enabled=TRUE
query_rewrite_integrity=TRUSTED

and all the tables have reasonably up-to-date statistics, along with histograms on all of their columns.


SELECT
DISTINCT A.COMP_APPL_ID AS "App No",
A.APPDATE AS "App Date",
B.CUSTOMERNAME AS "Customer Name",
D.DESCRIPTION AS "Product",
D.SUBTYPEDESCRIPTION AS "Sub-Product",
A.FILELOGINNO AS "Physical File No",
C.DESCRIPTION AS "Location",
E.BPNAME AS "Sourced By"
FROM
LOT_CUSTOMER_T B,
LOT_GLOBALLOAN_T A,
COMPANY_GENERIC C,
PRODUCT D,
BP E,
LOT_WORKFLOWSTAGE_DTL F,
WORKFLOWSTAGE ws,
COMPANY_EMPLOYEE_ROLE er,
COMP_EMP_AREA ea
WHERE
A.COMP_APPL_ID = B.COMP_APPL_ID(+) AND
A.SRC_BPID=E.BPID AND
C.GENERICID = A.CG_LOCATIONCODE AND
B.GUAR_COAP_FLAG(+)='P' AND
D.PRODUCTCODE = A.PRODUCTCODE AND
NVL(A.PRODUCTSUBTYPE,'NULL') = NVL(D.PRODSUBTYPECODE,'NULL') AND
A.COMPANYID=1000 AND
A.COMP_APPL_ID = F.comp_appl_id AND
ws.workflowstageid = f.workflowstageid AND
ws.rolecode = er.cg_rolecode AND
er.productcode = a.productcode AND
er.employeecode = ea.employeecode AND
ea.cg_locationcode = A.CG_LOCATIONCODE and F.STAGESTATUS='P' AND F.STAGE='STYMREPT' order by 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      2.46       2.59          0          0          0           0
Execute      2      0.01       0.00          0          0          0           0
Fetch     1265     94.02     324.06          2    7393514          0       12649
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1269     96.49     326.65          2    7393514          0       12649

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (SCB)

Rows Row Source Operation
------- ---------------------------------------------------
20 SORT UNIQUE
2901425 NESTED LOOPS
3631139 HASH JOIN
4866 INDEX FAST FULL SCAN CM_EMP_AR_INDX3 (object id 24087)
12657 NESTED LOOPS OUTER
12628 HASH JOIN
12628 HASH JOIN
12628 HASH JOIN
465 TABLE ACCESS FULL WORKFLOWSTAGE
12628 HASH JOIN
12628 TABLE ACCESS BY INDEX ROWID LOT_WORKFLOWSTAGE_DTL
12628 INDEX RANGE SCAN SREE_LTWORKFLOWDTL_IND6 (object id 31139)
264057 HASH JOIN
42 TABLE ACCESS FULL PRODUCT
264057 TABLE ACCESS FULL LOT_GLOBALLOAN_T
4944 TABLE ACCESS FULL BP
5718 TABLE ACCESS FULL COMPANY_GENERIC
12594 TABLE ACCESS BY INDEX ROWID LOT_CUSTOMER_T
18455 INDEX RANGE SCAN LOT_CUSTOMER_T_INDX1 (object id 24473)
2901425 INDEX RANGE SCAN CM_EMP_RL_INDX7 (object id 29462)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
20 SORT (UNIQUE)
2901425 NESTED LOOPS
3631139 HASH JOIN
4866 INDEX GOAL: ANALYZED (FAST FULL SCAN) OF
'CM_EMP_AR_INDX3' (NON-UNIQUE)
12657 NESTED LOOPS (OUTER)
12628 HASH JOIN
12628 HASH JOIN
12628 HASH JOIN
465 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'WORKFLOWSTAGE'
12628 HASH JOIN
12628 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'LOT_WORKFLOWSTAGE_DTL'
12628 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'SREE_LTWORKFLOWDTL_IND6' (NON-UNIQUE)
264057 HASH JOIN
42 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PRODUCT'
264057 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'LOT_GLOBALLOAN_T'
4944 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'BP'
5718 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'COMPANY_GENERIC'
12594 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'LOT_CUSTOMER_T'
18455 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'LOT_CUSTOMER_T_INDX1' (NON-UNIQUE)
2901425 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'CM_EMP_RL_INDX7'
(NON-UNIQUE)

Could you give me suggested indexes?
I don't mind even if the optimizer does not use those indexes, but I want to know exactly where indexes need to be created.
Thanks in advance.




Tom Kyte
April 23, 2005 - 9:14 am UTC

set autotrace traceonly explain

and run that query in sqlplus and let's see the autotrace "guess" at the cardinalities.

(and if you could set statistics_level = 'all' in your session and rerun the tkprof that'd be really great, we'd see:

TABLE ACCESS FULL OBJ#(222) (cr=3 r=0 w=0 time=43 us)


in the tkprof -- with the logical, physical io by step)

query tuning

sreenivasa rao, April 25, 2005 - 1:28 am UTC

Thanks for your quick response.
See the explain plan and tkprof (with statistics_level='all') below.



Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3589 Card=33550 Byte
          s=7381000)

   1    0   SORT (UNIQUE) (Cost=2498 Card=33550 Bytes=7381000)
   2    1     NESTED LOOPS (Cost=1407 Card=33550 Bytes=7381000)
   3    2       HASH JOIN (Cost=1406 Card=362966 Bytes=74045064)
   4    3         INDEX (FAST FULL SCAN) OF 'CM_EMP_AR_INDX3' (NON-UNI
          QUE) (Cost=3 Card=4881 Bytes=53691)

   5    3         NESTED LOOPS (OUTER) (Cost=1401 Card=1213 Bytes=2341
          09)

   6    5           HASH JOIN (Cost=1097 Card=1213 Bytes=201358)
   7    6             HASH JOIN (Cost=1086 Card=1213 Bytes=167394)
   8    7               HASH JOIN (Cost=1072 Card=1213 Bytes=133430)
   9    8                 TABLE ACCESS (FULL) OF 'WORKFLOWSTAGE' (Cost
          =3 Card=465 Bytes=5580)

  10    8                 HASH JOIN (Cost=1068 Card=1213 Bytes=118874)
  11   10                   TABLE ACCESS (BY INDEX ROWID) OF 'LOT_WORK
          FLOWSTAGE_DTL' (Cost=390 Card=10157 Bytes=213297)

  12   11                     INDEX (RANGE SCAN) OF 'SREE_LTWORKFLOWDT
          L_IND6' (NON-UNIQUE) (Cost=39 Card=1)

  13   10                   HASH JOIN (Cost=665 Card=32163 Bytes=24765
          51)

  14   13                     TABLE ACCESS (FULL) OF 'PRODUCT' (Cost=2
           Card=42 Bytes=1554)

  15   13                     TABLE ACCESS (FULL) OF 'LOT_GLOBALLOAN_T
          ' (Cost=661 Card=263946 Bytes=10557840)

  16    7               TABLE ACCESS (FULL) OF 'BP' (Cost=12 Card=4944
           Bytes=138432)

  17    6             TABLE ACCESS (FULL) OF 'COMPANY_GENERIC' (Cost=9
           Card=5727 Bytes=160356)

  18    5           TABLE ACCESS (BY INDEX ROWID) OF 'LOT_CUSTOMER_T'
          (Cost=2 Card=1 Bytes=27)

  19   18             INDEX (RANGE SCAN) OF 'LOT_CUSTOMER_T_INDX1' (NO
          N-UNIQUE)

  20    2       INDEX (RANGE SCAN) OF 'CM_EMP_RL_INDX7' (NON-UNIQUE)

SQL> alter session set statistics_level='all';

Session altered.

SQL> alter session set sql_trace=true;

Session altered.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      2.58       2.77          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      900     65.66     107.06       5887    3977886          0       13482
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      902     68.24     109.84       5887    3977886          0       13482

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30  (SCB)

Rows     Row Source Operation
-------  ---------------------------------------------------
  13482  SORT UNIQUE (cr=3977886 r=5887 w=0 time=107007846 us)
3097218   NESTED LOOPS  (cr=3977886 r=5887 w=0 time=93139700 us)
3906063    HASH JOIN  (cr=51085 r=5887 w=0 time=18144270 us)
   4882     INDEX FAST FULL SCAN CM_EMP_AR_INDX3 (cr=24 r=18 w=0 time=23439 us)(object id 24087)
  13507     NESTED LOOPS OUTER (cr=51061 r=5869 w=0 time=11004008 us)
  13478      HASH JOIN  (cr=9527 r=5678 w=0 time=10308218 us)
  13478       HASH JOIN  (cr=9451 r=5676 w=0 time=10104237 us)
  13478        HASH JOIN  (cr=9344 r=5676 w=0 time=10001398 us)
    465         TABLE ACCESS FULL WORKFLOWSTAGE (cr=16 r=0 w=0 time=918 us)
  13478         HASH JOIN  (cr=9328 r=5676 w=0 time=9945039 us)
  13478          TABLE ACCESS BY INDEX ROWID LOT_WORKFLOWSTAGE_DTL (cr=2390 r=0 w=0 time=120576 us)
  13478           INDEX RANGE SCAN SREE_LTWORKFLOWDTL_IND6 (cr=145 r=0 w=0 time=23638 us)(object id 31139)
 264973          HASH JOIN  (cr=6938 r=5676 w=0 time=8599523 us)
     42           TABLE ACCESS FULL PRODUCT (cr=7 r=0 w=0 time=245 us)
 264973           TABLE ACCESS FULL LOT_GLOBALLOAN_T (cr=6931 r=5676 w=0 time=7296217 us)
   4944        TABLE ACCESS FULL BP (cr=107 r=0 w=0 time=6384 us)
   5727       TABLE ACCESS FULL COMPANY_GENERIC (cr=76 r=2 w=0 time=86230 us)
  13442      TABLE ACCESS BY INDEX ROWID LOT_CUSTOMER_T (cr=41534 r=191 w=0 time=585595 us)
  19495       INDEX RANGE SCAN LOT_CUSTOMER_T_INDX1 (cr=27019 r=0 w=0 time=322852 us)(object id 24473)
3097218    INDEX RANGE SCAN CM_EMP_RL_INDX7 (cr=3926801 r=0 w=0 time=52575212 us)(object id 29462)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
  13482   SORT (UNIQUE)
3097218    NESTED LOOPS
3906063     HASH JOIN
   4882      INDEX   GOAL: ANALYZED (FAST FULL SCAN) OF
                 'CM_EMP_AR_INDX3' (NON-UNIQUE)
  13507      NESTED LOOPS (OUTER)
  13478       HASH JOIN
  13478        HASH JOIN
  13478         HASH JOIN
    465          TABLE ACCESS   GOAL: ANALYZED (FULL) OF
                     'WORKFLOWSTAGE'
  13478          HASH JOIN
  13478           TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID)
                      OF 'LOT_WORKFLOWSTAGE_DTL'
  13478            INDEX   GOAL: ANALYZED (RANGE SCAN) OF
                       'SREE_LTWORKFLOWDTL_IND6' (NON-UNIQUE)
 264973           HASH JOIN
     42            TABLE ACCESS   GOAL: ANALYZED (FULL) OF
                       'PRODUCT'
 264973            TABLE ACCESS   GOAL: ANALYZED (FULL) OF
                       'LOT_GLOBALLOAN_T'
   4944         TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'BP'
   5727        TABLE ACCESS   GOAL: ANALYZED (FULL) OF
                   'COMPANY_GENERIC'
  13442       TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF
                  'LOT_CUSTOMER_T'
  19495        INDEX   GOAL: ANALYZED (RANGE SCAN) OF
                   'LOT_CUSTOMER_T_INDX1' (NON-UNIQUE)
3097218     INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'CM_EMP_RL_INDX7'
                (NON-UNIQUE)

********************************************************************************

alter session set sql_trace=false


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30  (SCB)



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      2.58       2.77          0          0          0           0
Execute      3      0.00       0.01          0          0          0           0
Fetch      900     65.66     107.06       5887    3977886          0       13482
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      905     68.24     109.85       5887    3977886          0       13482

Misses in library cache during parse: 2
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       11      0.00       0.00          0          0          0           0
Execute     11      0.00       0.00          0          0          0           0
Fetch       11      0.00       0.00          0         24          0          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       33      0.00       0.00          0         24          0          11

Misses in library cache during parse: 1

    3  user  SQL statements in session.
   11  internal SQL statements in session.
   14  SQL statements in session.
    1  statement EXPLAINed in this session.
********************************************************************************
Trace file: rfes_ora_11054.trc
Trace file compatibility: 9.00.01
Sort options: default

       1  session in tracefile.
       3  user  SQL statements in trace file.
      11  internal SQL statements in trace file.
      14  SQL statements in trace file.
       4  unique SQL statements in trace file.
       1  SQL statements EXPLAINed using schema:
           SCB.prof$plan_table
             Default table was used.
             Table was created.
             Table was dropped.
    1078  lines in trace file.

Best regards
sreenivas 

 

Tom Kyte
April 25, 2005 - 7:11 am UTC

It was doing fine until here:

   1    0   SORT (UNIQUE) (Cost=2498 Card=33550 Bytes=7381000)
   2    1     NESTED LOOPS (Cost=1407 Card=33550 Bytes=7381000)
   3    2       HASH JOIN (Cost=1406 Card=362966 Bytes=74045064)
   4    3         INDEX (FAST FULL SCAN) OF 'CM_EMP_AR_INDX3' (NON-UNIQUE) (Cost=3 Card=4881 Bytes=53691)
   5    3         NESTED LOOPS (OUTER) (Cost=1401 Card=1213 Bytes=234109)


Rows     Row Source Operation
-------  ---------------------------------------------------
  13482  SORT UNIQUE (cr=3977886 r=5887 w=0 time=107007846 us)
3097218   NESTED LOOPS  (cr=3977886 r=5887 w=0 time=93139700 us)
3906063    HASH JOIN  (cr=51085 r=5887 w=0 time=18144270 us)
   4882     INDEX FAST FULL SCAN CM_EMP_AR_INDX3 (cr=24 r=18 w=0 time=23439 us)(object id 24087)
  13507     NESTED LOOPS OUTER (cr=51061 r=5869 w=0 time=11004008 us)


the time jumped -- massive nested loops join of 3+ million rows. it was expecting about 33,000.

What is cm_emp_ar_indx3 on, what are we joining at that step.

QUERY TUNING

sreenivasa rao, April 25, 2005 - 9:52 am UTC

The index CM_EMP_AR_INDX3 contains the columns below:
EMPLOYEECODE
PRODUCTCODE
CG_LOCATIONCODE
The table is COMP_EMP_AREA "ea",
and the joins are as follows:

er.employeecode = ea.employeecode AND
ea.cg_locationcode = A.CG_LOCATIONCODE



Thanks in advance
regards
sreenivas


Tom Kyte
April 25, 2005 - 10:00 am UTC

and how about CM_EMP_RL_INDX7

(is cg_locationcode skewed? does it look like that is the part of the join being processed with that fast full index scan -- the one on cg_locationcode? all I can see are index and table names, so it is hard for me to tell)

Query

Karthick, April 26, 2005 - 4:04 am UTC

Hi tom

I was trying to write a query that will return all the tables in a schema, along with the total number of columns in each table, the number of indexes on it, the constraints if any, and the total number of records in each table. But I believe it is not possible with a normal SQL statement.

The output is something like....

TABLE_NAME COL INDEX CONSTRAIN ROW
---------------------------------------------------
TABLE1 10 0 0 250
TABLE2 15 2 1 10000

I was planning to have this as a base query, and then to have drill-down queries which will show the column detail,
index detail, constraint detail, etc.

can you help me with this.

thank you.

Tom Kyte
April 26, 2005 - 7:37 am UTC

select table_name, num_rows,
       (select count(*) from user_indexes     where table_name = x.table_name) idx,
       (select count(*) from user_tab_columns where table_name = x.table_name) col,
       (select count(*) from user_constraints where table_name = x.table_name) con
  from user_tables x;


I would use num_rows as an approximation -- it is filled in after an analyze. counting the rows in a table would be prohibitively expensive.
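
As a side note (not part of the answer above): num_rows is only populated once statistics exist, so if they have never been gathered you could run something like the following first -- a minimal sketch, assuming you are allowed to gather statistics on your own schema:

begin
    -- populate num_rows (and other statistics) for every table owned by the current user
    dbms_stats.gather_schema_stats( ownname => user );
end;
/

After that, the dictionary query above returns meaningful num_rows values instead of NULLs.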

Table Detail

P.Karthick, April 27, 2005 - 8:29 am UTC

Thank you for your answer.

Can you please tell me where I could get information about all the tables that Oracle uses to manage the database.

thank you.

P.Karthick.

Tom Kyte
April 27, 2005 - 8:30 am UTC

the reference guide
</code> https://docs.oracle.com#index-REF <code>
documents the dictionary tables.
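
If you just want to browse which dictionary views exist, one option (a small sketch of my own, not from the reference guide) is to query the DICTIONARY view, which lists the dictionary views visible to you together with a one-line comment for each:

select table_name, comments
  from dictionary
 where table_name like 'USER%'
 order by table_name;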

Whats the logic behind....?

VIKAS SANGAR, April 27, 2005 - 12:03 pm UTC

Dear Mr. Kyte,

I would like to know what sorting and searching techniques (quick sort / bubble sort / binary search, etc.) Oracle applies in the background while sorting or searching data.

Secondly, whenever we query a table/view (select ... from tab1), we only see the names of the columns along with the records present therein. But why don't we see the name of the table itself by default? What's the logic behind this? Is there any way the name of the table/view can also be displayed automatically when queried (other than using Reports)?

Tom Kyte
April 27, 2005 - 12:24 pm UTC

the details of the sort are not documented -- however, it is definitely not a bubble sort :)

it is a complex thing; it must be able to sort many gigabytes of data on a machine with far less memory than that.


In a relational database, a query is a table....

but consider:

select * from emp, dept, bonus, project ......



what "table" is that. don't forget unions, minus, intersect, scalar subqueries, etc...

Better way??

A reader, May 13, 2005 - 4:43 pm UTC

Hi Tom,

select id from a
where tt = (select max(tt) from a
where id1 in (select id from b
where res = 10)
and col1 = 'D');

Is there a better way to rewrite this query?

Tom Kyte
May 13, 2005 - 5:09 pm UTC

see
</code> https://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52asktom-1735913.html <code>
PLAYING WITH AGGREGATION


I'd demonstrate, but I don't know what datatype A.TT is, so -- I cannot.

(and I'm assuming TT is unique?)

Followup to previous question.....

A reader, May 13, 2005 - 7:54 pm UTC

Hi Tom,

TT is a column of date datatype and it is not unique. Is it possible for you to demonstrate how to rewrite that query?

Thanks.

Tom Kyte
May 13, 2005 - 8:18 pm UTC

select id from a
where tt = (select max(tt) from a
where id1 in (select id from b
where res = 10)
and col1 = 'D');


actually, now that I note that it is not a correlated subquery, it looks fine.

I see no issues with that query as written, given the question it answers.



Nice

Ramesh, May 14, 2005 - 3:17 am UTC

Hello Tom,
Could you please provide the documentation link
for TKPROF??

Tom Kyte
May 14, 2005 - 9:35 am UTC

</code> https://docs.oracle.com#index-SQL <code>
performance tuning guide

A reader, May 19, 2005 - 2:41 pm UTC

Hi Tom,

I need to get results as follows:

I have a table t which has about 9 mil rows

create table t
(
lname varchar2(10),
mname varchar2(10),
fname varchar2(10),
empid number,
id2 varchar2(10)
);

I need to get the following results:

I need a query which will return rows that have the same mname, fname, empid, and id2 but a different lname. What is the easiest way to do this?

Thanks.


Tom Kyte
May 19, 2005 - 2:49 pm UTC

select *
  from ( select a.*,
                count(distinct lname) over (partition by mname, fname, empid, id2) cnt
           from t a )
 where cnt > 1;

SQL help for parent child

Parag Jayant Patankar, May 20, 2005 - 6:06 am UTC

Hi Tom,

I required your expertise help for writing a SQL statement in Oracle 8.1.7 and onwards

create table c ( c1 number(3), c2 varchar2(10));

create table a ( a1 number(3), a2 number(3), a3 number, a4 number(1));


insert into c values (1, 'parag');
insert into c values (2, 'tom');
insert into c values (3, 'jonathan');
insert into c values (4, 'neville');
insert into c values (5, 'sam');

commit;


insert into a values (1, 1, 100, 1);
insert into a values (1, 2, 100, 1);
insert into a values (1, 3, 400, 1);
insert into a values (2, 1, 500, 3);
insert into a values (2, 2, 600, 1);
insert into a values (2, 3, 7000,1);
insert into a values (3, 1, 500, 3);
insert into a values (5, 1, 300, 3);
insert into a values (5, 1, 400, 3);
insert into a values (5, 1, 700, 3);


I have two tables, c and a. C is the client master, while A is the accounts master; A -> C is a many-to-one relationship. The two tables are linked by columns c1 and a1. Column a4 in table A is an account modification indicator: if the value of a4 is > 2, then the account is closed.

I want to report all clients who have no accounts or whose ALL accounts are closed.

I want output in following way

3 jonathan ALL ACCOUNTS CLOSED
4 neville NO ACCOUNTS
5 sam ALL ACCOUNTS CLOSED

Can you show me how to write this SQL? (In the actual system I have a huge number of records in the client and accounts masters.)

regards & thanks
pjp

Tom Kyte
May 20, 2005 - 8:05 am UTC

ops$tkyte@ORA817DEV> select a.*, decode( a.cnt1, 0, 'no accts', 'all closed' )
  2    from (
  3  select c.c1, c.c2, count(a.a1) cnt1, count(case when a.a4>2 then 1 end) cnt2
  4    from c, a
  5   where c.c1 = a.a1(+)
  6  group by c.c1, c.c2
  7        ) a
  8   where cnt1=cnt2;
 
        C1 C2               CNT1       CNT2 DECODE(A.C
---------- ---------- ---------- ---------- ----------
         3 jonathan            1          1 all closed
         4 neville             0          0 no accts
         5 sam                 3          3 all closed
 

A reader, May 20, 2005 - 12:33 pm UTC

Hi Tom,

I have a table t
(
id1 number,
id2 number
);

I want a query which will return rows that have the same id2 but different id1 values. What is the easiest way to do this?

Thanks.

Tom Kyte
May 20, 2005 - 6:37 pm UTC

select *
  from ( select id2, id1,
                count(distinct id1) over (partition by id2) cnt
           from t )
 where cnt > 1;

SQL Help

Parag Jayant Patankar, May 24, 2005 - 11:01 am UTC

Hi Tom

Thanks for your brilliant answer to my earlier SQL question in this thread.

I need one more piece of expert help, this time in Oracle 9.2.

Suppose I have following tables

create table c ( c1 number(3), c2 varchar2(10));
create table a ( a1 number(3), a2 number(3), a3 number, a4 number(1));
create table ac( ac1 number(3), ac2 number(3));


insert into c values (1, 'parag');
insert into c values (2, 'tom');
insert into c values (3, 'jonathan');
insert into c values (4, 'neville');
insert into c values (5, 'sam');
insert into c values (6, 'joe');
insert into c values (7, 'barbie');
insert into c values (8, 'benoit');
insert into c values (9, 'jack');

insert into a values (1, 1, 100, 1);
insert into a values (1, 2, 100, 1);
insert into a values (1, 3, 400, 1);
insert into a values (2, 1, 500, 3);
insert into a values (2, 2, 600, 1);
insert into a values (2, 3, 7000,1);
insert into a values (3, 1, 500, 3);
insert into a values (5, 1, 300, 3);
insert into a values (5, 1, 400, 3);
insert into a values (5, 1, 700, 3);


insert into ac values (6, 7);
insert into ac values (6, 8);

commit;

For my question in this thread you provided the following excellent answer:

select a.*, decode( a.cnt1, 0, 'no accts', 'all closed' )
from (
select c.c1, c.c2, count(a.a1) cnt1, count(case when a.a4>2 then 1 end) cnt2
from c, a
where c.c1 = a.a1(+)
group by c.c1, c.c2
) a
where cnt1=cnt2
/

Which gives the following output:


        C1 C2               CNT1       CNT2 DECODE(A.C
---------- ---------- ---------- ---------- ----------
         3 jonathan            1          1 all closed
         4 neville             0          0 no accts
         5 sam                 3          3 all closed
         6 joe                 0          0 no accts
         7 barbie              0          0 no accts
         8 benoit              0          0 no accts
         9 jack                0          0 no accts

Now I want to modify this query so that clients appearing in table "ac" are not reported: clients 7 and 8 are linked to client 6, so clients 6, 7, and 8 should not appear in the SQL output.

For e.g.

        C1 C2               CNT1       CNT2 DECODE(A.C
---------- ---------- ---------- ---------- ----------
         3 jonathan            1          1 all closed
         4 neville             0          0 no accts
         5 sam                 3          3 all closed
         9 jack                0          0 no accts


Most of the time I can arrive at the correct answer, but I do not know whether I am arriving at it by the right method (performance-wise). For example, I can generate the required output using a MINUS set operation or a subquery, but I do not know whether that is the correct way. I always have the feeling that this kind of output can be generated using some simple principle.

Kindly guide me:

1. Can you show me a SQL query for generating the required output? (My actual ac table has a few thousand records.)

2. How do I arrive at the correct principle for writing such SQL?

regards & thanks
pjp

Tom Kyte
May 24, 2005 - 1:10 pm UTC

and c1 not in (select ac1 from ac union all select ac2 from ac )
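
Putting that predicate into the earlier query would look roughly like this -- my sketch, placing the filter in the inner WHERE clause so clients 6, 7 and 8 never make it into the aggregation:

select a.*, decode( a.cnt1, 0, 'no accts', 'all closed' )
  from (
select c.c1, c.c2, count(a.a1) cnt1, count(case when a.a4>2 then 1 end) cnt2
  from c, a
 where c.c1 = a.a1(+)
   and c.c1 not in (select ac1 from ac union all select ac2 from ac)
 group by c.c1, c.c2
       ) a
 where cnt1 = cnt2;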

Query

A reader, July 07, 2005 - 10:35 am UTC

Hi Tom,

I have two tables

CREATE TABLE t1
(
CLM_ID NUMBER,
HUB_LOAD_SEQUENCE_NO NUMBER);

CREATE TABLE t2
(
HUB_LOAD_SEQUENCE_NO NUMBER,
EDW_LOAD_SEQUENCE_NO NUMBER);

and index on t1 table

CREATE INDEX t1_idx01 ON t1
(HUB_LOAD_SEQUENCE_NO);

and I am writing the following query to get all the rows within the range identified in table t2. We may have millions of rows in t1, but t2 always has one row. HUB_LOAD_SEQUENCE_NO is an integer that is incremented in t2 before data is loaded into t1, and it is then populated in t1.

select *
from t1 a
where exists
(select 1 from t2 b
where a.HUB_LOAD_SEQUENCE_NO > b.EDW_LOAD_SEQUENCE_NO and a.HUB_LOAD_SEQUENCE_NO <= b.HUB_LOAD_SEQUENCE_NO)

Do you think from performance point of view the above query is okay?

Thanks


Tom Kyte
July 07, 2005 - 1:04 pm UTC

why wouldn't you do the more straightforward:

select * from t1
where HUB_LOAD_SEQUENCE_NO > (select EDW_LOAD_SEQUENCE_NO from t2)
and HUB_LOAD_SEQUENCE_NO <=(select HUB_LOAD_SEQUENCE_NO from t2)

especially if you wanted indexes to be useful on T1?



A reader, July 07, 2005 - 2:30 pm UTC

Hi Tom,

Thanks a million!!!

Query Logic Help

denni50, July 07, 2005 - 3:07 pm UTC

Hi Tom

I have the query below. What I need to do is match one column to another column on a different record in the same table, and if there is a match, combine (aggregate) them.

For example...there is a special campaign where records will have an acknowledgement code (yearly aggregate sums) and a default_acknowledgement code (monthly aggregate sums). On this one report, management wants the sums/totals broken down by month (05FS), not yearly (05S)...'F' represents June 2005 (see below).

SQL> select acknowledgement,default_acknowledgement,campaigncode
  2  from appeals
  3  where campaigncode='D05FS';

ACKN DEFA CAMPAI
---- ---- ------
05S  05FS D05FS
05S  05FS D05FS
05S  05FS D05FS

I only need to work with the first 3 characters of both columns. What I need to accomplish is:

  find where substr(default_acknowledgement,1,3) = substr(acknowledgement,1,3), then group those sums together and output them.

If you look at the output below, the two '05F' rows are separate (because of the UNION); I need those two 05F rows combined, with a sum = 619763.
 

SQL> select substr(a.acknowledgement,1,3) as Project,sum(a.pieces_mailed) as Dropped,
  2  sum(a.total#) as Response, sum(a.total$) as Income, avg(usernumber1) as Cost_Per_Unit
  3  from appeals a
  4  where a.acknowledgement in('05AA',
  5    '05AB', 
  6    '05AC', 
  7    '05AD',
  8    '05AF',
  9    '05AG',
 10    '05BA', 
 11    '05BB',
 12    '05BC', 
 13    '05BD', 
 14    '05BF',
 15    '05BG',
 16    '05CA', 
 17    '05CB', 
 18    '05CC', 
 19    '05CD',
 20    '05CF',
 21    '05CG',
 22    '05CH',
 23    '05DA', 
 24    '05DB', 
 25    '05DC', 
 26    '05DD', 
 27    '05DF',
 28    '05DG',
 29    '05EA', 
 30    '05EC', 
 31    '05ED', 
 32    '05EF',
 33    '05EG',
 34    '05FA', 
 35    '05FB', 
 36    '05FC', 
 37    '05FD', 
 38    '05FF',
 39    '05FG',
 40    '05Y') 
 41   group by substr(a.acknowledgement,1,3)
 42  UNION
 43  select substr(a.default_acknowledgement,1,3) as Project,sum(a.pieces_mailed) as Dropped
 44  sum(a.total#) as Response,sum(a.total$) as Income,avg(usernumber1) as Cost_Per_Unit
 45  from appeals a
 46  where a.default_acknowledgement in('05FS')
 47  group by substr(a.default_acknowledgement,1,3)
 48  /

PRO    DROPPED   RESPONSE     INCOME COST_PER_UNIT
--- ---------- ---------- ---------- -------------
05A     843388      33439  887844.01    .471597418
05B     914806      32619 1055638.96    .468912784
05C     991906      46599 1829657.49     .56123302
05D     814946      24964   735433.5    .464850449
05E     526517       9956   332143.4    .452991228
05F      22533        136     4619.9           .56
05F     597230      11241  351794.76    .524829932
05Y     163066      12457  539362.25    .881592357

8 rows selected.

thanks for any tips/help!


 

Tom Kyte
July 07, 2005 - 3:56 pm UTC

real quick is:

select project, sum(dropped), sum(response), ....
from (Q)
group by project;

where Q is your query with UNION changed to UNION ALL (you do know your union is a DISTINCT operation - you could "lose" rows with it!!!)


but I'm curious, if

substr(default_acknowledgement,1,3)=substr(acknowledgement,1,3)

on the SAME ROW, haven't you double counted them?

we could probably get this down to a single pass, but if the simple quick fix above is faster than fast enough, it is the path of least resistance.


thanks Tom

denni50, July 07, 2005 - 4:20 pm UTC

....I'll try your solution and get back with you.

No...the match can never be on the same row, since the
same record will have '05S' and '05FS'...the first
three characters of each is 05S,05F respectively.

The default_acknowledgement( first 3 characters) has to
match other records(rows) that has acknowledgement(first 3
characters) equaling '05F'

so that:
row 1 substr(def_ack,1,3)= row 10(substr(ack,1,3)

thanks!





UNION ALL

denni50, July 07, 2005 - 5:00 pm UTC

Tom

if I use a union all it pretty much does the same thing
except it places the second 05F at the bottom.


SQL> select substr(a.acknowledgement,1,3),sum(a.pieces_mailed),
  2   sum(a.total#),sum(a.total$), avg(usernumber1)
  3    from appeals a
  4    where a.acknowledgement in('05AA',
  5    '05AB',
  6    '05AC',
  7    '05AD',
  8    '05AF',
  9    '05AG',
 10    '05BA',
 11    '05BB',
 12    '05BC',
 13    '05BD',
 14    '05BF',
 15    '05BG',
 16    '05CA',
 17    '05CB',
 18    '05CC',
 19    '05CD',
 20    '05CF',
 21    '05CG',
 22    '05CH',
 23    '05DA',
 24    '05DC',
 25    '05DD',
 26    '05DF',
 27    '05DG',
 28    '05EA',
 29    '05EC',
 30    '05ED',
 31    '05EF',
 32    '05EG',
 33    '05FA',
 34    '05FB',
 35    '05FC',
 36    '05FD',
 37    '05FF',
 38    '05FG',
 39    '05Y')
 40      group by substr(a.acknowledgement,1,3)
 41     UNION ALL
 42     select substr(a.default_acknowledgement,1,3) as
 43  Project,sum(a.pieces_mailed) as Dropped,sum(a.total#) as Response,sum(a.total$) a
 44  avg(usernumber1) as Cost_Per_Unit
 45  from appeals a
 46  where a.default_acknowledgement in('05FS')
 47  group by substr(a.default_acknowledgement,1,3);

SUB SUM(A.PIECES_MAILED) SUM(A.TOTAL#) SUM(A.TOTAL$) AVG(USERNUMBER1)
--- -------------------- ------------- ------------- ----------------
05A               843388         33439     887844.01       .471597418
05B               914806         32619    1055638.96       .468912784
05C               991906         46599    1829657.49        .56123302
05D               302423          8242     275749.03       .483467742
05E               526517          9956      332143.4       .452991228
05F               597230         11241     351794.76       .524829932
05Y               163068         12457     539362.25       .880668081
05F                22533           136        4619.9              .56

8 rows selected.

SQL> 

not sure how to code the from (Q) query...I keep getting a lot of invalid identifier errors, etc.

  1  select substr(acknowledgement,1,3) as Project,sum(pieces_mailed) as Dropped,
  2   sum(total#) as Response, sum(total$) as Income, avg(usernumber1) as Cost_Per_Unit
  3  from(select substr(a.acknowledgement,1,3),sum(a.pieces_mailed),
  4   sum(a.total#),sum(a.total$), avg(usernumber1)
  5    from appeals a
  6    where a.acknowledgement in('05AA',
  7    '05AB',
  8    '05AC',
  9    '05AD',
 10    '05AF',
 11    '05AG',
 12    '05BA',
 13    '05BB',
 14    '05BC',
 15    '05BD',
 16    '05BF',
 17    '05BG',
 18    '05CA',
 19    '05CB',
 20    '05CC',
 21    '05CD',
 22    '05CF',
 23    '05CG',
 24    '05CH',
 25    '05DA',
 26    '05DC',
 27    '05DD',
 28    '05DF',
 29    '05DG',
 30    '05EA',
 31    '05EC',
 32    '05ED',
 33    '05EF',
 34    '05EG',
 35    '05FA',
 36    '05FB',
 37    '05FC',
 38    '05FD',
 39    '05FF',
 40    '05FG',
 41    '05Y')
 42      group by substr(a.acknowledgement,1,3)
 43     UNION ALL
 44     select substr(a.default_acknowledgement,1,3) as
 45  Project,sum(a.pieces_mailed) as Dropped,sum(a.total#) as Response,sum(a.total$) as Inc
 46  avg(usernumber1) as Cost_Per_Unit
 47  from appeals a
 48  where a.default_acknowledgement in('05FS')
 49  group by substr(a.default_acknowledgement,1,3))
 50* group by substr(acknowledgement,1,3)
SQL> /
group by substr(acknowledgement,1,3)
                *
ERROR at line 50:
ORA-00904: "ACKNOWLEDGEMENT": invalid identifier

thanks again!

 

Tom Kyte
July 07, 2005 - 5:26 pm UTC

select project, sum(response), sum(inc), ...
from ( YOUR_QUERY_AS_IT_EXISTS_GOES_HERE )
group by project;



THANKS TOM!!!

denni50, July 08, 2005 - 8:22 am UTC

it worked!!!...exactly what I needed...at first I didn't
understand how to apply the first select statement to the
from(select statement)....learned something new...yeayyy!
;~)

SQL> select Project,sum(Dropped),
  2   sum(Response),sum(Income),avg(Cost_Per_Unit) 
  3  from(select substr(a.acknowledgement,1,3) as project,sum(a.pieces_mailed) as Dropped,
  4   sum(a.total#) as Response,sum(a.total$) as Income, avg(a.usernumber1) as Cost_Per_Unit
  5    from appeals a
  6    where a.acknowledgement in('05AA',
  7    '05AB',
  8    '05AC',
  9    '05AD',
 10    '05AF',
 11    '05AG',
 12    '05BA',
 13    '05BB',
 14    '05BC',
 15    '05BD',
 16    '05BF',
 17    '05BG',
 18    '05CA',
 19    '05CB',
 20    '05CC',
 21    '05CD',
 22    '05CF',
 23    '05CG',
 24    '05CH',
 25    '05DA',
 26    '05DC',
 27    '05DD',
 28    '05DF',
 29    '05DG',
 30    '05EA',
 31    '05EC',
 32    '05ED',
 33    '05EF',
 34    '05EG',
 35    '05FA',
 36    '05FB',
 37    '05FC',
 38    '05FD',
 39    '05FF',
 40    '05FG',
 41    '05Y')
 42      group by substr(a.acknowledgement,1,3)
 43     UNION ALL
 44     select substr(a.default_acknowledgement,1,3) as
 45  Project,sum(a.pieces_mailed) as Dropped,sum(a.total#) as Response,sum(a.total$) as Income,
 46  avg(usernumber1) as Cost_Per_Unit
 47  from appeals a
 48  where a.default_acknowledgement in('05FS')
 49  group by substr(a.default_acknowledgement,1,3)) 
 50  group by project
 51  /

PRO SUM(DROPPED) SUM(RESPONSE) SUM(INCOME) AVG(COST_PER_UNIT)
--- ------------ ------------- ----------- ------------------
05A       843388         33439   887844.01         .471597418
05B       914806         32619  1055638.96         .468912784
05C       991906         46599  1829657.49          .56123302
05D       302423          8242   275749.03         .483467742
05E       526517          9956    332143.4         .452991228
05F       619763         11377   356414.66         .542414966
05Y       163068         12457   539362.25         .880668081

 

OK

Kumar, July 12, 2005 - 1:28 pm UTC

Hi Tom,
For the code snippet below, can I do a periodic commit?

I would like to do a commit for every 500 rows.

for x in (select * from big_temp) loop

... do some validations with temp data
... and insert into base tables if validation
... succeeds,else raise different types of exceptions.

insert into t1 values(..);
insert into t2 values(...);

end loop;

Inside the loop I would like to do a commit for every 500 rows.
Is that possible?

Could you please help me with a code snippet?



Tom Kyte
July 13, 2005 - 10:29 am UTC

the answer is "well of course you CAN, but SHOULD you"

and the answer to that is "no"

what happens when after you committed 1000 records and have many more to go, the system fails.

where is your restart logic to be able to pick up at some meaningful point and complete your work.

(you want to BULK UP too -- forall processing if you are going to do this painful slow by slow processing)
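
A minimal sketch of that bulk approach (my code, not Tom's -- the column names col1 and col2 are assumptions standing in for whatever big_temp really holds, and only the insert into t1 is shown), with a single commit at the end so there is no restart problem:

declare
    cursor c is select col1, col2 from big_temp;
    type t_col1_tab is table of big_temp.col1%type index by binary_integer;
    type t_col2_tab is table of big_temp.col2%type index by binary_integer;
    l_col1 t_col1_tab;
    l_col2 t_col2_tab;
begin
    open c;
    loop
        fetch c bulk collect into l_col1, l_col2 limit 500;  -- array fetch, 500 rows at a time
        exit when l_col1.count = 0;

        -- do the validations here; record failed rows in an error table
        -- rather than raising exceptions row by row

        forall i in 1 .. l_col1.count
            insert into t1 values ( l_col1(i), l_col2(i) );  -- one bulk insert per batch
    end loop;
    close c;
    commit;  -- commit once, at the end
end;
/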

???

Kumar, July 12, 2005 - 11:57 pm UTC

Hi Tom,
I expected your reply.
Please do reply.

Tom Kyte
July 13, 2005 - 11:00 am UTC

I sleep sometimes.

I even go to dinner with friends occasionally.



Another Query Logic

denni50, July 15, 2005 - 8:43 am UTC

Tom...need your help again.

it's almost along the same lines as the other query that you helped me with earlier in this thread...the difference is that this query involves two separate tables and columns that I need to combine into one result set. I've been testing a myriad of SQL statements, scalar subqueries, etc., trying to get the correct result output with no luck; I keep getting the garbage below (sample of one variation):

(r.ack_code will always = '05Y')

SQL> select to_char(r.ack_date,'MMYYYY') as project, sum(r.receipt_count) as Dropped,
  2  count(payamount) as Response,sum(p.payamount) as Income, avg(p.usernumber1) as Cost_Per_Unit
  3  from payment p,receipting r
  4  where p.acknowledgement=r.ack_code
  5  and to_char(p.paydate,'MMYYYY')=to_char(r.ack_date,'MMYYYY')
  6  group by to_char(r.ack_date,'MMYYYY');

PROJEC    DROPPED   RESPONSE     INCOME COST_PER_UNIT
------ ---------- ---------- ---------- -------------
012005   12969500       1000   51081.84      51.08184
022005   37469619       3284  142051.48    43.2556273
032005   68955792       7032   309690.2    44.0401308
042005  159119680      12160  485689.24    39.9415493
052005   98539248      12576  527281.88    41.9276304
062005   88548030      13320  617276.36    46.3420691
072005    1805072       4848   186652.5    38.5009282

7 rows selected.

SQL> 


below is the correct results when I run the queries on each table separately:
(excluded the cost_per_unit column for testing purposes only)


SQL> select to_char(r.ack_date,'MMYYYY') as project,sum(r.receipt_count) as Dropped
  2  from receipting r
  3  group by to_char(r.ack_date,'MMYYYY');

PROJEC    DROPPED
------ ----------
012005      51878
022005      45639
032005      39224
042005      52342
052005      31342
062005      26591
072005       1117

7 rows selected.

SQL> select to_char(p.paydate,'MMYYYY') as project,count(p.payamount) as Response,
  2  sum(payamount) as Income
  3  from payment p
  4  where p.acknowledgement='05Y'
  5  group by to_char(p.paydate,'MMYYYY');

PROJEC   RESPONSE     INCOME
------ ---------- ----------
012005        250   12770.46
022005        821   35512.87
032005       1758   77422.55
042005       3040  121422.31
052005       3144  131820.47
062005       3330  154319.09
072005       1616    62217.5

7 rows selected.


this is what I need:

PROJEC    DROPPED   RESPONSE      INCOME
------    ---------- ----------  ----------
012005     51878         250    12770.46
022005     45639         821    35512.87
032005     39224        1758    77422.55
042005     52342        3040   121422.31
052005     31342        3144   131820.47
062005     26591        3330   154319.09
072005      1117        1616     62217.5
      
      
....can't seem to get the grouping to work correctly. If analytics can achieve what I need, that would be great.... my never-ending thanks!
   
      
      
      
       

Tom Kyte
July 15, 2005 - 5:55 pm UTC

just

select ....
from (q1), (q2)
where q1.projec = q2.projec;


using an outer join or full outer join if need be.
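
Applied to the two queries above, that would look something like this (a sketch reusing the column names from the posting; switch to an outer join if a month can appear in one table but not the other):

select r.project, r.dropped, p.response, p.income, p.cost_per_unit
  from ( select to_char(ack_date,'MMYYYY') project,
                sum(receipt_count) dropped
           from receipting
          group by to_char(ack_date,'MMYYYY') ) r,
       ( select to_char(paydate,'MMYYYY') project,
                count(payamount) response,
                sum(payamount) income,
                avg(usernumber1) cost_per_unit
           from payment
          where acknowledgement = '05Y'
          group by to_char(paydate,'MMYYYY') ) p
 where r.project = p.project;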

Different solution to previous request

denni50, July 15, 2005 - 4:26 pm UTC

Tom
Unable to figure out how to combine multi-table aggregates into one result set, after exhaustive research, and having to get this done, I came up with a simple solution: create views of each table's aggregate results and then combine the two views as below:

SQL> select decode(r.project,'012005','Jan 2005',
  2                          '022005','Feb 2005',
  3                          '032005','Mar 2005',
  4                          '042005','Apr 2005',
  5                          '052005','May 2005',
  6                          '062005','Jun 2005',
  7                          '072005','Jul 2005',
  8                          '082005','Aug 2005',
  9                          '092005','Sep 2005',
 10     '102005','Oct 2005',
 11     '112005','Nov 2005',
 12     '122005','Dec 2005') Month,
 13  r.dropped,p.response,p.income,p.cost_per_unit
 14  from tku_receipt r,tku_payment p
 15  where p.project=r.project;

MONTH       DROPPED   RESPONSE     INCOME COST_PER_UNIT
-------- ---------- ---------- ---------- -------------
Jan 2005      51878        250   12770.46      51.08184
Feb 2005      45639        821   35512.87    43.2556273
Mar 2005      39224       1758   77422.55    44.0401308
Apr 2005      52342       3040  121422.31    39.9415493
May 2005      31342       3144  131820.47    41.9276304
Jun 2005      26591       3330  154319.09    46.3420691
Jul 2005       2961       1717      65886    38.3727432

7 rows selected.


...however, I would still like to learn how to combine multi-table aggregates with joins into one result set, if that is at all possible.

thx...enjoy the w/e!
 

procedural code vs sql

a reader, July 16, 2005 - 5:46 am UTC

Tom,

I am trying to convert procedural logic into a SQL statement. Please find the details below:

create table t1 (acc_no number (5), c2 date , c3 number);

insert into t1 values (100,  sysdate -10, null);
insert into t1 values (101,  sysdate -9, null);
insert into t1 values (200,  null, null);
insert into t1 values (300,  sysdate -1, null);
insert into t1 values (400,  sysdate -1, null);


create table t2 (acc_no number (5), cycle number, type varchar2(1));

insert into t2 values (100, 3, 'A');
insert into t2 values (101, 1, 'B');
insert into t2 values (200, 0, 'D');
insert into t2 values (300, 0, 'C');
insert into t2 values (400, 5, 'E');


create table t3 (type varchar2(1), cost number);
insert into t3 values ('A', 10);
insert into t3 values ('B', 20);
insert into t3 values ('C', 30);
insert into t3 values ('D', 40);
insert into t3 values ('E', 50);


SQL> select * from t1;

    ACC_NO C2                      C3
---------- --------------- ----------
       100 06-JUL-05
       101 07-JUL-05
       200
       300 15-JUL-05
       400 15-JUL-05

SQL> select * from t2;

    ACC_NO      CYCLE T
---------- ---------- -
       100          3 A
       101          1 B
       300          0 C
       200          0 D
       400          5 E

SQL> select * from t3;

T       COST
- ----------
A         10
B         20
C         30
D         40
E         50


t1 and t2 have a relationship based on the acc_no field. However, t1 may have accounts that are not present in t2.

t3 stores the cost corresponding to each type.

Purpose: update column c3 of table t1 based on the following logic:

if t2.type in ('A', 'B', 'C') then
    if t2.cycle > 0 or t1.c2 is not null then
       set t1.c3 = t3.cost (corresponding to matching type)
    else
       set t1.c3 = 0
    end if
elsif t2.type = 'D' then
     if t1.c2 > (sysdate - 1) then
         if t2.cycle = 1 then
            set t1.c3 = t3.cost (corresponding to matching type)
         elsif t2.cycle > 1 then
            set t1.c3 = 100
         end if
     end if
elsif t2.type = 'E' then
      if t1.c2 > (sysdate - 1) then
         set t1.c3 = t3.cost (corresponding to matching type)
      end if
else
      set t1.c3 = 0
end if


Many thanks in advance for your help.
 
 

    
       
 

 

Tom Kyte
July 16, 2005 - 9:32 am UTC

assuming I can make the following very reasonable (for they must already be true) changes:

create table t3 (type varchar2(1) primary key, cost number);
                              
 create table t2 (acc_no number (5) primary key, cycle number, type varchar2(1) not null references t3 );


then I believe the following works (you'll need to test the logic, especially the UPPER CASE parts, because you don't say what happens if, for example, type='D' and t1.c2 is NOT > sysdate-1 -- what should t1.c3 be set to? null or itself?)


ops$tkyte@ORA9IR2> update (
  2  select t2.type t2_type,
  3         t2.cycle t2_cycle,
  4         t1.c2 t1_c2,
  5         t1.c3 t1_c3,
  6         t3.cost t3_cost
  7    from t1, t2, t3
  8   where t1.acc_no = t2.acc_no(+)
  9     and t2.type = t3.type(+)
 10  )
 11  set t1_c3 = case when t2_type in ( 'A', 'B', 'C' )
 12                   then case when t2_cycle > 0 or t1_c2 is not null
 13                             then t3_cost
 14                             else 0
 15                         end
 16                   when t2_type = 'D'
 17                   then case when t1_c2 > (sysdate-1)
 18                             then case when t2_cycle = 1
 19                                       then t3_cost
 20                                       when t2_cycle > 1
 21                                       then 100
 22                                       ELSE T1_C3
 23                                   end
 24                              ELSE T1_C3
 25                          end
 26                  when t2_type = 'E'
 27                  then case when t1_c2 > (sysdate-1)
 28                            then t3_cost
 29                            ELSE T1_C3
 30                       end
 31                  else 0
 32              end
 33  /
 
5 rows updated.
 

thanks Tom...

denni50, July 16, 2005 - 9:02 am UTC

naturally you always make things seem so simple(I guess because they are)...never thought to put both queries as
scalar subqueries in the 'FROM' clause.

I'm just now beginning to learn the power of these queries
in 9i and that you can put them anywhere.

just an fyi...during my research ended up on the(initials
only)DB website where he had some pretty good examples
of using scalar subqueries, of which I applied several of
the techniques presented(still with no luck)...and then
read(further on down the page) that scalar subqueries don't
work with GROUP BY and HAVING clauses(along with several
other clause restrictions)...I'll find the link and send
it to you.

DB link

denni50, July 16, 2005 - 9:20 am UTC

Tom

here's the link...

</code> http://builder.com.com/5100-6388_14-1051694-2.html <code>


maybe I'm misinterpreting what he's explaining.
in reading it over again I believe it now means
that the scalar subquery result set cannot be used
in a GROUP BY...correct me if I'm wrong.

thanks





Tom Kyte
July 16, 2005 - 9:59 am UTC

... Scalar subqueries are a powerful enhancement to Oracle9i SQL. ....

Very first sentence starts off wrong.  bummer.  

Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
 
ops$tkyte@ORA815> select (select count(*) from dual) from dual;
 
(SELECTCOUNT(*)FROMDUAL)
------------------------
                       1
 

and the examples he uses, they *should* be analytics, doesn't really show where scalar subqueries rule, why they are good.


and if you run his sql, you'll see why I always cut and paste from sqlplus :)  At least that way, I'm pretty sure they are syntactically correct.  Nothing more frustrating than having to debug the example before using it (of a, b, c, d -- A, which is not even a scalar subquery example, is the only sql that runs ;)

His suggested use in a single row insert is sort of strange for a DW.  I don't know about you, but I would certainly not do what is suggested in 
http://builder.com.com/5110-6388-1051712.html
do you really want to full scan the table 4 times to get 4 aggregates?  I don't


I don't think you misinterpreted anything, it just doesn't give any really good examples.  Even if you hack the sql to make it work, the examples are not representative of why you would use a scalar subquery.


Here is an excerpt from Effective Oracle by Design on Scalar subqueries
<quote>

Scalar Subqueries
The scalar subquery is a neat capability in SQL that can provide the easiest way to get excellent performance from a query. Basically, since Oracle8i Release 1, you have been able to use a subquery that returns at most a single row and column anywhere you could use a character string literal before. You can code this:

Select 'Something'
  From dual
 Where 'a' = 'b'

So, you can also code this:

Select (select column from some_table where ...)
  From dual
 Where (select column from some_table where ...) = (select column from some_other_table where ...)

I mainly use this capability for the following tasks:
    Removing the need for outer joins
    Aggregating information from multiple tables in a single query
    Selectively selecting from a different table/row in a single query

We'll take a look at each of these uses in turn.

Remove an Outer Join

When you remove an outer join, not only is the resulting query usually easier to read, but many times, the performance can be improved as well. The general idea is you have a query of this form:

Select ..., outer_joined_to_table.column
  From some_table, outer_joined_to_table
 Where some_table.column = outer_joined_to_table.column(+)

You can code that as follows:

Select ..., (select column from outer_joined_to_table where ...)
  From some_table;

In many cases, there is a one-to-one relationship from the driving table to the table being outer-joined to, or an aggregate function is applied to the outer-joined-to column. For example, consider this query:

select a.username, count(*)
  from all_users a, all_objects b
 where a.username = b.owner (+)
 group by a.username;

Its results are equivalent to running this query:

   select a.username, (select count(*)
                      from all_objects b
                     where b.owner = a.username) cnt
     from all_users a

But somehow, the second query is more efficient. TKPROF shows us the efficiency, but this time, it lets us down. It isn't useful for seeing why this is more efficient.

<b>NOTE: The somehow is related to an entirely different plan - see Alberto's message below.
</b>

select a.username, count(*)
  from all_users a, all_objects b
 where a.username = b.owner (+)
 group by a.username

call     count    cpu elapsed    disk   query current     rows
------- ------  ----- ------- ------- ------- -------  -------
Parse        1   0.00    0.00       0       0       0        0
Execute      1   0.00    0.00       0       0       0        0
Fetch        4   1.90    2.22       0  144615       0       44
------- ------  ----- ------- ------- ------- -------  -------
total        6   1.90    2.22       0  144615       0       44

Rows     Row Source Operation
-------  ---------------------------------------------------
     44  SORT GROUP BY
  17924   MERGE JOIN OUTER
     44    SORT JOIN
     44     NESTED LOOPS
     44      NESTED LOOPS
     44       TABLE ACCESS FULL USER$
     44       TABLE ACCESS CLUSTER TS$
     44        INDEX UNIQUE SCAN I_TS# (object id 7)
     44      TABLE ACCESS CLUSTER TS$
     44       INDEX UNIQUE SCAN I_TS# (object id 7)
  17916    SORT JOIN
  30581     VIEW
  30581      FILTER
  31708       TABLE ACCESS BY INDEX ROWID OBJ$
  31787        NESTED LOOPS
     78         TABLE ACCESS FULL USER$
  31708         INDEX RANGE SCAN I_OBJ2 (object id 37)
   1035       TABLE ACCESS BY INDEX ROWID IND$
   1402        INDEX UNIQUE SCAN I_IND1 (object id 39)
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR
      1       FIXED TABLE FULL X$KZSPR

Now, let's compare this to the second version:

select a.username, (select count(*)
                      from all_objects b
                     where b.owner = a.username) cnt
  from all_users a

call     count    cpu elapsed    disk   query current     rows
------- ------  ----- ------- ------- ------- -------  -------
Parse        1   0.00    0.00       0       0       0        0
Execute      1   0.00    0.00       0       0       0        0
Fetch        4   1.63    1.98       0  135594       0       44
------- ------  ----- ------- ------- ------- -------  -------
total        6   1.63    1.98       0  135594       0       44

Rows     Row Source Operation
-------  ---------------------------------------------------
     44  NESTED LOOPS
     44   NESTED LOOPS
     44    TABLE ACCESS FULL OBJ#(22)
     44    TABLE ACCESS CLUSTER OBJ#(16)
     44     INDEX UNIQUE SCAN OBJ#(7) (object id 7)
     44   TABLE ACCESS CLUSTER OBJ#(16)
     44    INDEX UNIQUE SCAN OBJ#(7) (object id 7)

We see it did less logical I/O, but all references to the ALL_OBJECTS part of the query are missing from the plan. In fact, it is not possible to see the plan for these scalar subqueries as of Oracle9i Release 2. This is unfortunate, and we can only hope that an upcoming version will show scalar subqueries.

What if you need more than one column from the related table? Suppose we needed not only the COUNT(*), but also the AVG(OBJECT_ID). We have four choices:

    Go back to the outer join.
    Use two scalar subqueries.
    Use a trick with a single scalar subquery.
    Use an object type.

Since the first option is pretty obvious, we won't demonstrate that. We will take a look at the other choices, and demonstrate why the third and fourth options may be worthwhile. 

Use Two Scalar Subqueries

First, we'll look at using two scalar subqueries:
select a.username, (select count(*)
                      from all_objects b
                     where b.owner = a.username) cnt,
                   (select avg(object_id )
                      from all_objects b
                     where b.owner = a.username) avg
  from all_users a

call     count    cpu elapsed    disk   query current     rows
------- ------  ----- ------- ------- ------- -------  -------
Parse        1   0.00    0.00       0       0       0        0
Execute      1   0.00    0.00       0       0       0        0
Fetch        4   3.18    3.25       0  271036       0       44
------- ------  ----- ------- ------- ------- -------  -------
total        6   3.18    3.25       0  271036       0       44

That effectively doubled the work (look at the QUERY column and compare its values to the previous results). We can get back to where we were, however, just by using a small trick.

Use a Single Scalar Subquery

Instead of running two scalar subqueries, we will run one that will encode all of the data of interest in a single string. We can use SUBSTR then to pick off the fields we need and convert them to the appropriate types again.

select username,
       to_number( substr( data, 1, 10 ) ) cnt,
       to_number( substr( data, 11 ) ) avg
  from (
select a.username, (select to_char( count(*), 'fm0000000009' ) ||
                           avg(object_id)
                      from all_objects b
                     where b.owner = a.username) data
  from all_users a
       )

call     count    cpu elapsed    disk   query current     rows
------- ------  ----- ------- ------- ------- -------  -------
Parse        1   0.01    0.01       0       0       0        0
Execute      1   0.00    0.00       0       0       0        0
Fetch        4   1.66    1.73       0  135594       0       44
------- ------  ----- ------- ------- ------- -------  -------
total        6   1.68    1.75       0  135594       0       44

So, in the inline view, we formatted the COUNT(*) in a ten-character wide, fixed-width field. The format modifier (FM) in the TO_CHAR format suppressed the leading space that a number would have, since we know the count will never be negative (so we do not need a sign). We then just concatenate on the AVG() we want. That does not need to be fixed width, since it is the last field. I prefer to use fixed-width fields in all cases because it makes the SUBSTR activity at the next level much easier to perform. The outer query then just must SUBSTR off the fields and use TO_NUMBER or TO_DATE as appropriate to convert the strings back to their native type. As you can see, in this case, it paid off to do this extra work. 

One note of caution on this technique though: Beware of NULLs. On fields that allow NULLs, you will need to use NVL. For example, if COUNT(*) could have returned a NULL (in this case, it cannot), we would have coded this way:
nvl( to_char(count(*),'fm0000000009'), rpad( ' ', 10 ) )

That would have returned ten blanks, instead of concatenating in a NULL, which would have shifted the string over, destroying our results.

Use an Object Type

Lastly, we can use an object type to return a "scalar" value that is really a complex object type. We need to start by creating a scalar type to be returned by our subquery:

ops$tkyte@ORA920> create or replace type myScalarType as object
  2  ( cnt number, average number )
  3  /

Type created.

That maps to the two numbers we would like to return: the count and the average. Now, we can get the result using this query:

select username, a.data.cnt, a.data.average
  from (
select username, (select myScalarType( count(*), avg(object_id) )
                    from all_objects b
                   where b.owner = a.username ) data
  from all_users a
       ) A
call     count    cpu elapsed    disk   query current     rows
------- ------  ----- ------- ------- ------- -------  -------
Parse        1   0.01    0.01       0       0       0        0
Execute      1   0.00    0.00       0       0       0        0
Fetch        4   1.56    1.63       0  135594       0       44
------- ------  ----- ------- ------- ------- -------  -------
total        6   1.58    1.65       0  135594       0       44

Here, we get the same results without needing to encode the data using TO_CHAR and decode the data using SUBSTR and TO_NUMBER. Additionally, the presence of NULLs would not further complicate the query.

Using the object type is convenient to reduce the query complexity, but it does involve the extra step of creating that type, which some people are hesitant to do. So, while this technique is easier to use, I find most people will use the encode/decode technique rather than the object type approach. The performance characteristics are very similar with either technique.

Aggregate from Multiple Tables

Suppose you are trying to generate a report that shows by username, the username, user ID, created date, number of tables they own, and the number of constraints they own for all users created within the last 50 days. This would be easy if ALL_OBJECTS had both TABLES and CONSTRAINTS, but it doesn't. You need to count rows in two different tables. If you just joined, you would end up with a Cartesian join, so that if a user owned six tables and had three constraints, you would get 18 rows. 

I'll demonstrate two queries to retrieve this information: one with and one without scalar subqueries. Without scalar queries, there are many ways to achieve this. One technique is to use a Cartesian join. We could also use multiple levels of inline views and join ALL_USERS to ALL_CONSTRAINTS, aggregate that, and then join that to ALL_TABLES (or reverse the two) as well. We could join ALL_USERS to inline views that aggregate ALL_CONSTRAINTS and ALL_TABLES to the same level of detail.  We'll compare the implementation of those last two methods to the scalar subquery here.  The second inline view solution would look like this:

ops$tkyte@ORA920> select a.username, a.user_id, a.created,
  2         nvl(b.cons_cnt,0) cons, nvl(c.tables_cnt,0) tables
  3    from all_users a,
  4         (select owner, count(*) cons_cnt
  5                from all_constraints
  6                   group by owner) b,
  7             (select owner, count(*) tables_cnt
  8                from all_tables
  9                   group by owner) c
 10   where a.username = b.owner(+)
 11     and a.username = c.owner(+)
 12     and a.created > sysdate-50
 13  /

USERNAME                          USER_ID CREATED         CONS     TABLES
------------------------------ ---------- --------- ---------- ----------
A                                     511 04-JUL-03          3          1
A1                                    396 20-JUN-03          0          1
B                                     512 04-JUL-03          3          1
C                                     470 21-JUN-03          0          0
D                                     471 21-JUN-03          0          1
OPS$TKYTE                             513 05-JUL-03         17          6

6 rows selected.

Elapsed: 00:00:01.94

We had to use outer joins from ALL_USERS to the two inline views - otherwise we would "lose" rows for users that did not have any tables or had tables but no constraints.  The performance of this query - about 2 seconds on my system - is not the best.  Using scalar subqueries instead, we see a query that looks very similar - yet the performance characteristics are very different:

ops$tkyte@ORA920> select username, user_id, created,
  2         (select count(*)
  3            from all_constraints
  4           where owner = username) cons,
  5         (select count(*)
  6            from all_tables
  7           where owner = username) tables
  8    from all_users
  9   where all_users.created > sysdate-50
 10  /

USERNAME                          USER_ID CREATED         CONS     TABLES
------------------------------ ---------- --------- ---------- ----------
A                                     511 04-JUL-03          3          1
A1                                    396 20-JUN-03          0          1
B                                     512 04-JUL-03          3          1
C                                     470 21-JUN-03          0          0
D                                     471 21-JUN-03          0          1
OPS$TKYTE                             513 05-JUL-03         17          6

6 rows selected.

Elapsed: 00:00:00.06

It is true that we can "tune" that first query -- we can see that when using the inline views, Oracle is producing the aggregations for every user in the database and then outer joining these results to ALL_USERS.  Well, most of our users are not in this report -- only the recently created ones -- so we are computing aggregates for lots of data we are not going to use.  So, we can manually push the predicate down into these inline views:


ops$tkyte@ORA920> select a.username, a.user_id, a.created,
  2         nvl(b.cons_cnt,0) cons, nvl(c.tables_cnt,0) tables
  3    from all_users a,
  4         (select all_constraints.owner, count(*) cons_cnt
  5            from all_constraints, all_users
  6           where all_users.created > sysdate-50
  7             and all_users.username = all_constraints.owner
  8           group by owner) b,
  9         (select all_tables.owner, count(*) tables_cnt
 10            from all_tables, all_users
 11           where all_users.created > sysdate-50
 12             and all_users.username = all_tables.owner
 13           group by owner) c
 14   where a.username = b.owner(+)
 15     and a.username = c.owner(+)
 16     and a.created > sysdate-50
 17  /

USERNAME                          USER_ID CREATED         CONS     TABLES
------------------------------ ---------- --------- ---------- ----------
A                                     511 04-JUL-03          3          1
A1                                    396 20-JUN-03          0          1
B                                     512 04-JUL-03          3          1
C                                     470 21-JUN-03          0          0
D                                     471 21-JUN-03          0          1
OPS$TKYTE                             513 05-JUL-03         17          6

6 rows selected.

Elapsed: 00:00:00.10

Here, it is not just the performance boost you may achieve that makes this approach attractive, but also its simplicity. 

Select from Different Tables

Using scalar subqueries for selecting from different tables is one of the neater tricks by far. This is useful in two areas:

    Joining rows in a table/view to some set of other tables-using data in the query itself to pick the table to join to

    Looking up data in an INSERT statement or getting SQLLDR to do code conversions without needing to call PL/SQL

We'll demonstrate each in turn.


Join Rows to a Set of Tables

One of my core scripts is a script I call DBLS (for database ls, or database dir for Windows users). The query is as follows:

ops$tkyte@ORA920> select object_type, object_name,
  2         decode( status, 'INVALID', '*', '' ) status,
  3         decode( object_type,
  4        'TABLE', (select tablespace_name
  5                    from user_tables
  6                   where table_name = object_name),
  7        'TABLE PARTITION', (select tablespace_name
  8                              from user_tab_partitions
  9                             where partition_name = 
                                              subobject_name),
 10        'INDEX', (select tablespace_name
 11                    from user_indexes
 12                   where index_name = object_name),
 13        'INDEX PARTITION', (select tablespace_name
 14                              from user_ind_partitions
 15                             where partition_name = 
                                               subobject_name),
 16        'LOB', (select tablespace_name
 17                  from user_segments
 18                 where segment_name = object_name),
 19     null ) tablespace_name
 20    from user_objects a
 21   order by object_type, object_name
 22  /

This generates a report for the current schema of all of the objects, including their type and status. For many things that are segments (consume space), it reports the tablespace in which they reside. Now, you might wonder why I didn't just code the following:

select b.object_name, b.object_type,
       decode( b.status, 'INVALID', '*', '' ),
       a.tablespace_name
  from user_segments a, user_objects b
 where a.segment_name(+) = b.object_name
   and a.segment_type(+) = b.object_type;

It is more terse and seems like a better choice. AUTOTRACE will help to explain why it doesn't work as well.

ops$tkyte@ORA920> select object_type, object_name,
  2         decode( status, 'INVALID', '*', '' ) status,
  3         decode( object_type,

 21   order by object_type, object_name
 22  /

86 rows selected.

Statistics
----------------------------------------------------------
        820  consistent gets

ops$tkyte@ORA920> select b.object_name, b.object_type,
  2         decode( b.status, 'INVALID', '*', '' ),
  3             a.tablespace_name
  4    from user_segments a, user_objects b
  5   where a.segment_name(+) = b.object_name
  6     and a.segment_type(+) = b.object_type;

86 rows selected.

Statistics
----------------------------------------------------------
      12426  consistent gets


And the larger your schema, the worse it gets. I discovered that if I did selective lookups against the less complex views using the DECODE row by row, the query would consistently perform in milliseconds, even on a large schema. USER_SEGMENTS, by contrast, is a very general-purpose view (sort of the kitchen sink of views), and outer-joining to it could be a killer in certain schemas, to the point where the script was useless because it took so long to run.

This example shows a technique for joining each row in a result set with a different table. This is also useful when you have a design that uses a single field as a foreign key to N different tables. (Perhaps this is not a good idea, since you cannot use database-integrity constraints, but people do it.) In those cases, a construct such as this is key to pulling the data back together in a fairly efficient manner (in a single query).
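As a hedged sketch of that "one foreign key, N parent tables" case (hypothetical tables child, parent_a and parent_b that are not part of the original discussion), the same DECODE-plus-scalar-subquery pattern picks the lookup table row by row:

select c.id, c.src, c.fk_id,
       decode( c.src,
               'A', (select a.name from parent_a a where a.id = c.fk_id),
               'B', (select b.name from parent_b b where b.id = c.fk_id),
               null ) parent_name
  from child c;

Each row runs only the one subquery its SRC value selects, which is the same thing that keeps the dictionary example above cheap.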


Perform Lookups

Lastly, this scalar subquery technique is useful when used in conjunction with SQLLDR to perform lookups of data. Suppose you are given an input file where the fields are all specified using lookup codes, but you need to have the data decoded in your database tables. Rather than load the raw data into staging tables, performing joins, and inserting into the real tables, you can use a scalar subquery in the control files directly to load the data. For example, a control file could look like this:

LOAD DATA
INFILE *
INTO TABLE t
REPLACE
FIELDS TERMINATED BY '|'
(
username "(select username 
             from all_users where user_id = :username)"
)
BEGINDATA
0
5
11
19
21
30

That would automatically convert the USER_ID in the data stream into the USERNAME by doing that lookup for you.

Note: In Oracle8i, there is a product issue, whereby if you use this technique, you must also use rows=1 on the SQLLDR command line. Otherwise, you will find the subquery is executed only once and will insert the same value over and over. There are patches that can be applied to various 8174 releases to correct this.

</quote> 

Immense thanks Tom......

denni50, July 16, 2005 - 12:09 pm UTC

there are no words to express the immense gratitude for
your taking the time to explain and illustrate the differences in performance utilizing ss vs outerjoins.

I've copied and printed your entire illustration and explanation and will add to my collection of your scripts and other solutions I use and have learned from.

I'm sure there are many others out there that will also
benefit from this.

As for testing what DB had on his website, I was in a real
crunch to get the data so I could generate a report for
the 'big honcho' in Atlanta and was scrambling from website
to website trying to find info on multi-table aggregates
and scalar subqueries to combine the tables.

I have your book EOBD and sifted through it and don't recall
any chapter on SS...I'll have to look at it again.

however...another kazillion thanks!....definitely have
learned a great deal from this challenge.

;~)









Tom Kyte
July 16, 2005 - 1:07 pm UTC

The chapter on Effective SQL :)

about the "somehow"

Alberto Dell'Era, July 18, 2005 - 7:23 am UTC

> NOTE: Scalar subquery caching is the answer to the "somehow"

But are you sure that scalar subquery *caching* is what is making the difference?

all_users.username is obviously unique, so I would expect the subquery to be "probed" with unique values; thus the cached values will never be reused, hence caching is useless here.

It can be confirmed by looking at the plans.

If you explain the query in 9.2.0.6, and then explain the scalar subquery:

select a.username, (select count(*)
from all_objects b
where b.owner = a.username) cnt
from all_users a;

--------------------------------------------------------
| Id | Operation | Name | R
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
|* 2 | FILTER | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS BY INDEX ROWID| USER$ |
|* 5 | INDEX UNIQUE SCAN | I_USER1 |
| 6 | TABLE ACCESS BY INDEX ROWID| OBJ$ |
|* 7 | INDEX RANGE SCAN | I_OBJ2 |
|* 8 | TABLE ACCESS BY INDEX ROWID | IND$ |
|* 9 | INDEX UNIQUE SCAN | I_IND1 |
|* 10 | TABLE ACCESS BY INDEX ROWID | OBJAUTH$ |
| 11 | NESTED LOOPS | |
| 12 | FIXED TABLE FULL | X$KZSRO |
|* 13 | INDEX RANGE SCAN | I_OBJAUTH2 |
|* 14 | FIXED TABLE FULL | X$KZSPR |
|* 15 | FIXED TABLE FULL | X$KZSPR |
|* 16 | FIXED TABLE FULL | X$KZSPR |
|* 17 | FIXED TABLE FULL | X$KZSPR |
|* 18 | FIXED TABLE FULL | X$KZSPR |
|* 19 | FIXED TABLE FULL | X$KZSPR |
|* 20 | FIXED TABLE FULL | X$KZSPR |
|* 21 | FIXED TABLE FULL | X$KZSPR |
|* 22 | FIXED TABLE FULL | X$KZSPR |
|* 23 | FIXED TABLE FULL | X$KZSPR |
|* 24 | FIXED TABLE FULL | X$KZSPR |
|* 25 | FIXED TABLE FULL | X$KZSPR |
|* 26 | FIXED TABLE FULL | X$KZSPR |
|* 27 | FIXED TABLE FULL | X$KZSPR |
|* 28 | FIXED TABLE FULL | X$KZSPR |
|* 29 | FIXED TABLE FULL | X$KZSPR |
|* 30 | FIXED TABLE FULL | X$KZSPR |
| 31 | NESTED LOOPS | |
| 32 | NESTED LOOPS | |
|* 33 | TABLE ACCESS FULL | USER$ |
| 34 | TABLE ACCESS CLUSTER | TS$ |
|* 35 | INDEX UNIQUE SCAN | I_TS# |
| 36 | TABLE ACCESS CLUSTER | TS$ |
|* 37 | INDEX UNIQUE SCAN | I_TS# |
--------------------------------------------------------
(snip)
5 - access("U"."NAME"=:B1) <-- interesting ...
(snip)

select count(*)
from all_objects b
where b.owner = 'DELLERA'

-------------------------------------------------------
| Id | Operation | Name |
-------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
|* 2 | FILTER | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS BY INDEX ROWID| USER$ |
|* 5 | INDEX UNIQUE SCAN | I_USER1 |
| 6 | TABLE ACCESS BY INDEX ROWID| OBJ$ |
|* 7 | INDEX RANGE SCAN | I_OBJ2 |
|* 8 | TABLE ACCESS BY INDEX ROWID | IND$ |
|* 9 | INDEX UNIQUE SCAN | I_IND1 |
|* 10 | TABLE ACCESS BY INDEX ROWID | OBJAUTH$ |
| 11 | NESTED LOOPS | |
| 12 | FIXED TABLE FULL | X$KZSRO |
|* 13 | INDEX RANGE SCAN | I_OBJAUTH2 |
|* 14 | FIXED TABLE FULL | X$KZSPR |
|* 15 | FIXED TABLE FULL | X$KZSPR |
|* 16 | FIXED TABLE FULL | X$KZSPR |
|* 17 | FIXED TABLE FULL | X$KZSPR |
|* 18 | FIXED TABLE FULL | X$KZSPR |
|* 19 | FIXED TABLE FULL | X$KZSPR |
|* 20 | FIXED TABLE FULL | X$KZSPR |
|* 21 | FIXED TABLE FULL | X$KZSPR |
|* 22 | FIXED TABLE FULL | X$KZSPR |
|* 23 | FIXED TABLE FULL | X$KZSPR |
|* 24 | FIXED TABLE FULL | X$KZSPR |
|* 25 | FIXED TABLE FULL | X$KZSPR |
|* 26 | FIXED TABLE FULL | X$KZSPR |
|* 27 | FIXED TABLE FULL | X$KZSPR |
|* 28 | FIXED TABLE FULL | X$KZSPR |
|* 29 | FIXED TABLE FULL | X$KZSPR |
|* 30 | FIXED TABLE FULL | X$KZSPR |
-------------------------------------------------------
(snip)
5 - access("U"."NAME"='DELLERA') <-- interesting ...
(snip)

If you diff the two plans, you can see that lines 1-30 are the plan for the subquery, and so the plan for the main query is:

| 31 | NESTED LOOPS | |
| 32 | NESTED LOOPS | |
|* 33 | TABLE ACCESS FULL | USER$ |
| 34 | TABLE ACCESS CLUSTER | TS$ |
|* 35 | INDEX UNIQUE SCAN | I_TS# |
| 36 | TABLE ACCESS CLUSTER | TS$ |
|* 37 | INDEX UNIQUE SCAN | I_TS# |

So it is full-scanning user$ and looking up some info on unique columns; that means that the subquery will be "probed" (re-executed with another bind value for :B1) with unique values.

Tom Kyte
July 18, 2005 - 8:08 am UTC

indeed. the scalar subquery caching is applicable for the remaining examples but not that particular one. jumped to a conclusion.

Scalar Subqueries vs Views Performance

denni50, July 18, 2005 - 11:53 am UTC

Tom

decided to compare the performance of the solution you provided using the 2 scalar subqueries in the 'FROM' clause against the 2 views of each table's aggregate results that I came up with...they are virtually identical!

so essentially ssq are nothing more than 'on the fly' views with no ddl (as shown in the execution plan "VIEW (Cost=113 Card=1 Bytes=31)" from the ssq).

I would have thought a view (being a stored query) would give better performance....hmmm, interesting.


SQL> set autotrace traceonly explain
SQL> select r.project,r.dropped,p.response,p.income
  2  from (select to_char(r.ack_date,'MMYYYY') as project,sum(r.receipt_count) as Dropped
  3        from receipting r group by to_char(r.ack_date,'MMYYYY')) r,
  4         (select to_char(p.paydate,'MMYYYY') as project,count(p.payamount) as Response,
  5          sum(payamount) as Income from payment p
  6          where p.paydate between to_date('01-JAN-2005','DD-MON-YYYY') and to_date('31-DEC-2005',
'DD-MON-YYYY')
  7          and p.appealcode like '%TY%'
  8          group by to_char(p.paydate,'MMYYYY'))p
  9  where r.project=p.project;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=121 Card=11 Bytes=53
          9)

   1    0   HASH JOIN (Cost=121 Card=11 Bytes=539)
   2    1     VIEW (Cost=113 Card=1 Bytes=31)
   3    2       SORT (GROUP BY) (Cost=113 Card=1 Bytes=21)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'PAYMENT' (Cost=112
           Card=107 Bytes=2247)

   5    4           INDEX (RANGE SCAN) OF 'PDATE_INDEX' (NON-UNIQUE) (
          Cost=8 Card=2131)

   6    1     VIEW (Cost=7 Card=1062 Bytes=19116)
   7    6       SORT (GROUP BY) (Cost=7 Card=1062 Bytes=23364)
   8    7         TABLE ACCESS (FULL) OF 'RECEIPTING' (Cost=2 Card=106
          2 Bytes=23364)


******Trace from 2Views***************
SQL> select r.project,r.dropped,p.response,p.income
  2  from gui.tku_receipt r,gui.tku_payment p
  3  where p.project=r.project
  4  /
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=121 Card=11 Bytes=53
          9)

   1    0   HASH JOIN (Cost=121 Card=11 Bytes=539)
   2    1     VIEW OF 'TKU_PAYMENT' (Cost=113 Card=1 Bytes=31)
   3    2       SORT (GROUP BY) (Cost=113 Card=1 Bytes=21)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'PAYMENT' (Cost=112
           Card=107 Bytes=2247)

   5    4           INDEX (RANGE SCAN) OF 'PDATE_INDEX' (NON-UNIQUE) (
          Cost=8 Card=2131)

   6    1     VIEW OF 'TKU_RECEIPT' (Cost=7 Card=1062 Bytes=19116)
   7    6       SORT (GROUP BY) (Cost=7 Card=1062 Bytes=23364)
   8    7         TABLE ACCESS (FULL) OF 'RECEIPTING' (Cost=2 Card=106
          2 Bytes=23364)

SQL>  

Tom Kyte
July 18, 2005 - 12:19 pm UTC

a view is just a stored query though, I would not expect a view to have any inherent performance boost over just "a query" of any sort.


scalar subqueries might

a) go faster
b) go slower
c) go the same speed

as an equivalent query without them - pretty much like everything
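For instance (a sketch only, against dictionary views like the ones used earlier in this thread), these two forms return the same per-user constraint counts; which one wins is entirely data- and plan-dependent:

-- scalar subquery form
select a.username,
       (select count(*) from all_constraints c where c.owner = a.username) cons
  from all_users a;

-- equivalent outer-join / aggregate form
select a.username, count(c.constraint_name) cons
  from all_users a, all_constraints c
 where c.owner(+) = a.username
 group by a.username;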

however Tom....

denni50, July 18, 2005 - 2:01 pm UTC

in looking at the tkprof output...my interpretation is that the 2 ssq were converted (more or less) to views behind the scenes: (I am assuming to obtain the obj# and obj_id)

select text
from
view$ where rowid=:1

select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#,o.flags from obj$ o where o.obj#=:1

I understand in this simplistic case the elapsed_cpu is insignificant...however if I had to combine 5,6,7+ multi_table aggregates it appears that views would be more productive and beneficial especially if the query was going to be run every week or month.

I also noticed disk=12 with ssq, while disk=0 with views..what does that represent?

as always comments/feedback appreciated.


*********************************************************************************


TKPROF: Release 8.1.7.0.0 - Production on Mon Jul 18 12:02:42 2005

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Trace file: xxx_ora_892.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace=true


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.93 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.93 0 0 0 0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 42
********************************************************************************

select r.project,r.dropped,p.response,p.income
from (select to_char(r.ack_date,'MMYYYY') as project,sum(r.receipt_count) as Dropped
from receipting r group by to_char(r.ack_date,'MMYYYY')) r,
(select to_char(p.paydate,'MMYYYY') as project,count(p.payamount) as Response,
sum(payamount) as Income from payment p
where p.paydate between to_date('01-JAN-2005','DD-MON-YYYY') and to_date('31-DEC-2005','DD-MON-YYYY')
and p.appealcode like '%TY%'
group by to_char(p.paydate,'MMYYYY'))p
where r.project=p.project

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 38.32 0 1 0 0
Execute 1 0.00 0.86 0 0 0 0
Fetch 2 18437.50 18819.92 12 30373 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 18437.50 18859.10 12 30374 0 7

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 42

Rows Row Source Operation
------- ---------------------------------------------------
7 HASH JOIN
7 VIEW
7 SORT GROUP BY
24952 TABLE ACCESS BY INDEX ROWID PAYMENT
656843 INDEX RANGE SCAN PDATE_INDEX (object id 40375)
7 VIEW
7 SORT GROUP BY
46 TABLE ACCESS FULL RECEIPTING

********************************************************************************

select text
from
view$ where rowid=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 156.25 4.32 0 0 0 0
Execute 2 0.00 3.36 0 0 0 0
Fetch 2 0.00 0.66 0 4 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 156.25 8.34 0 4 0 2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY USER ROWID VIEW$

********************************************************************************

select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,
o.dataobj#,o.flags
from
obj$ o where o.obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 4.51 0 0 0 0
Execute 1 0.00 2.67 0 0 0 0
Fetch 1 0.00 0.42 0 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 7.60 0 3 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID OBJ#(18)
1 INDEX UNIQUE SCAN OBJ#(36) (object id 36)

********************************************************************************

select r.project,r.dropped,p.response,p.income
from gui.tku_receipt r,gui.tku_payment p
where p.project=r.project

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 156.25 56.07 0 4 0 0
Execute 1 0.00 0.98 0 0 0 0
Fetch 2 18281.25 18379.11 0 30373 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 18437.50 18436.16 0 30377 0 7

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 42

Rows Row Source Operation
------- ---------------------------------------------------
7 HASH JOIN
7 VIEW
7 SORT GROUP BY
24952 TABLE ACCESS BY INDEX ROWID OBJ#(29492)
656843 INDEX RANGE SCAN OBJ#(40375) (object id 40375)
7 VIEW
7 SORT GROUP BY
46 TABLE ACCESS FULL OBJ#(52143)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 156.25 94.39 0 5 0 0
Execute 3 0.00 2.77 0 0 0 0
Fetch 4 36718.75 37199.03 12 60746 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 36875.00 37296.19 12 60751 0 14

Misses in library cache during parse: 2
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3 156.25 8.83 0 0 0 0
Execute 3 0.00 6.03 0 0 0 0
Fetch 3 0.00 1.08 0 7 0 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 156.25 15.94 0 7 0 3

Misses in library cache during parse: 2

3 user SQL statements in session.
3 internal SQL statements in session.
6 SQL statements in session.
********************************************************************************
Trace file: xxx_ora_892.trc
Trace file compatibility: 8.00.04
Sort options: default

1 session in tracefile.
3 user SQL statements in trace file.
3 internal SQL statements in trace file.
6 SQL statements in trace file.
5 unique SQL statements in trace file.
23858 lines in trace file.









Tom Kyte
July 18, 2005 - 2:12 pm UTC

it is not turned into a view, a query like:

select ..., (scalar subquery1), (scalar subquery2)
from ...

is executed like this:


for x in ( select .... from .... )   -- without scalar subqueries
loop
    if (scalar subquery1) is not in the cache
    then
        execute scalar subquery1 using the values from X
        put into cache
    end if;
    if (scalar subquery2) is not in the cache
    then
        execute scalar subquery2 using the values from X
        put into cache
    end if;

    output row
end loop


scalar subquery1 and 2 are optimized as "standalone" queries.
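A rough PL/SQL analogy of that loop (purely illustrative -- the real cache is a small, fixed-size hash table inside the SQL engine, not an unbounded associative array; assumes 9iR2 or later for the VARCHAR2-indexed collection):

declare
    type num_tab is table of number index by varchar2(100);
    l_cache num_tab;
begin
    for x in ( select owner from all_objects where rownum <= 50 )  -- the driving query
    loop
        if not l_cache.exists( x.owner )
        then
            -- "cache miss": run the scalar subquery once for this input value
            select count(*) into l_cache(x.owner)
              from all_tables
             where owner = x.owner;
        end if;
        dbms_output.put_line( x.owner || ' -> ' || l_cache(x.owner) );
    end loop;
end;
/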

Tom am definitely looking forward...

denni50, July 18, 2005 - 2:45 pm UTC

to reading more about scalar subqueries in your new book
and did go back and re-read chapter 8 of EOBD(p.504-514).

yep!!..there's a whole segment on multi table aggregates..
didn't see it..was even looking in index for aggregates,
grouping...etc.

...could have saved myself alot of wasted time.

Alberto Dell'Era, July 18, 2005 - 3:49 pm UTC

To be more precise, adding the info Jonathan Lewis gave us in the newsgroup [sorry I can't quote - if I add the URL and press "preview", the submit button disappears ..], wouldn't it be (additions in uppercase):

for x in ( select .... from .... )   -- without scalar subqueries
loop
    if (scalar subquery) is not in the cache
    then
        execute scalar subquery using the values from X
        IF (NO HASH COLLISION OCCURS) THEN
            put into cache
        END IF;
    end if;

    output row
end loop

That's very important to remember if you rely on subquery caching and you are going to fetch a number of rows comparable or greater than the cache size - 1024 in 10g, 256 earlier. It's not an LRU cache, so it's going to be full after the first hundreds of rows have been fetched.

Tom Kyte
July 18, 2005 - 4:58 pm UTC

It is a little more "subtle" than that.

run this script passing in say "4" or "10" (to control the initial size of the table) and see what you see.

the package just maintains a count of how many times called and the "callees" in a string.

Table T starts with N rows (you control N, N should be >= 4). For example, it might start with:

R DATA
---- -----
1 2
2 1
3 2
4 1


The loop after that finds two values that cause a collision. So, we'll have two that definitely cause the hash collisions.


Then we insert 100 more rows of these values in order (using R to sequence them).

When totally out of sequence (every other one is different), we get maximum calls


But when we update most to 1 (1 is the second guy in always), and the last one to whatever collides with it -- we see not so many calls, even though in theory the rows with 1 should call every time. They get "maxed out".


with N=4 (in 9i)

cnt = 3 str = ,179,1,1

so 179 and 1 collide and 1 called twice, that was expected. Add 100 more rows varying 1, 179, 1, 179....

cnt = 52 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1

That was unexpected sort of. 179 and 1 are in there 52 times each. I would have thought "53". Now do the updates and:


cnt = 4 str = ,179,1,1,1

hmmm.



N=10
cnt = 6 str = ,179,1,1,1,1,1
cnt = 55 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
cnt = 7 str = ,179,1,1,1,1,1,1



N=100
cnt = 51 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1
cnt = 100 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
cnt = 52 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1


So, I'll just say "if you want the best hit rate -- order the data in the inline view by the parameters you pass to the function; else, it'll be a variable number of hits and somewhat unpredictable"



If anyone sees some fundamental "whoops, you have a problem in the code", I'd love to have it pointed out (really)


set echo on

create or replace package demo_pkg
as
    function f( x in number ) return number;
    procedure reset;
    procedure show;

    function get_cnt return number;
end;
/

create or replace package body demo_pkg
as
    g_cnt number;
    g_str long;

    function get_cnt return number
    is
    begin
        return g_cnt;
    end;

    procedure reset
    is
    begin
        g_cnt := 0;
        g_str := null;
    end;

    procedure show
    is
    begin
        dbms_output.put_line( substr
        ( 'cnt = ' || demo_pkg.g_cnt || ' str = ' || demo_pkg.g_str,
          1, 255 ) );
    end;

    function f( x in number ) return number
    is
    begin
        g_cnt := g_cnt+1;
        g_str := g_str || ',' || x;
        return x;
    end;

end;
/

drop table t;
create table t
as
select rownum r, decode(mod(rownum,2),0,1,2) data
from dual
connect by level <= &1;

begin
    for i in 2 .. 1024
    loop
        update t set data = i where mod(r,2) = 1;
        commit;
        demo_pkg.reset;
        for x in ( select r, data, (select demo_pkg.f(data) from dual) ff
                     from (select * from t order by r) )
        loop
            null;
        end loop;
        exit when demo_pkg.get_cnt <> 2;
    end loop;
    demo_pkg.show;
end;
/
insert into t
select l, (select data from t where r = 1+mod(l,2))
from (
select level+&1 l
from dual connect by level <= 100
);
commit;


exec demo_pkg.reset
set autotrace traceonly statistics
select r, data, (select demo_pkg.f(data) from dual) ff
from (select * from t order by r)
/
set autotrace off
exec demo_pkg.show

update t set data = 1 where r > &1+2;
update t set data = (select data from t where r=1) where r=(select max(r) from t);



commit;
exec demo_pkg.reset
set autotrace traceonly statistics
select r, data, (select demo_pkg.f(data) from dual) ff
from (select * from t order by r)
/
set autotrace off
exec demo_pkg.show


Alberto Dell'Era, July 18, 2005 - 6:33 pm UTC

>cnt = 52 str =
>,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1

>That was unexpected sort of. 179 and 1 are in there 52 times each. I would
>have thought "53".


This fragment:

insert into t
select l, (select data from t where r = 1+mod(l,2))
from (
select level+&1 l
from dual connect by level <= 100
);
commit;

Generates this sequence:


R DATA
---------- ----------
1 179
2 1
3 179
4 1
5 1 <-- same as before
6 179
(snip)

If you change 1+mod(l,2) to 2+mod(l,2):

insert into t
select l, (select data from t where r = 2+mod(l,2))
from (
select level+&1 l
from dual connect by level <= 100
);
commit;

Thus generating:


R DATA
---------- ----------
1 179
2 1
3 179
4 1
5 179
6 1
(snip)

You'll find:

cnt = 53 str = ,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1

(same per N=10 and N=100 - one more function call).

I've gotten this far in reproducing your test case - since it's 12:30 AM now, see you tomorrow ...
(why do you always post addictive things at around midnight Italy time? ;)


Tom Kyte
July 18, 2005 - 6:42 pm UTC

very good -- one mystery down (sort of; how'd it cache that extra one?), one to go :)

Alberto Dell'Era, July 18, 2005 - 6:55 pm UTC

I think everything can be explained if we admit a two-level cache:

level 1: remembers the last value only, if no hit, passes the value to level 2 cache
level 2: a hash cache as described by Jonathan Lewis.

So the 1-1-1-1-1-1-1-1... sequences always get a hit from the level-1 cache (N "1"s -> N-1 hits), regardless of whether they are in level 2 (they aren't, since they collide with 179 [btw a prime number]).

If you order the values in the inline view as you suggest, you greatly increase the chance of hitting the level 1.

But now, definitely, I must hit the bed - 1 AM ...

Tom Kyte
July 18, 2005 - 9:17 pm UTC


:)

Makes sense, we can develop a full test by finding a couple of colliding pairs and seeing what happens later


but if we order the set by the input parameters to the scalar subquery, we'll always get the cache hits on the 2nd iteration (that is probably why I sort of thought it might actually replace the cache value, I was hitting that 1st level "last time we did this" cache and it made it look like it was replacing the value)

full test case

Alberto Dell'Era, July 19, 2005 - 6:02 am UTC

We know that 1 and 179 collide (as can be shown by running your test case above).

Procedure "test" just prints the sequence, and then performs your sql query using your demo_pkg:

create or replace procedure test
is
begin
    -- show sequence
    for x in (select r, data from t order by r)
    loop
        dbms_output.put_line (to_char(x.r,'99') || ']' || to_char (x.data,'999') );
    end loop;

    -- perform test
    demo_pkg.reset;
    for x in ( select r, data, (select demo_pkg.f(data) from dual) ff
                 from (select * from t order by r) )
    loop
        null;
    end loop;
    demo_pkg.show;
end test;
/


--- cache 179, then an infinite sequence of 1

drop table t;
create table t
as
select rownum r, decode(rownum,1,179,1) data
from dual
connect by level <= 20;

exec test;

1] 179 <- miss
2] 1 <- miss
3] 1
4] 1
5] 1
6] 1
7] 1
8] 1
9] 1
10] 1
11] 1
12] 1
13] 1
14] 1
15] 1
16] 1
17] 1
18] 1
19] 1
20] 1
cnt = 2 str = ,179,1


-- break the 1-1-1-1 sequence:

drop table t;
create table t
as
select rownum r, decode(rownum,1,179,5,100,1) data
from dual
connect by level <= 20;

exec test;

1] 179 <- miss
2] 1 <- miss
3] 1
4] 1
5] 100 <- miss
6] 1 <- miss <=== note this
7] 1
8] 1
9] 1
10] 1
11] 1
12] 1
13] 1
14] 1
15] 1
16] 1
17] 1
18] 1
19] 1
20] 1

cnt = 4 str = ,179,1,100,1

100 displaces 1 from the level-1 ("last value") cache, and 1 still can't be stored in level 2 because of its collision with 179, hence the additional function call immediately after.

-- cache 1 in level-2 cache

drop table t;
create table t
as
select rownum r, decode(rownum,1,1,2,179,5,100,1) data
from dual
connect by level <= 20;

exec test;

1] 1 <- miss
2] 179 <- miss
3] 1
4] 1
5] 100 <- miss
6] 1 <== HIT!
7] 1
8] 1
9] 1
10] 1
11] 1
12] 1
13] 1
14] 1
15] 1
16] 1
17] 1
18] 1
19] 1
20] 1
cnt = 3 str = ,1,179,100

Since now 1 is in the level-2 cache, no miss.

-- level-2 is not an LRU cache (as Jonathan Lewis said):

drop table t;
create table t
as
select rownum r, decode(rownum,1,179, decode (mod(rownum,2),0,1,100) ) data
from dual
connect by level <= 20;

exec test;

1] 179 <- miss
2] 1 <- miss
3] 100 <- miss
4] 1 <- miss
5] 100
6] 1 <- miss
7] 100
8] 1 <- miss
9] 100
10] 1 <- miss
11] 100
12] 1 <- miss
13] 100
14] 1 <- miss
15] 100
16] 1 <- miss
17] 100
18] 1 <- miss
19] 100
20] 1 <- miss
cnt = 12 str = ,179,1,100,1,1,1,1,1,1,1,1,1

179 prevents 1 from entering the level-2 cache.

Tom Kyte
July 19, 2005 - 7:43 am UTC

Excellent

Keep this up and I won't have to anymore :)


That reinforces the "if you want to call the function F, or more generally the scalar subquery S, as little as possible, you can sort the data"


select ...., (scalar subquery referencing columns a, b, c)
from T1, T2, T2, ..
where ...


will call the scalar subquery an unknown number of times depending on how a,b,c arrive (what order they arrive in)

whereas

select x.*, (scalar subquery references columns a,b,c)
from (select ....
from t1,t2,t2,....
where
order by a,b,c )

will invoke the scalar subquery on the order of the distinct count of a,b,c


A tradeoff: if the scalar subquery is "expensive" and the sort is considered "cheap", it is definitely something to do.
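Using the t/demo_pkg test case above, the "sort first" variant is just a different ORDER BY in the inline view -- order by DATA (the value actually passed to the scalar subquery) instead of by R (a sketch):

exec demo_pkg.reset
select r, data, (select demo_pkg.f(data) from dual) ff
  from (select * from t order by data)
/
exec demo_pkg.show

With the driving values arriving in sorted order, demo_pkg.f should be invoked roughly once per distinct DATA value.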

Alberto Dell'Era, July 19, 2005 - 8:02 am UTC

And if ordering is very expensive, but the "distinct count of a,b,c" is "less" than the level-2 cache size ("less" and not "less or equal" because we must account for hash collisions), you can avoid the ordering, as the results will be cached as well (but some may not be, because of hash collisions -> less predictable).

DBLS Update

Andrew Rye, July 19, 2005 - 12:18 pm UTC

Hi Tom -

Just a little fix. . .I had the following problem running the DBLS query:

ORA-01427: single-row subquery returns more than one row

Had to add in the base table/index name in the partition queries. Took the liberty of concatenating in the subobject_name, where applicable, to the object name field in the result. Here's the updated version:

select object_type,
object_name ||
decode( subobject_name, null, '',
' / ' || subobject_name
) object_name,
decode( status, 'INVALID', '*',
''
) status,
decode( object_type,
'TABLE', ( select tablespace_name
from user_tables
where table_name = object_name
),
'TABLE PARTITION', ( select tablespace_name
from user_tab_partitions
where table_name = object_name
and partition_name = subobject_name
),
'INDEX', ( select tablespace_name
from user_indexes
where index_name = object_name
),
'INDEX PARTITION', ( select tablespace_name
from user_ind_partitions
where index_name = object_name
and partition_name = subobject_name
),
'LOB', ( select tablespace_name
from user_segments
where segment_name = object_name
),
null
) tablespace_name
from user_objects a
order by
object_type,
object_name
/

Thanks for this discussion on yet another handy tool for the development belt.


Tom Kyte
July 19, 2005 - 5:25 pm UTC

(i knew that -- that it didn't work with partitions ;)

subquery caching cache size on 9i and 10g

Alberto Dell'Era, July 19, 2005 - 3:50 pm UTC

I've tried to measure the cache size of the subquery caching process, and I've noticed that Jonathan Lewis was right - it's 256 on 9i and 1024 on 10g (but for numbers only, see below).

create or replace function f(p number)
return number
is
begin
dbms_application_info.set_client_info( userenv('client_info')+1 );
return p;
end;
/

-- create a table with a sequence of unique values
-- this disables the level-1 cache
drop table t;
create table t
as
with value_source as (
select rownum r, dbms_random.random data
from dual
connect by level <= 100000
),
dup_detector as (
select value_source.*,
row_number() over (partition by data order by r) dup_number
from value_source
)
-- dump non-duplicated values only
select r, data as data
from dup_detector
where dup_number = 1;

-- append the same sequence at the end of t
insert /*+ append */ into t (r,data)
select 100000 + r, data
from t;
commit;

-- test
exec dbms_application_info.set_client_info(0);

begin
for x in ( select r, data, (select f(data) from dual) ff
from (select * from t order by r) )
loop
null;
end loop;
end;
/

-- cached values = #values in table - #calls to f
select count(*) - userenv('client_info') as cache_size from t;

9.2.0.6:
CACHE_SIZE
----------
256

10.1.0.3:
CACHE_SIZE
----------
1024

I've tried also variants: I've used "rownum" instead of "dbms_random.random" - same results.

BUT - when I tried using VARCHAR2 instead of NUMBER for the DATA column, I got anything between 16 and 512 for CACHE_SIZE, so the results are not to be generalized. It's probably safer to avoid relying too much on the cache size when designing queries (or to investigate the varchar2 case further ;).

Alberto

A reader, July 19, 2005 - 3:58 pm UTC

"To be more precise, adding the info Jonathan Lewis gave us in the newsgroup
[sorry I can't quote - if I add the URL and press "preview", the submit button
disappear ..]"

try </code> http://tinyurl.com <code>
does it still happen ?

termoutput off

Parag Jayant Patankar, July 20, 2005 - 4:14 am UTC

Hi Tom,

I do not want to display output on screen. For this reason, in Oracle 9iR2, I have written the following script in Unix, which is executable from the command line:

sqlplus -s / <<!
set termout off

select sysdate from dual
/

!
But it still displays output on the screen. Can you tell me why it is not working?

regards & thanks
pjp


Tom Kyte
July 20, 2005 - 6:27 am UTC

either

a) run a script, not simulated interactive commands.

cat >> x.sql <<EOF
set termout off
select sysdate from dual;
exit
EOF
sqlplus -s / @x.sql

b) redirect the above to /dev/null


set termout works for scripts, not interactive sessions.

Is this possible?

Arangaperumal, July 20, 2005 - 8:58 am UTC

Hi TOM,
Below is one of the questions I was asked in an interview. Can you
please help me find the answer?

Table is having two columns. Table data:
col1 col2
3 8
1 6
4 7
2 5

Output should be: each column sorted individually

col1 col2
1 5
2 6
3 7
4 8

Also, the query should be generic, i.e., it should work fine even
if the table has more than two columns.

(i got this from our yahoo groups)



Tom Kyte
July 20, 2005 - 12:30 pm UTC

I don't see how it could be 'generic' for N columns.

could I do it? sure, but the query would have to change for each additional column

my question to them would involve tilting my head to the side and asking "why" :)


ops$tkyte@ORA10GR1> select * from t;
 
        C1         C2         C3
---------- ---------- ----------
         1          8         11
         2          7         10
         3          6         12
         4          5         13
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select c1, c2, c3
  2    from (select c1, row_number() over (order by c1) rn from t)
  3          natural join
  4         (select c2, row_number() over (order by c2) rn from t)
  5          natural join
  6         (select c3, row_number() over (order by c3) rn from t)
  7  /
 
        C1         C2         C3
---------- ---------- ----------
         1          5         10
         2          6         11
         3          7         12
         4          8         13
 

sql query

AD, July 20, 2005 - 4:23 pm UTC

Hi Tom,

Could you please help with the following sql query.

I have a table with acc_no, year_month, cycle which stores data for every month. For every account, starting from the current month I have to go backwards and retrieve the latest year_month for which cycle <= 5. This needs to be retrieved only if the cycle >= 5 for the current month.

create table tab1(acc number(10), year_month number(6), cycle number(2));
acc, year_month constitutes the key.

insert into tab1 values (100, 200507, 7);
insert into tab1 values (100, 200506, 6);
insert into tab1 values (100, 200505, 5);
insert into tab1 values (100, 200504, 4);
insert into tab1 values (100, 200503, 3);
insert into tab1 values (100, 200502, 6);
insert into tab1 values (100, 200501, 6);
insert into tab1 values (100, 200412, 7);
insert into tab1 values (200, 200507, 7);
insert into tab1 values (200, 200506, 6);
insert into tab1 values (200, 200505, 3);
insert into tab1 values (200, 200504, 4);
insert into tab1 values (200, 200503, 3);
insert into tab1 values (200, 200502, 2);
insert into tab1 values (200, 200501, 1);
insert into tab1 values (200, 200412, 0);

select * from tab1 order by acc, year_month desc

acc year_month cycle
--- -------------- -------
100 200507 7 <==== current month (cycle >=5, so travel backwards and find the
100 200506 6 earliest record for which
100 200505 5 <======= expected result
100 200504 4
100 200503 3
100 200502 6
100 200501 6
100 200412 7
200 200507 7 <=====current month (cycle >=5, so need to travel backwards)
200 200506 6
200 200505 3 <======= expected result
200 200504 4
200 200503 3
200 200502 2
200 200501 1
200 200412 0

Thanks in advance,

one possible solution

miquel, July 20, 2005 - 6:47 pm UTC

select acc, year_month, cycle from
( SELECT acc, year_month , cycle, max(year_month) over( partition by acc) max_ym
FROM tab1
WHERE acc in (select x.acc from tab1 x
WHERE x.year_month = to_char(sysdate,'yyyymm') AND x.CYCLE >= 5)
and year_month < to_char(sysdate,'yyyymm') and cycle <= 5)
where year_month = max_ym
/

ACC YEAR_MONTH CYCLE
----- ---------- ----------
100 200505 5
200 200505 3

Scalar Subquery Caching

Jonathan Lewis, July 21, 2005 - 7:51 am UTC

This is the way I've summarised the caching mechanism for scalar subqueries and filter subqueries in Chapter 9: Transformations of my new (not yet published) book:

if this is the first row selected from the driving table
execute the subquery with this driving value
retain the driving (input) and return (output)values as ‘current values’
set the ‘current values’ status to ‘not yet stored’.
else
if the driving value matches the input value from the ‘current values’
return the output value from the ‘current values’
else
if the status of the ‘current values’ is ‘not yet stored’
attempt to store the ‘current values’ in an in-memory hash-table
if a hash collision occurs
do not store the ‘current values’
end if
end if
probe the hash table using the new driving value
if the new driving value is in the hash table
retrieve the stored return value from the in-memory hash table
retain these values as the ‘current values’
set the ‘current values’ status to ‘previously stored’
else
execute the subquery with the new driving value
retain the driving and return (output) values as ‘current values’
set the ‘current values’ status to ‘not yet stored’.
end if
return the output value from the ‘current values’
end if
end if



Tom Kyte
July 21, 2005 - 8:05 am UTC

Jonathan,

Thanks! And looking forward to the book.

would be interesting

A reader, July 21, 2005 - 9:08 am UTC

to hear one of the developers of Scalar Subquery Caching:

is Jonathan's guess correct ?


Tom Kyte
July 21, 2005 - 4:10 pm UTC

I doubt you'll see an answer. The internal algorithms, which are not only subject to change, are not published like that.

Alex, July 21, 2005 - 9:19 am UTC

Can we get some more info about this book? What's the subject?

Alberto Dell'Era, July 21, 2005 - 9:57 am UTC

> This is the way I've summarised the caching mechanism
> for scalar subqueries and filter subqueries

"and filter subqueries" ... very interesting :)





Another Query in Need of Help

denni50, July 26, 2005 - 11:27 am UTC

Tom

I'm working on another one of those engaging queries.
I've achieved correct results for other reports using scalar subqueries..however I'm faced with a new dilemma on this one.

I need to combine the result sets of two distinct queries that don't need to relate to one another and don't link to one another.

I'm generating an income-based report: mgt wants to see income by month for 2005 regardless of source (mailing campaigns from prior years), and they want the income results to be calculated against 2005 mailing campaigns only.

ex: if $200,000 of this year's income comes from appeals back in 2004, 2003...they want that income to be calculated against appeals mailed in 2005 only.

below are the correct results when running the queries separately:


  1  select to_char(p.paydate,'MMYYYY'),count(p.payamount) as response,sum(p.payamount) as income
  2               from payment p
  3               where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
  4               and p.payamount > 0 and p.acctno='HOUSE'
  5*              group by to_char(p.paydate,'MMYYYY')
SQL> /

TO_CHA   RESPONSE     INCOME
------ ---------- ----------
012005      39562 1958762.76
022005      32133 1156852.83
032005      40108  1620443.9
042005      34582 1306551.47
052005      27300  954046.48
062005      19614  768746.78
072005      21653  969498.04


SQL> select substr(a.appealcode,4,3) as project, sum(a.pieces_mailed) as Dropped
  2                 from appeals a
  3                 where accountno='HOUSE'
  4                 and campaigncode LIKE 'D05%'
  5                 group by substr(a.appealcode,4,3);

PRO    DROPPED
--- ----------
05A     888065
05B     916756
05C     997435
05D     824555
05E     821171
05F     600912
05G    1207396


7 rows selected.

I need to group the payment results by paydate and the appeals results by appealcode,
then they need to be combined as:

PRO    DROPPED     RESPONSE  INCOME
------ ---------- --------- --------
05A     888065     39562 1958762.76
05B     916756     32133 1156852.83
05C     997435     40108 1620443.9
05D     824555     34582 1306551.47
05E     821171     27300  954046.48
05F     600912     19614  768746.78
05G    1207396     21653  969498.04 

I've been testing all different ways; below is one sample query that generates errors.
I had to split the count and sum functions to avoid the 'too many values' error,
and now I get:
ERROR at line 2:
ORA-01427: single-row subquery returns more than one row

I need to group by paydate but I don't want it in the select statement.
*******************************************************************************

Select substr(a.appealcode,4,3) as project, sum(a.pieces_mailed) as dropped,
(select count(p.payamount) 
             from payment p
             where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
             and p.payamount > 0 and p.acctno='HOUSE'
             group by to_char(p.paydate,'MMYYYY'))response ,(select sum(pa.payamount) from payment pa where paydate>=to_date('01-JAN-2005','DD-MON-YYYY')and pa.payamount > 0 and pa.acctno='HOUSE'group by to_char(pa.paydate,'MMYYYY'))income
from appeals a
where a.accountno='HOUSE'
and a.campaigncode LIKE 'D05%'
group by substr(a.appealcode,4,3);

hope this doesn't sound too far-fetched or confusing...if I join both tables through the appealcode the payment results
exclude the income that has come in from appealcodes representing prior years.

thanks for any help/tips!














 

Tom Kyte
July 26, 2005 - 3:10 pm UTC

you lost me when you said you needed to group payments by date and then showed a merged result that has no date.

I don't see how those two sets are related.

I know Tom...

denni50, July 26, 2005 - 3:53 pm UTC

that's the problem: the two sets are not related,
but I need to relate them somehow.

In regards to the payment grouping this is what I meant:

SQL> select count(p.payamount),sum(payamount)
  2               from payment p
  3               where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
  4               and p.payamount > 0 and p.acctno='HOUSE'
  5               group by to_char(p.paydate,'MMYYYY');

COUNT(P.PAYAMOUNT) SUM(PAYAMOUNT)
------------------ --------------
             39562     1958762.76
             32133     1156852.83
             40108      1620443.9
             34582     1306551.47
             27300      954046.48
             19614      768746.78
             21653      969498.04

7 rows selected.

....I need the result set to be grouped by paydate without
including paydate in the select clause, now I need to
find a way to combine this result with the result set
from the other query.

The only column that links the two tables is "appealcode".
I can't use that column to join because on one side of the
join I only want appealcodes that represent 2005...on the
other side of the join I need all appealcodes(regardless
of what year they were created)we received as income in 2005.

Our system of coding for appealcodes always references the
year and alpha code for month..so that appealcode:

DCH05A    represents January  2005
DCH05B         "     February 2005
DCH04D         "     April    2004

A donor sends a donation with the DCH04D receipt chad
on 7/20/2005...the income is reported in 2005; however,
the appealcode is NOT a 2005 appealcode...therefore I
do not want it selected in the grouping from the appeals table.

I've come up with a solution to update a non-used column
in the appeals table with the 'year' that an appealcode
is created. The payment table already has a year column
that gets updated with "2005" every time a payment gets
posted(regardless of what year the appealcode denotes)..
this way I can join the two "year" columns and that should
work.

Please feel free to offer any other suggestions if you
think there's a better way to accomplish this.

thanks 



 




 

Tom Kyte
July 26, 2005 - 4:17 pm UTC

sorry -- I don't think I'll be able to work this one through my head here

why cannot you just decode the payment date to an appeal code to join? seems that should work?

Already tried something to that effect....

denni50, July 26, 2005 - 4:39 pm UTC

joining substr(appealcode,4,2)= to_char(paydate,'RR')
which is '05'='05'....query took off with the shuttle
Discovery and never returned.

thanks again!


Tom Kyte
July 26, 2005 - 5:34 pm UTC

(remember inline views, if you have two inline views -- join them.... they looked SMALL didn't they?)

Your mention of Decode

denni50, July 27, 2005 - 8:41 am UTC

...don't really understand your comment about something
looking SMALL...however you mentioned something about
using decode, it didn't register with me yesterday
afternoon until after I got home(and gave my zapped out
brain a little rest) and thought that may possibly work.

question on that possibility:

can you reference the decode value in a join..ex.

select decode( to_char(paydate,'MMYYYY'), '012005','05A', '022005','05B', ...etc )

....then join

where substr(appealcode,4,3) = decode( to_char(paydate,'MMYYYY'), '012005','05A', ...etc )

that would make the value coming from the appeals table
'05A'=the decode paydate value coming from payment.

...I'm getting ready to test that possibility.

thanks for that tip!










Tom Kyte
July 27, 2005 - 10:21 am UTC

you had two inline views with like 7 rows each. they looked small - or was that just an example

Tom....Perfect!

denni50, July 27, 2005 - 9:21 am UTC

SQL> select ap.project,ap.dropped,py.response,py.income
  2  from(select decode(to_char(p.paydate,'MMYYYY'),'012005','05A',
  3                                                 '022005','05B',
  4                                                 '032005','05C',
  5                                                 '042005','05D',
  6                                                 '052005','05E',
  7                                                 '062005','05F',
  8                                                 '072005','05G') as project,
  9              count(p.payamount) as Response,
 10              sum(payamount) as Income from payment p
 11              where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
 12              and p.payamount > 0 and p.acctno='HOUSE'
 13              group by to_char(p.paydate,'MMYYYY'))py,
 14                (select substr(a.appealcode,4,3) as project, sum(a.pieces_mailed) as Dropped
 15                 from appeals a
 16                 where a.accountno='HOUSE'
 17                 and a.campaigncode LIKE 'D05%'
 18                 and a.campaigncode NOT LIKE 'D05H%'
 19                 group by substr(a.appealcode,4,3))ap
 20  where py.project=ap.project;

PRO    DROPPED   RESPONSE     INCOME
--- ---------- ---------- ----------
05A     888065      39562 1958762.76
05B     916756      32133 1156852.83
05C     997435      40108  1620443.9
05D     824555      34582 1306551.47
05E     821171      27300  954046.48
05F     600912      19614  768746.78
05G    1207396      21653  969498.04

7 rows selected.


 

Format numbers in cursor/query

Laxman Kondal, August 03, 2005 - 4:31 pm UTC

Hi Tom

I used Forms and there I could format numbers; I am trying to find a way to format numbers when they are fetched by a cursor/query.

Most numbers are 6-8 digits with 0-8 decimals and it's really difficult to read them easily.

Decimals are controllable, but can I put in commas like Forms does - like 45,678,912.08 and 45,678,912.00?

Thanks and regards


Tom Kyte
August 03, 2005 - 6:10 pm UTC

select to_char(x,'999,999,999.00') x from t;
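For example, with one of the values from the question:

select to_char( 45678912.08, '999,999,999.00' ) formatted from dual;

-- gives: 45,678,912.08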

please help me out

narayan rao sallakonda, August 10, 2005 - 2:00 pm UTC

how can we write a statement that gives me the number of columns in a particular table?

Tom Kyte
August 11, 2005 - 8:39 am UTC

select count(*) from user_tab_columns where table_name = :x

help on the query

A reader, August 18, 2005 - 10:34 pm UTC

Tom,
i have the tab:

user_id user_desc

U5CJD JOHN DOO (JD78687)
G3CSW SAM WANG (SW5678888)
M2XJG JULIE GAYLE (JG90)

I want a query to return:

U5CJD : JOHN DOO
G3CSW : SAM WANG
M2XJG : JULIE GAYLE

I have the trouble to make it happens.

Thanks a lot



Tom Kyte
August 18, 2005 - 11:27 pm UTC

so do I since I don't have any create tables or inserts to populate them to test with!

select user_id || ' : ' || substr( user_desc, 1, instr( user_desc, '(' )-1 )
from t;
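A quick way to test that expression without creating the table is to inline the sample data (a sketch only; RTRIM added here to drop the trailing space before the parenthesis):

with t as (
  select 'U5CJD' user_id, 'JOHN DOO (JD78687)' user_desc from dual union all
  select 'G3CSW' user_id, 'SAM WANG (SW5678888)' user_desc from dual union all
  select 'M2XJG' user_id, 'JULIE GAYLE (JG90)' user_desc from dual
)
select user_id || ' : ' ||
       rtrim( substr( user_desc, 1, instr( user_desc, '(' ) - 1 ) ) txt
  from t;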



Please help

A reader, August 19, 2005 - 11:06 am UTC

Tom,

I am trying to work this query out. I would like to get the
following results for a school. Many students come in and
out of school. How can I get when they first left and when
they came back in? For example, for student 214115:


SELECT * FROM SCHOOL_DAYS
WHERE STUDENT_ID = '214115'

FIRST_OUT SCHOOL_IN STUDENT_ID
3/22/2005 2:35:23 PM 3/29/2005 8:49:28 AM 214115

select * from school_days
where student_id = '201048'

first_out school_in student_id
3/29/2005 5:05:04 PM 3/30/2005 3:08:39 PM 201048





--
--SQL Statement which produced this data:
-- SELECT * FROM SCHOOL_DAYS
-- WHERE STUDENT_ID = '214115'
--
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('07/13/2004 00:55:04', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('07/19/2004 09:05:26', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('07/19/2004 14:35:35', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('07/19/2004 14:53:33', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('07/22/2004 21:01:19', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('07/25/2004 02:24:46', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('01/13/2005 00:58:14', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('01/13/2005 14:25:17', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('03/19/2005 02:15:27', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/22/2005 14:35:23', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('03/29/2005 08:49:28', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/29/2005 09:01:18', 'MM/DD/YYYY HH24:MI:SS'), '214115');
COMMIT;



Tom Kyte
August 20, 2005 - 4:21 pm UTC

You'll really have to explain the data a bit better? I don't fully understand what I'm looking at or how to determine "first in" and "first out"

More explanation needed

A reader, August 22, 2005 - 9:51 am UTC

Tom,

Thanks again for commenting on my query. I don't know if
this can be done, to be honest. But here it goes.

We are trying to answer the question of HOW long
a particular student was out of school. In other words, how LONG he/she stayed out from the time he/she went out until he/she came back in.

for example...


SCHOL IN FIRST_OUT
3/22/2005 8:27:03 PM 3/29/2005 5:05:04 PM
3/30/2005 3:08:39 PM 4/16/2005 1:40:44 PM


SCHOOL IN first_OUT
3/22/2005 8:27:03 PM 3/29/2005 5:05:04 PM

AND THEN CAME BACK IN

3/30/2005 3:08:39 PM

We are looking for:

This student left the school on 3/29 and came back in on 3/30; therefore, the student was out of the school for 1 day. In other words, we would like to begin counting when the student left the school and stop when he came back in. It seems to me it's like a criss-cross type of query.

If the student left 2/10 and came back in 2/15, the student was out for 5 days. What is throwing me off is that the data
may look like this:

school in first out

2/05 2/15

back in first_out

2/20 3/1


and I need to have it like this


first out    back in
2/15         2/20       (the student was out for 5 days...in Feb., for example)


Again, the query should be FIRST OUT.....WHEN he/she went out and when he/she came back in. Sort of like criss-cross in and out.

Tom Kyte
August 23, 2005 - 3:57 am UTC

select school_in, first_out,
LEAD(school_in)
over (partition by student_id order by school_in) next_school_in
from table;


run that and see if it isn't fairly easy to see how to get what you need (next_school_in-first_out) = number of days "out", sum these up.
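For instance, summing the gaps per student over the SCHOOL_DAYS data posted above might look like this (a sketch; the last row per student has no following SCHOOL_IN and is excluded):

select student_id,
       round( sum( next_school_in - first_out ), 1 ) total_days_out
  from ( select student_id, first_out,
                lead(school_in) over (partition by student_id
                                      order by school_in) next_school_in
           from school_days )
 where next_school_in is not null
 group by student_id;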

Thanks!!! for guiding me

A reader, August 23, 2005 - 2:43 pm UTC

Tom,

Thanks for putting me in the right direction. The only thing
I have to figure out is when the student leaves in, say,
one month and comes back in another. For example,
if the student left in Feb and came back in March. Right now I can't get that "pair". Any ideas???



select out_date, back_day_in, round(avg ( next_activity_date - out_date)) days_between
from ( select student_id,
lead(in_date) over (partition by student_id order by in_date) back_day_in,
out_date,
lead(out_date) over (partition by student_id order by in_date) next_activity_date
from t
where student_id = '214820'
and ((out_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS'))
or (in_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS')))
)
where back_day_in is not null
group by student_id,out_date,back_day_in
order by 1,2

Tom Kyte
August 24, 2005 - 8:42 am UTC

select out_date, back_day_in,
       round( avg( next_activity_date - out_date ) ) days_between
  from ( select student_id, in_date, out_date,
                lead(in_date)  over (partition by student_id order by in_date) back_day_in,
                lead(out_date) over (partition by student_id order by in_date) next_activity_date
           from t
          where student_id = '214820' )
 where back_day_in is not null
   and ( out_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
                      and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS')
      or in_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
                     and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS') )
 group by student_id, out_date, back_day_in
 order by 1, 2


friend, August 24, 2005 - 3:34 am UTC

hi tom, could u pls send me oracle 9i documentation link

Tom Kyte
August 24, 2005 - 10:55 am UTC

"U" isn't available, they didn't come to Iceland with me. Is it OK if I send it to you?

</code> http://otn.oracle.com/ <code>-> documentation

The documentation for friend

A reader, August 24, 2005 - 9:03 am UTC

You should not really need Tom to do this for you.

</code> http://www.google.com/search?q=oracle+documentation <code>

sql query

AD, September 11, 2005 - 5:30 pm UTC

Hi Tom,

Could you please help with the following sql query.

I have a table with acc_no, year_month, cycle which stores data for every month. From the current month I'd like to travel backwards and retrieve the first record where the cycle changes from <6 to 6.

create table tab1(acc number(10), year_month number(6), cycle number(2));
acc, year_month constitutes the key.

insert into tab1 values (100, 200507, 7);
insert into tab1 values (100, 200506, 6);
insert into tab1 values (100, 200505, 5);
insert into tab1 values (100, 200504, 4);
insert into tab1 values (100, 200503, 3);
insert into tab1 values (100, 200502, 6);
insert into tab1 values (100, 200501, 6);
insert into tab1 values (100, 200412, 7);
insert into tab1 values (200, 200507, 7);
insert into tab1 values (200, 200506, 6);
insert into tab1 values (200, 200505, 3);
insert into tab1 values (200, 200504, 4);
insert into tab1 values (200, 200503, 3);
insert into tab1 values (200, 200502, 2);
insert into tab1 values (200, 200501, 1);
insert into tab1 values (200, 200412, 0);

select * from tab1 order by acc, year_month desc

acc year_month cycle
--- -------------- -------
100 200507 7
100 200506 6 <== expected result
100 200505 5
100 200504 4
100 200503 3
100 200502 6
100 200501 6
100 200412 7
200 200507 7
200 200506 6 <== expected result
200 200505 3
200 200504 4
200 200503 3
200 200502 2
200 200501 1
200 200412 0


Thanks for your time,

Regards


Tom Kyte
September 11, 2005 - 6:59 pm UTC

ops$tkyte@ORA10G> select *
  2    from (
  3  select tab1.*,
  4         lag(cycle) over (partition by acc order by year_month) last_cycle
  5    from tab1
  6         )
  7   where cycle = 6 and last_cycle < 6
  8   order by acc, year_month desc;
 
       ACC YEAR_MONTH      CYCLE LAST_CYCLE
---------- ---------- ---------- ----------
       100     200506          6          5
       200     200506          6          3
 

one more query...

sd, September 14, 2005 - 5:40 am UTC

I have two tables A and B, each with only one column N.

A B
--- ----
1 1
2 2
3 4
5 6

I want to display only the rows with values 3, 4, 5 and 6. I've tried it with a full outer join and then MINUS; is there any other (better) way??

TIA

Tom Kyte
September 14, 2005 - 8:37 am UTC

I don't get the logic behind this at all. why 3,4,5 and 6.

and I see a single table with two columns A and B??

which query will be fast?

Alay, September 15, 2005 - 9:24 am UTC

Hi Tom,
If I have two tables, tableA with 1 crore rows and tableB with 10 rows, and I perform a join as follows:

(1) select * from tableA, tableB;
(2) select * from tableB, tableA;

Which query will execute faster and why?

I have another question too: what is the fastest way to know the number of rows in a table?

Tom Kyte
September 15, 2005 - 9:50 am UTC

neither, both are a cartesian product and will result in ( 10 * 1 crore) rows.

They are the same query. Using the CBO - there isn't really a difference


the way to know the number of rows in a table -- select count(*) from table;

How to replace null with some value ?

San, September 19, 2005 - 3:15 am UTC

Hello Tom,
I'm in the process of developing a "VALIDATION" routine that would cross check an existing schema with the base schema. The base schema is the schema which would be created at the time of installation of the application. At any time the client can execute this "VALIDATION" Routine to check the current status.
For simplicity, the existing schema is "IA_USER" and the validating schema is "VERIFY_USER".

I have written the following code and there are 2 questions I'd like you to comment on.

1> I intend to display "VALIDATE INDEXES/ CONSTRAINT.." in case the query doesn't return any rows.

2> Is there a better alternative to the code ?

-- SET TERM OFF
SET FEEDBACK OFF
SET VERIFY OFF

REPHEADER PAGE CENTER 'PERFORMING TABLE/COLUMN VALIDATIONS'
TTITLE CENTER ==================== SKIP 1

col USER format a15
col TABLE format a15
col COLUMN format a40
col "DATA TYPE" format a40
set line 132
set pages 100

break on today skip 1 on user skip 0 on table skip 1 on column

spool c:\1.txt

select to_char(sysdate,'fmMONTH DD, YYYY') TODAY from dual;

repheader off
ttitle off

select a.owner "USER" , a.table_name "TABLE",
a.column_name ||' -> FAILED VALIDATION' "COLUMN",
a.data_type ||' -> FAILED VALIDATION' "DATA TYPE"
from dba_tab_columns a
where not exists (select 'x' from dba_tab_columns b
where a.table_name = b.table_name
and b.owner = 'VERIFY_USER'
and (b.column_name = a.column_name and b.data_type = a.data_type))
and exists (select 'x' from dba_tables c
where a.table_name = c.table_name
and c.owner = 'VERIFY_USER')
and a.owner = 'IA_USER'
and a.table_name not like '%BIN$%'
UNION
select a.owner "USER" , a.table_name "TABLE",
a.column_name ||' -> FAILED VALIDATION' "COLUMN",
a.data_type ||' -> FAILED VALIDATION' "DATA TYPE"
from dba_tab_columns a
where not exists (select 'x' from dba_tab_columns b
where a.table_name = b.table_name
and b.owner = 'IA_USER'
and (b.column_name = a.column_name and b.data_type = a.data_type))
and exists (select 'x' from dba_tables c
where a.table_name = c.table_name
and c.owner = 'IA_USER')
and a.owner = 'VERIFY_USER'
and a.table_name not like '%BIN$%'
/

spool off

ttitle off
repheader off
clear breaks
clear columns
set verify on
set feedback on
-- set term on


Tom Kyte
September 19, 2005 - 11:43 am UTC

looks like you are trying to compare the contents of two tables --

https://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html

has a method for doing that in a single query.

forgot to mention

San, September 19, 2005 - 3:16 am UTC

Tom, I forgot to mention. We are on 10g release 1.

Regards
San

replace no rows with some string

San, September 20, 2005 - 2:05 am UTC

Tom, thanks for the link. The question still remains: how do I display some string if the query doesn't return any rows? One way I know is

select nvl(max(tablespace_name),'INVALID')
from dba_tablespaces
where tablespace_name = 'SOME_TABLESPACE';

select nvl(tablespace_name,'INVALID') doesn't return anything if the query doesn't return any rows.

I can replace my code with nvl(max...) or nvl(min..) but in case of multiple columns/ indexes failing validation, the query would return just 1 value (either max or min).

Thanks
San

Tom Kyte
September 20, 2005 - 10:05 am UTC

sure it does -- an aggregate without any group by always returns

o at least one row
o at most one row


ops$tkyte@ORA10GR2> set feedback 1
ops$tkyte@ORA10GR2> select max(dummy) from dual where 1=0;


M
-


1 row selected.

ops$tkyte@ORA10GR2> select nvl(max(dummy),'Y') from dual where 1=0;


N
-
Y

1 row selected.


so that is returning a single row. 

FIFO in SQL

Parag Jayant Patankar, September 23, 2005 - 9:19 am UTC

Hi Tom,

I am having following table

create table t
 (
 indicator    varchar2(1),
 dt           date,
 amt          number
 );

insert into t values ('p', sysdate-100, 20);
insert into t values ('p', sysdate-90, 30);
insert into t values ('p', sysdate-85, 70);
insert into t values('s', sysdate-85, 60);
insert into t values('p', sysdate-83, 100);
insert into t values('s', sysdate-84, 40);
insert into t values('s', '01-aug-05', 80);

commit;

So my table having following records
18:32:51 SQL> select * from t;

I DT               AMT
- --------- ----------
p 14-JUN-05         20
p 24-JUN-05         30
p 29-JUN-05         70
s 29-JUN-05         60
p 01-JUL-05        100
s 30-JUN-05         40
s 01-AUG-05         80

7 rows selected.

Indicator is purchase or sale. I want to deduct the sale amt from the purchases by the FIFO method and want the following output (something similar):

Indicator    Dt    Amt    Dt    Sale Amt    Bal of Pur    Bal of Sale

p    14-Jun-05    20    29-Jul-05    60    0    40
p    24-Jun-05    30    29-Jul-05    40    0    10
p    29-Jun-05    70    29-Jul-05    10    60    0
                        30-Jun-05    40    20    0
                        01-Aug-05    80    0    60
p    01-Jul-05    100                      160    

Is it possible to generate this kind of output in SQL?

regards & thanks
pjp 

SQL Query

Parag Jayant Patankar, September 26, 2005 - 10:55 am UTC

Hi Tom,

Is it possible to have a FIFO calculation in SQL for the question I have asked in this thread?

regards & thanks
pjp


Tom Kyte
September 27, 2005 - 9:24 am UTC

i didn't understand the output desired and there were no details about it supplied.

SQL Query Help

A reader, October 17, 2005 - 10:40 am UTC

Hi Tom,

I have a tree that looks like this:

A(1)
 |
 +-- B1(2)
 |     +-- C1(4)
 |     +-- C2(5)
 |
 +-- B2(3)
       +-- C3(6)
       |     +-- D1(8)
       |           +-- E1(10)
       |           +-- E2(11)
       |           |     +-- F1(13)
       |           +-- E3(12)
       +-- C4(7)
             +-- D2(9)


And a table below. The key is the number in perenthesis, and the source_key is the key where it comes from:

NODE KEY SOURCE_KEY
---- ---------- ----------
A 1
B1 2 1
B2 3 1
C1 4 2
C2 5 2
C3 6 3
C4 7 3
D1 8 6
D2 9 7
E1 10 8
E2 11 8
E3 12 8
F1 13 12


I'd like a query that returns the following or something similar. My objective is, from any given node, to get all the nodes it comes from and all the nodes that come from it. I've had a hard time and can't seem to figure it out. Could you please help?

Example starting from node E2:

NODE KEY FROM_NODE FROM_NODE_KEY
---- --- ---------- -------------
E2 11 D1 8
D1 8 C3 6
C3 6 B2 3
B2 3 A1 1
A1 1
F1 13 E2 11

Example starting from node D1:

D1 8 C3 6
C3 6 B2 3
B2 3 A1 1
A1 8 E1 6
E2 11 D1 8
F1 13 E2 11

I'm using Oracle8i Release 8.1.7.4.1 - Production.

Thanks.


reader

A reader, October 18, 2005 - 7:16 pm UTC

Hi Tom
please see this query:

select department_id,count(employee_id) employee_count,last_name
from employees
group by department_id,last_name
order by department_id,employee_count;

getting output as:
DEPARTMENT_ID EMPLOYEE_COUNT LAST_NAME
------------- -------------- ----------
10 1 Whalen

20 1 Fay
Hartstein

30 1 Khoo
Tobias
Colmenares
Raphaely
Himuro
Baida


but how can i get the output to be:

DEPARTMENT_ID EMPLOYEE_COUNT LAST_NAME
------------- -------------- ----------
10 1 Whalen

20 2 Fay
Hartstein

30 6 Khoo
Tobias
Colmenares
Raphaely
Himuro
Baida


thanks

sachin


Tom Kyte
October 19, 2005 - 6:45 am UTC

select distinct
department_id,
count(employee_id) over (partition by department_id) employee_count,
last_name
from employees
order by 1, 2;

SQL

giri, October 19, 2005 - 1:40 pm UTC

Tom,

I am writing a pl/sql block..
where i am inserting the data from one table to another table

Say I have a table X which contains more than 1 million rows and non-distinct values for empno, i.e. empno can be duplicated. I am inserting the values into a table Y which has a unique constraint on empno. In the cursor's select query I am not using the distinct clause, as the business logic doesn't need it.

But when I am inserting the values into Y I need to insert only distinct values because of the constraint, so whenever there is a second, duplicate record I want to skip it and go on to the next record, and insert without any interruption.

Can you please guide me how to do this?






Tom Kyte
October 19, 2005 - 3:44 pm UTC

insufficient data.

you say in the source table the empno isn't distinct, has duplicates. Ok, say you have

EMPNO ENAME
-------- --------
1 frank
1 mary
1 bob


which one goes into the other table and why?

SQL Query Help

A reader, October 19, 2005 - 3:49 pm UTC

Hi Tom,

I'd really appreciate if you could look at the problem I had posted on 10/17/05 about the tree nodes under the title: SQL Query Help.

Thanks in advance.


Tom Kyte
October 19, 2005 - 4:40 pm UTC

if I don't see an obvious very fast answer and there are no create tables and no insert intos (eg: you expect me to take lots of time building your example)

I simply skip it.

And even if they are there no promises.


(yours looks like a union all of two connect bys -- one that connects down, one that connects up)

Here ya go (albeit 9iR2)....

Philip, October 19, 2005 - 4:40 pm UTC

Hi "A reader from VA",

I love these little teasers - so I thought would try to take a shot at it.

Please note you had a couple of errors - one in the insert for node F1 (it didn't follow the diagram - so I corrected it for this example...).

Here goes:
phil@ORA9iR2 -> @c:\asktom.sql
phil@ORA9iR2 -> DROP TABLE hier
2 /

Table dropped.

phil@ORA9iR2 ->
phil@ORA9iR2 -> CREATE TABLE hier (node VARCHAR2(2)
2 , KEY INT
3 , source_key INT
4 )
5 /

Table created.

phil@ORA9iR2 ->
phil@ORA9iR2 -> INSERT INTO hier VALUES ('A', 1, NULL);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('B1', 2, 1);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('B2', 3, 1);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('C1', 4, 2);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('C2', 5, 2);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('C3', 6, 3);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('C4', 7, 3);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('D1', 8, 6);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('D2', 9, 7);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('E1', 10, 8);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('E2', 11, 8);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('E3', 12, 8);

1 row created.

phil@ORA9iR2 -> INSERT INTO hier VALUES ('F1', 13, 11);

1 row created.

phil@ORA9iR2 -> COMMIT;

Commit complete.

phil@ORA9iR2 ->
phil@ORA9iR2 -> CREATE OR REPLACE PROCEDURE to_from_node( p_start_node IN hier.node%TYPE
2 , p_ref_cursor OUT sys_refcursor
3 )
4 AS
5 BEGIN
6 IF NOT p_ref_cursor%isopen THEN
7 OPEN p_ref_cursor FOR SELECT node
8 , KEY
9 , (SELECT node FROM hier WHERE KEY = main.source_key) AS from_node
10 , source_key AS from_node_key
11 FROM hier main
12 CONNECT BY KEY = PRIOR source_key
13 START WITH node = p_start_node
14 UNION ALL
15 SELECT *
16 FROM (SELECT node
17 , KEY
18 , PRIOR node AS from_node
19 , source_key AS from_node_key
20 FROM hier main
21 CONNECT BY PRIOR KEY = source_key
22 START WITH node = p_start_node
23 )
24 WHERE node <> p_start_node
25 ;
26 END IF ;
27 END;
28 /

Procedure created.

phil@ORA9iR2 ->
phil@ORA9iR2 -> VARIABLE b_start_node VARCHAR2(2) ;
phil@ORA9iR2 -> VARIABLE b_ref_cursor refcursor ;
phil@ORA9iR2 ->
phil@ORA9iR2 -> BEGIN
2 :b_start_node := 'E2' ;
3 to_from_node(p_start_node => :b_start_node, p_ref_cursor => :b_ref_cursor) ;
4 END;
5 /

PL/SQL procedure successfully completed.

phil@ORA9iR2 ->
phil@ORA9iR2 -> PRINT :b_ref_cursor ;

NO KEY FR FROM_NODE_KEY
-- ---------- -- -------------
E2 11 D1 8
D1 8 C3 6
C3 6 B2 3
B2 3 A 1
A 1
F1 13 E2 11

6 rows selected.

phil@ORA9iR2 ->
phil@ORA9iR2 -> BEGIN
2 :b_start_node := 'D1' ;
3 to_from_node(p_start_node => :b_start_node, p_ref_cursor => :b_ref_cursor) ;
4 END;
5 /

PL/SQL procedure successfully completed.

phil@ORA9iR2 ->
phil@ORA9iR2 -> PRINT :b_ref_cursor ;

NO KEY FR FROM_NODE_KEY
-- ---------- -- -------------
D1 8 C3 6
C3 6 B2 3
B2 3 A 1
A 1
E1 10 D1 8
E2 11 D1 8
F1 13 E2 11
E3 12 D1 8

8 rows selected.

phil@ORA9iR2 -> spool off

------------------------

Fair warning: this was done on Oracle 9i Release 2, I don't even have any 8i databases installed anymore - so I didn't test it in 8i. The SYS_REFCURSOR stuff won't work - you'll have to create a REF CURSOR Type or something like that for 8i (please upgrade that if you can - 8i is unsupported).
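For 8i, a rough sketch of that ref cursor workaround (the package name node_pkg is made up here, and only the "walk up the tree" half of the query is shown to keep it short) would be:

create or replace package node_pkg
as
    type rc is ref cursor;    -- weak ref cursor type, usable in 8i
end;
/

create or replace procedure to_from_node( p_start_node in hier.node%type
                                        , p_ref_cursor out node_pkg.rc )
as
begin
    open p_ref_cursor for
         select node, key, source_key
           from hier
        connect by key = prior source_key
          start with node = p_start_node;
end;
/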

One last thing: WHO-DEY, WHO-DEY, WHO-DEY think gonna beat dem Bengals...


reader

A reader, October 19, 2005 - 5:05 pm UTC

Hi Tom

Is it possible to write this as a single query, i.e. with no nested query? For example, can we use joins for this:


Show the department number and the lowest salary of the department with the highest average
salary.

thanks
sachin

Tom Kyte
October 19, 2005 - 7:42 pm UTC

define "nested query" first, and be very very precise in your definition.

SQL

giri, October 20, 2005 - 6:05 am UTC

Tom,

Sorry for the insufficient data.

Say I have a table X which is loaded from a file with the values
No Dept S_Date(Sig Date)
-------
10 A 10/05/2005(DD/MM/YYYY)
10 A 10/08/2005
10 A 10/10/2005
10 A 10/11/2005
10 B 10/11/2005
From this table I am inserting into Y.
The cursor is select no, dept from X
and into Y we are inserting only
10 A
If I do DISTINCT it will give 10 A and 10 B;
here it is something like first come, first go.
So can you please guide me on the logic??








Tom Kyte
October 20, 2005 - 8:29 am UTC

no idea what you are asking....

but to put data from table1 into table2, you would just use insert into table2 select from table1, there would be no code.

and remember, rows in a table really don't have any "order" to them.

SQL Query Help

A reader, October 20, 2005 - 11:31 am UTC

Tom - I'm sorry I didn't know I have to give you create tables and insert intos. Now I do and I'll make sure I do that next time.

Hello Philip from Cincinnati, OH USA - Thank you very much. I still don't know how to make the REF CURSOR work yet, but your query works beautifully.

I really appreciate all the help that I've gotten from this web site.


Tom Kyte
October 20, 2005 - 4:46 pm UTC

(it says that on the page you read to put this text on the forum!!)

<quote>
If your followup requires a response that might include a query, you had better supply very very simple create tables and insert statements. I cannot create a table and populate it for each and every question. The SMALLEST create table possible (no tablespaces, no schema names, just like I do in my examples for you)
</quote>



SQL Query

A Reader, October 20, 2005 - 11:56 am UTC

Hi Tom,

I have to write a query that will store columns from one table as seperate rows in another table. Please see below.

CREATE TABLE tab1
( col1 NUMBER(2),
col2 NUMBER(2),
col3 NUMBER(2),
col4 NUMBER(2),
col5 NUMBER(2),
col6 NUMBER(2),
col7 NUMBER(2),
col8 NUMBER(2),
col9 NUMBER(2),
col10 NUMBER(2));
INSERT INTO tab1
( col1,
col2,
col3,
col4,
col5,
col6,
col7,
col8,
col9,
col10
)
VALUES
( 1,
2,
3,
4,
5,
6,
7,
8,
9,
10
);

SELECT *FROM tab1;

COL1 COL2 COL3 COL4 COL5 COL6 COL7 COL8 COL9 COL10
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- -
1 2 3 4 5 6 7 8 9 10
CREATE TABLE tab2
( col NUMBER(2) );

All the columns in table tab1 should be inserted as seperate rows in table tab2. So 'Select col from tab2' should display the following

COL
----------
1
2
3
4
5
6
7
8
9
10

Your help would be greatly appreciated.

Thanks
Ganesh


Tom Kyte
October 20, 2005 - 4:47 pm UTC

a multi-table insert is useful for this.

create table t ( id int, day1 int, day2 int, day3 int, day4 int );


insert into t
select rownum, dbms_random.random, dbms_random.random,dbms_random.random,dbms_random.random
from all_objects where rownum <= 100;

create table t2 ( id int, day varchar2(4), val int );


insert ALL
into t2 ( id, day, val ) values ( id, 'DAY1', day1 )
into t2 ( id, day, val ) values ( id, 'DAY2', day2 )
into t2 ( id, day, val ) values ( id, 'DAY3', day3 )
into t2 ( id, day, val ) values ( id, 'DAY4', day4 )
select * from t;
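Applied to the tab1/tab2 example above, the same idea would look something like this (a sketch only):

insert all
  into tab2 ( col ) values ( col1 )
  into tab2 ( col ) values ( col2 )
  into tab2 ( col ) values ( col3 )
  into tab2 ( col ) values ( col4 )
  into tab2 ( col ) values ( col5 )
  into tab2 ( col ) values ( col6 )
  into tab2 ( col ) values ( col7 )
  into tab2 ( col ) values ( col8 )
  into tab2 ( col ) values ( col9 )
  into tab2 ( col ) values ( col10 )
select * from tab1;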


SQL Query

A Reader, October 21, 2005 - 2:58 am UTC

Thanks very much for your feedback.

reader

A reader, October 21, 2005 - 6:10 pm UTC

i wrote the query like this:

select employee_id,last_name,salary,department_id,
avg(salary) over (partition by department_id) a_sal
from employees
group by employee_id,department_id,last_name,salary
order by employee_id;

In the answer it was like this:

SELECT e.employee_id, e.last_name,
e.department_id, AVG(s.salary)
FROM employees e, employees s
WHERE e.department_id = s.department_id
GROUP BY e.employee_id, e.last_name, e.department_id;

I have not understood what is happening in the answer query. I know it is a self join, but could you please explain how it works internally, i.e. how it shows the same answer as mine?

b) Could you please also explain, in general, how to tackle a query when one is given -- how do I start building that query from scratch? Are there any particular steps or diagrams I can make to simplify it?

Tom Kyte
October 22, 2005 - 10:33 am UTC

in answer to what?

reader

A reader, October 24, 2005 - 11:07 am UTC

Hi Tom

could you please answer my second question

thanks.

Tom Kyte
October 24, 2005 - 11:51 am UTC

too many "a readers here", not sure what second question of which you are refering to.

if you mean right above, that is the thing of books - my implementation of that answer is currently called "Effective Oracle by Design".

reader

A reader, October 25, 2005 - 6:14 pm UTC

Thank you for your answer immediately above

i have one more question..

How do I find ename from emp where ename contains 2 L's, whether these L's are at the beginning like 'llyat', in the middle like 'allen', at the end like 'dill', or anywhere else?

i have written a query -

SELECT ename
FROM emp
WHERE upper(ename) LIKE '%L%L%'
or upper(ename) LIKE 'LL%'
or upper(ename) LIKE '%LL';

but it is not finding all of them

is there any better way.

Tom Kyte
October 26, 2005 - 11:35 am UTC

well, %L%L% will find all of them by itself. (it'll also find alily - do you want it to?)

show me one that it does not find

Ravi, October 26, 2005 - 2:10 am UTC

This is one way of doing it. It will list every ename having 2 or more 'L's in it:

select ename
from emp
where instr(upper(ENAME),'L',1,2)>0
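Another sketch (not from the thread) that counts the L's by comparing lengths, which makes it easy to switch between "two or more" and "exactly two":

select ename
  from emp
 where length(ename) - nvl(length(replace(upper(ename),'L')),0) >= 2;   -- use = 2 for exactly two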

reader

A reader, October 26, 2005 - 10:51 am UTC

Thank you very much Ravi.

sachin


Complex query

Vikas Khanna, October 27, 2005 - 8:15 am UTC

I would appreciate it if you could please help in writing this complex query.

I have two tables:

T1

Id
Text

With Values :

1,various artists
2,various artists american idol season 2 finalists
3,various artists various artists - miscellaneous - exercise/meditation

T2

Id
Aggr_date
Request_count

1 01-SEP-2005 20
1 02-SEP-2005 30
2 01-SEP-2005 10
2 02-sep-2005 5
3 01-SEP-2005 4
3 02-SEP-2005 8

Select A.id,A.text,sum(request_count)
from
T1 A,T2 B
Where A.id = B.id
and aggr_date between to_date('01-SEP-2005','DD-MON-YYYY') and to_date('02-SEP-2005','DD-MON-YYYY')
group by A.id,A.text
having sum(request_count) > 20

Will result into qualifying:

A.id A.text Sum(Request_Count)
1 various artists 50

This will be called the popular query result. Now we have to find the matching id's from table T1 which are NOT part of the popular query and which satisfy the following:

1. The text of the popular query is contained in the text of the remaining id's, e.g. "various artists" is a part of "various artists american idol season 2 finalists" and also of "various artists - miscellaneous - exercise/meditation" when we check with the INSTR function.

2. Since both satisfy this, we need to pick the one whose length(text) is the MAX.

The result set should be:

Id 1 corresponds to Id 3

So the 3rd id should be chosen. However, if the length of the text is the same, then SUM(request_count) should be used as the criterion. As in:

Id Sum(request_count)
2 15
3 12

Then id no. 2 qualifies.

The result set should be:

Id 1 corresponds to Id 2

Thanks


Tom Kyte
October 27, 2005 - 12:32 pm UTC

no create table...
no insert intos....

no looking by me.......




Listed down the DDL,DML's

Vikas, October 28, 2005 - 3:59 am UTC

Hi Tom,

Please find the DDL and the subsequent DML's:

Create table t1 (id int, text varchar(50));
Table created.

Create table t2 (id int, aggr_date DATE, Request_Count int);
Table created.

Insert into t1 Values (1,'various artists');
Insert into t1 Values (2,'various artists american idol season 2 finalists');
Insert into t1 Values (3, 'various artists - miscellaneous - exercise');


insert into t2 values (1,'01-SEP-2005',20);
insert into t2 values (1,'02-SEP-2005',30);
insert into t2 values (2,'01-SEP-2005',10);
insert into t2 values (2,'02-SEP-2005',5);
insert into t2 values (3,'03-SEP-2005',4);
insert into t2 values (3,'03-SEP-2005',8);

Thanks in anticipation.

Tom Kyte
October 28, 2005 - 12:56 pm UTC

ok, what if the "most popular" stuff results in hundreds of rows - do you have the time to wait for all of the LIKE searches to complete and to find the maximum length?

we can do this, it won't be "fast"

Please HELP!

vikas Khanna, November 01, 2005 - 11:48 pm UTC

Hi Tom,

The Popular query will return only a handful of rows, so please help me in writing this query. It's getting too complex for me.

Thanks for anticipation!

Help required

Kumar, November 09, 2005 - 1:28 am UTC

Dear Tom,

I have tables like,

create table open_item(form_id number,
open_item_amount float)

create table form_data (form_id number,
period date)

insert into open_item values (1, 100);
insert into open_item values (2, 100);
insert into open_item values (3, 100);
insert into open_item values (4, 100);
insert into open_item values (5, 100);

insert into form_data values (1, '01-Jan-1995');
insert into form_data values (2, '01-Dec-1994');

I want an output like,

Current        Previous    Variance
100        100            0
....

I have written a query like,

  1  select decode(y.period, to_date('1/1/1995', 'mm/dd/yyyy'), x.open_item_amount) as "current" ,
  2      decode(y.period, to_date('12/1/1994','mm/dd/yyyy'), x.open_item_amount) as "previous",
  3  decode(y.period, to_date('1/1/1995', 'mm/dd/yyyy'), x.open_item_amount)-
  4      decode(y.period, to_date('12/1/1994','mm/dd/yyyy'), x.open_item_amount) as "Variance"
  5  from open_item x, form_data y
  6  where x.form_id = y.form_id
  7*   and y.period between to_date('12/1/1994','mm/dd/yyyy') and to_date('1/1/1995', 'mm/dd/yyyy')
SQL> /

   current   previous   Variance
---------- ---------- ----------
       100
                  100

But, I am getting the above output. How to solve this problem? Can you please help me?

Thanks in advance.

Note: form_id is different in both the periods.
 

Tom Kyte
November 09, 2005 - 9:34 am UTC

it is sort of hard to reverse engineer a big query that gives the wrong answer and then deduce what you might have been thinking and figure out what you really wanted.....



perhaps this gets you started?

ops$tkyte@ORA10GR2> select f.*, o.*,
  2         lag(o.open_item_amount) over (order by f.form_id) last_open
  3    from form_data f, open_item o
  4   where f.form_id = o.form_id
  5  /

   FORM_ID PERIOD                FORM_ID OPEN_ITEM_AMOUNT  LAST_OPEN
---------- ------------------ ---------- ---------------- ----------
         1 01-JAN-95                   1              100
         2 01-DEC-94                   2              100        100
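From there, a sketch of the current/previous/variance output (assuming exactly one open_item row per period, as in the sample data) could be:

select open_item_amount               "current",
       prev_amount                    "previous",
       open_item_amount - prev_amount "Variance"
  from ( select y.period, x.open_item_amount,
                lag(x.open_item_amount) over (order by y.period) prev_amount
           from open_item x, form_data y
          where x.form_id = y.form_id )
 where prev_amount is not null;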


 

Vikas Khanna, November 14, 2005 - 3:06 pm UTC

Hi Tom,

The Popular query will return only a handful of rows, so please help me in
writing this query. It's getting too complex for me.

Would appreciate for your help!

Thanks!


Tom Kyte
November 14, 2005 - 4:12 pm UTC

ops$tkyte@ORA10GR2> with popular
  2  as
  3  (Select A.id,A.text,sum(request_count) cnt
  4     from T1 A,T2 B
  5    Where A.id = B.id
  6      and aggr_date between to_date('01-SEP-2005','DD-MON-YYYY') and to_date('02-SEP-2005','DD-MON-YYYY')
  7    group by A.id,A.text
  8   having sum(request_count) >  20 )
  9  select *
 10    from (
 11  select t1.id, t1.text, sum(request_count) cnt2
 12    from t1, t2, popular
 13   where t1.text not in ( select text from popular where text is not null )
 14     and t1.text like '%'||popular.text||'%'
 15     and t1.id = t2.id
 16   group by t1.id, t1.text
 17   order by length(t1.text) DESC, sum(request_count) DESC
 18         )
 19   where rownum = 1;

        ID TEXT                                                     CNT2
---------- -------------------------------------------------- ----------
         2 various artists american idol season 2 finalists           15


isn't what you said would be the outcome, but does follow the "rules" you laid out... 

Good Response

Vikas, November 15, 2005 - 8:32 pm UTC

Thanks a ton!

getting a pre-defined format from a string

Thiru, November 18, 2005 - 4:33 pm UTC

Hi Tom,

Not sure whether my question fits into this thread but anyway hoping to get an answer from you.

I need to clip two dates from a string.

The string will always be 'between date1 and date2'

eg:

string could be :
between '01 JAN 2005' AND '10-SEP-2005'


or
between '01 JAN 2005' AND '10-SEP-2005'
(with gaps )

How do I get these two dates into two variables in my stored proc? The strings mentioned above are part of a larger string that is passed in as a parameter to a proc.



Tom Kyte
November 19, 2005 - 9:47 am UTC

substr and instr.

look for first ', second ' using instr
now substr.

look for third ' and fourth ' using instr
now substr.

if you have to find the between itself (assuming no other betweens) - use instr to find between, substr off everything in front of it. Then, using instr, find the fourth ' and substr off everything after that.
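A small sketch of that, just to make it concrete (the variable names are made up):

declare
    l_str   varchar2(200) := 'between ''01 JAN 2005'' AND ''10-SEP-2005''';
    l_date1 varchar2(30);
    l_date2 varchar2(30);
begin
    -- first date sits between the 1st and 2nd single quote
    l_date1 := substr( l_str,
                       instr( l_str, '''', 1, 1 ) + 1,
                       instr( l_str, '''', 1, 2 ) - instr( l_str, '''', 1, 1 ) - 1 );
    -- second date sits between the 3rd and 4th single quote
    l_date2 := substr( l_str,
                       instr( l_str, '''', 1, 3 ) + 1,
                       instr( l_str, '''', 1, 4 ) - instr( l_str, '''', 1, 3 ) - 1 );
    dbms_output.put_line( l_date1 || ' / ' || l_date2 );
end;
/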

using quotes

A reader, November 21, 2005 - 12:49 pm UTC

Hi Tom
may I please know what the difference is between SQL statements run as such, without quotes, and with quotes,

like select * from emp;

and 'select * from emp'

i could not find an answer to this

thanks
sachin

Tom Kyte
November 21, 2005 - 2:23 pm UTC

select * from emp

is a sql statement.


'select * from emp'

is a character string, not much different from 'hello world'


not sure what you mean?

A reader, November 21, 2005 - 2:34 pm UTC

Like, if we put that (SQL statement) in ' ', does that become dynamic SQL? Or what is the use of putting a SQL statement like this:
'select * from emp';

Tom Kyte
November 21, 2005 - 3:15 pm UTC

in plsql, sure, you would be

open ref_cursor for 'select * from emp';


that is now "dynamic sql", sql not visible to programming environment.


but - it is just a string, and I assume you are only talking about plsql at this point.

A reader, November 21, 2005 - 3:23 pm UTC

Exactly, I am talking about PL/SQL only.
1) May I know under which circumstances we would put that in quotes to make it a string, and
2) what is the use of that -- will it make the code run faster, or what?

I am very much confused about it. Please explain.

Tom Kyte
November 21, 2005 - 5:17 pm UTC

you should always use static sql whenever possible.

you should resort to dynamic sql (executing sql stored in a string, your "quotes" comment) only when you absolutely cannot do it any other way.

A reader, November 21, 2005 - 5:27 pm UTC

thanks for all your answers but my question is WHAT do we get by doing that (putting the sql in quotes)...WHY do we do THAT??????

Thanks and Regards
Sachin

Tom Kyte
November 21, 2005 - 5:34 pm UTC

you get dynamic sql?

you can assign any string to a plsql variable and execute it.

meaning you can make up the sql at runtime.


but - don't do that unless you HAVE to.
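For what it's worth, a tiny sketch of the dynamic version -- the statement really is just a string, and you can (and should) still bind:

declare
    type rc is ref cursor;
    l_cursor rc;
    l_ename  emp.ename%type;
begin
    -- the sql is built (or could be built) at runtime, then opened with a bind value
    open l_cursor for 'select ename from emp where empno = :x' using 7788;
    fetch l_cursor into l_ename;
    close l_cursor;
end;
/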

A reader, November 21, 2005 - 5:38 pm UTC

THANKS got it.........

sujana

sujana, December 13, 2005 - 12:53 pm UTC

Hi Tom,
I have three questions...like
1. a pl/sql block to generate prime numbers
2. a pl/sql block to check whether the given string is a palindrome or not
3. a pl/sql block to allocate rank for a student database using cursors. Fields are sno (primary key), sname, regno, maths, physics, chemistry, total, rank.

I hope i will get your answers soon

thanks,
sujana

Tom Kyte
December 13, 2005 - 1:26 pm UTC

this sounds more like "I have three assignments for you to write code for me" doesn't it?


or even homework...

Tell your teacher that #3 is a bad approach since the builtin SQL function rank() exists. Also, to put grades "in record" like that in the attributes math, physics, etc is a really bad design.

#1 and #2 are pretty well known algorithms, google around - then implement them in code.

SQL case

Tony, December 14, 2005 - 1:49 am UTC

Tom,

I am facing a difficult situation here.

I have a program which will accept the expression from the metadata tables and run in proc code.

The code is written in such a way that it will accept only 60000 characters.

My expressions are going beyond the limit

Now I have to reduce the length of all the expression

the exp like

case
when Annual Income >0 and <=5000 then '0 – 5000'
when Annual Income >5000 and <=8000 then '5001 – 8,000'
when Annual Income >8000 and <=12000 then '8,001 – 12,000'
when Annual Income >12000 and <=15000 then '12,001 – 15,000'
when Annual Income >15000 and <=18000 then '15,001 – 18,000'
when Annual Income >18000 and <=26000 then '18,001 – 26,000'
when Annual Income >26000 and <=32000 then '26,001 – 32,000'
when Annual Income >32000 and <=42000 then '32,001 – 42,000'
when Annual Income >42000 and <=48000 then '42,001 – 48,000'
end

this is an example..

Can you please guide me how to make it smaller without changing the output?

thanks in advance



Tom Kyte
December 14, 2005 - 8:13 am UTC

alias annual_income to be "ai" (shorter)

case
when ai<=5000 then '0-5000'
when ai<=8000 then '5001-8000'
...


you don't need the > part.


if the ranges were "equal", we could just divide of course.
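For example, if every band were, say, 5000 wide (they are not in your data, so this is only to illustrate the "divide" idea):

select to_char( trunc((ai-1)/5000)*5000 + 1 ) || ' - ' ||
       to_char( (trunc((ai-1)/5000)+1)*5000 ) income_band
  from ( select 12345 ai from dual );   -- returns '10001 - 15000'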

Table method

Bob B, December 14, 2005 - 10:44 am UTC

Or you could create a table and a view

CREATE TABLE INCOME_THRESHHOLDS(
max NUMBER
);

ALTER TABLE INCOME_THRESHHOLDS ADD(
PRIMARY KEY( max )
);

INSERT INTO INCOME_THRESHHOLDS VALUES( 5000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 8000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 12000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 15000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 18000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 26000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 32000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 42000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 48000 );


CREATE OR REPLACE VIEW INCOME_RANGES AS
SELECT
A.MIN,
A.MAX,
TO_CHAR( A.MIN + 1, 'fm999,999' ) || ' - ' || TO_CHAR( A.MAX, 'fm999,999' ) DESCRIPTION
FROM (
SELECT
NVL( LAG( it.MAX ) OVER ( ORDER BY it.MAX ), -1 ) MIN,
it.MAX
FROM INCOME_THRESHHOLDS it
) A

---------------------------------------------------
And then:
(
SELECT ir.DESCRIPTION
FROM INCOME_RANGES ir
WHERE ir.MIN < Annual_Income AND Annual_Income <= ir.MAX
) INCOME_DESC_TEXT

Or you could join it to your query, assuming that annual income is not null and each Annual Income is in an income range:
SELECT your_query.*, ir.DESCRIPTION
FROM (
YOUR QUERY
) your_query, income_ranges ir
WHERE ir.MIN < your_query.Annual_Income
AND your_query.Annual_Income <= ir.MAX



Sql Question

Yoav, January 08, 2006 - 2:29 am UTC

Hi Tom,
I have a varchar2(20) field (in a 9iR2 environment) that should contain only capital letters, numbers, or a combination of numbers and letters.
1. How can I create such a constraint?
2. How can I write a single select statement that shows
the data that doesn't meet this requirement?

Some data:

create table t
(a varchar2(20));

insert into t values('ABC123');
insert into t values('A*B');
insert into t values('_CD');
insert into t values('E%F');
insert into t values('123');
insert into t values('abc');
COMMIT;

SELECT * FROM T;

A
--------
ABC123
A*B ==>
_CD ==>
E%F ==>
123
abc ==>

Thank you.


Tom Kyte
January 08, 2006 - 11:43 am UTC

one approach... regular expressions in 10g could make this more terse perhaps

use same function in SQL but say "where function IS NOT NULL" to find those that will be in violation (or add the check constraint with EXCEPTIONS INTO to have the bad rows rowids logged into a table)

ops$tkyte@ORA10GR2> create table t
  2  ( x varchar2(20)
  3    check(replace(translate(x,'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ',rpad('0',36,'0')),'0','') is null)
  4  )
  5  /

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> insert into t values('ABC123');

1 row created.

ops$tkyte@ORA10GR2> insert into t values('A*B');
insert into t values('A*B')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated


ops$tkyte@ORA10GR2> insert into t values('_CD');
insert into t values('_CD')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated


ops$tkyte@ORA10GR2> insert into t values('E%F');
insert into t values('E%F')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated


ops$tkyte@ORA10GR2> insert into t values('123');

1 row created.

ops$tkyte@ORA10GR2> insert into t values('abc');
insert into t values('abc')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated


ops$tkyte@ORA10GR2> COMMIT;

Commit complete.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> SELECT * FROM T;

X
--------------------
ABC123
123
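On 10g, the regular-expression version mentioned above would look roughly like this (a sketch only; regexp_like is not available in 9iR2):

create table t
( x varchar2(20)
  check ( regexp_like( x, '^[0-9A-Z]+$' ) )
)
/

-- and to report existing data that violates the rule:
select * from t where not regexp_like( x, '^[0-9A-Z]+$' );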
 

Michel Cadot, January 08, 2006 - 12:37 pm UTC

Hi,

This one is a less complex constraint:

SQL> create table t 
  2  (a varchar2(20)
  3   check (translate(a,'_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ','_') is null));

Table created.

SQL> insert into t values('ABC123');

1 row created.

SQL> insert into t values('A*B');
insert into t values('A*B')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated


SQL> insert into t values('_CD');
insert into t values('_CD')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated


SQL> insert into t values('E%F');
insert into t values('E%F')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated


SQL> insert into t values('123');

1 row created.

SQL> insert into t values('abc');
insert into t values('abc')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated


SQL> COMMIT;

Commit complete.

SQL> 
SQL> SELECT * FROM T;
A
--------------------
ABC123
123

2 rows selected.

Regards
Michel
 

Problem with the EXCEPTIONS INTO clause

A reader, January 09, 2006 - 6:51 am UTC

Hi Tom,
I tried to use the EXCEPTIONS INTO clause as you suggested.
The table T altered, but the EXCEPTIONS table does not exist.
Is there any file I should run to create this table?

Thank You.

CONTROL-CNTL> create table t
2 ( barcode varchar2(20)
3 check(replace(translate(barcode,'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ',rpad('0',36,'0')),'0',
'') is null)
4 )
5 /

Table created.

CONTROL-CNTL> ALTER TABLE t
2 ENABLE VALIDATE CONSTRAINT SYS_C0035913
3 EXCEPTIONS INTO EXCEPTIONS
4 /

Table altered.

CONTROL-CNTL> desc EXCEPTIONS;
ERROR:
ORA-04043: object EXCEPTIONS does not exist


Tom Kyte
January 09, 2006 - 8:04 am UTC

ops$tkyte@ORA10GR2> @?/rdbms/admin/utlexcpt

ops$tkyte@ORA10GR2> rem
ops$tkyte@ORA10GR2> rem $Header: utlexcpt.sql,v 1.1 1992/10/20 11:57:02 GLUMPKIN Stab $
ops$tkyte@ORA10GR2> rem
ops$tkyte@ORA10GR2> Rem  Copyright (c) 1991 by Oracle Corporation
ops$tkyte@ORA10GR2> Rem    NAME
ops$tkyte@ORA10GR2> Rem      except.sql - <one-line expansion of the name>
ops$tkyte@ORA10GR2> Rem    DESCRIPTION
ops$tkyte@ORA10GR2> Rem      <short description of component this file declares/defines>
ops$tkyte@ORA10GR2> Rem    RETURNS
ops$tkyte@ORA10GR2> Rem
ops$tkyte@ORA10GR2> Rem    NOTES
ops$tkyte@ORA10GR2> Rem      <other useful comments, qualifications, etc.>
ops$tkyte@ORA10GR2> Rem    MODIFIED   (MM/DD/YY)
ops$tkyte@ORA10GR2> Rem     glumpkin   10/20/92 -  Renamed from EXCEPT.SQL
ops$tkyte@ORA10GR2> Rem     epeeler    07/22/91 -         add comma
ops$tkyte@ORA10GR2> Rem     epeeler    04/30/91 -         Creation
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table exceptions(row_id rowid,
  2                          owner varchar2(30),
  3                          table_name varchar2(30),
  4                          constraint varchar2(30));

Table created.
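With that table in place, the earlier ALTER can be re-run; if the constraint cannot be validated, the statement fails but the offending rowids are logged, so they can be looked up afterwards (a sketch, using the constraint name from above):

alter table t
  enable validate constraint SYS_C0035913
  exceptions into exceptions
/

select *
  from t
 where rowid in ( select row_id from exceptions );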
 

A reader, January 13, 2006 - 1:48 pm UTC

I tried executing the following:

select dn, stragg(charge) from charges;

and I am getting

ERROR at line 1:
ORA-00904: "STRAGG": invalid identifier

Thanks.

Tom Kyte
January 13, 2006 - 1:49 pm UTC

did you do the search? did you read about it? stragg is something I wrote.

Alex, January 13, 2006 - 2:41 pm UTC

Should he be posting that stuff? Are those real addresses?

Tom Kyte
January 15, 2006 - 3:17 pm UTC

indeed, they were - and all close to each other (same neighborhood). I removed them.

ansi sql query weirdness

Stefan, January 17, 2006 - 8:38 am UTC

Hi tom,

I just came across what I believed to be a bug in Oracle, and support then told me that this is correct behaviour according to the ANSI SQL standard. But this just doesn't make any sense to me:

grant create session, create table to test identified by test
/
connect test/test
create table t1 (x int)
/
create table t2 (y int)
/

select * from t1 where x in (select x from t2);

this - to my surprise - returns "no rows selected".
but isn't that just plain wrong ? there is no column "X" in table t2, how come that is processed as if there was such a column ? is there any use for this "feature" ? :)



Tom Kyte
January 17, 2006 - 9:08 am UTC

not weird at all - it is simply called a correlated subquery

you would not be surprised if:


select *
from t1
where t1.x in ( select t1.x
from t2)

worked right? This is the same thing - it is just a correlated subquery.

Alexander the ok, January 17, 2006 - 9:49 am UTC

I'm with Stefan on this one I don't understand that either.
How can you select a column from a table where it doesn't exist? Also why would you do that, what does it mean?

Is that the same as

select *
from t1
where t1.x = t1.x

In this case?



Tom Kyte
January 17, 2006 - 10:16 am UTC

it is not the same as "where t1.x = t1.x"

select * from t1 where x in (select x from t2);

is the same as:

select *
from t1
where x is not null
and exists ( select null
from t2 );


but, does this query make sense to you:


select *
from emp
where deptno in ( select dept.deptno
from dept
where dept.deptno = emp.deptno )


if so (correlated subquery) then



select *
from emp
where deptno in ( select emp.deptno
from dept
where dept.deptno = emp.deptno )

should likewise make sense, and if dept didn't have a column named deptno but rather SOME_THING_dept, then:

select *
from emp
where deptno in ( select deptno
from dept
where dept.SOME_THING_deptno = deptno )

would likewise make sense.


select * from t1 where x in ( select x from t2 )

is simply a correlated subquery (and an excellent reason to consider using correlation names in all cases ;)
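To see the difference the correlation names make, compare (a quick sketch against the t1/t2 tables above):

-- resolves x against t1, so it parses and simply returns no rows:
select * from t1 where x in ( select x from t2 );

-- fully qualified, the mistake surfaces immediately:
select * from t1 where t1.x in ( select t2.x from t2 );
-- ORA-00904: "T2"."X": invalid identifier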

Alexander the ok, January 17, 2006 - 4:13 pm UTC

Yes those make sense. I just don't see why it's a correlated subquery.

select * from t1 where x in ( select x from t2 )

Where's the correlation? Doesn't there need to be a join from t1 to t2 in the subquery for this to be considered correlated? Your other examples have them.

Tom Kyte
January 17, 2006 - 4:28 pm UTC

why do you care if the x is after the word select or after the word where?

It matters NOT where the "x" appears.


You were happy with:

select *
from emp
where deptno in ( select deptno
from dept
where dept.SOME_THING_deptno = deptno )

this is not any different.

You were happy with this:

select *
from emp
where deptno in ( select emp.deptno
from dept
where dept.deptno = emp.deptno )

this is not any different.


there never "need" be a join, ever. Not that the above EMP/DEPT examples are "joins", they are correlated subqueries with (arbitrary) predicates - the predicates need not be there, they are not "special" or magic.



Alexander the ok, January 18, 2006 - 9:02 am UTC

Tom sorry if I'm not making sense. My question kind of went from "how does that statement compile" to just trying to get my terminology straight. I see many names for different things like correlated subquery, scalar subquery etc. What I thought made the subquery correlated was a "correlation" via join in the subquery to the parent statement as indicated here:

select *
from emp
where deptno in ( select emp.deptno
from dept
where dept.deptno = emp.deptno ) <-----

and the other did not contain this, thus I wonder why you called it correlated. But it seems I was wrong to think this.



Tom Kyte
January 19, 2006 - 7:44 am UTC

it is correlated because the subquery refers to EMP.

any "subquery that refers to it's outer query" is correlated. It is not "joined" to, it uses correlation variables from the outer query.

Trevor, January 18, 2006 - 2:02 pm UTC

Alexander and Stefan, I think the easiest way to understand why this query works is to look at the scope of what is being selected. For the example given above:

create table t1 (x int)
/
create table t2 (y int)
/

select * from t1 where x in (select x from t2);

The reason why it returns no rows selected rather than an error is because it is looking at t2, seeing there is no x, and then going ahead and selecting x from t1 since that is the next up the chain. In other words, what you are expecting it to be doing is:
select * from t1 where t1.x in (select t2.x from t2);

and instead, what it is doing is:
select * from t1 where t1.x in (select t1.x from t2);

(It took me a while to wrap my head around this, and once I did, Tom's answer made much more sense.)


Alexander the ok, January 19, 2006 - 9:15 am UTC

<quote>

The reason why it returns no rows selected rather than an error is because it is
looking at t2, seeing there is no x, and then going ahead and selecting x from
t1 since that is the next up the chain.

</quote>

Is this true Tom? If so this would be the missing "piece" I was failing to see. The examples you gave were selecting a column in the subquery that actually existed in the table.
I just couldn't understand what that would be selecting. Even though I know it's using t1.x, but t1.x from t2?? It's just tough to compute.

I'll really be able to mess with some people at work with this one.

Tom Kyte
January 19, 2006 - 1:26 pm UTC

sorry, I sort of thought I said this:


you would not be surprised if:

select * 
  from t1
 where t1.x in ( select t1.x
                   from t2)

worked right?  This is the same thing


I put the correlation names in there for you - the ones that were implied....


If you want to mess with people at work - show them this and ask them "how" (requires 10g and above)



ops$tkyte@ORA10GR2> select * from t;

        ID MSG
---------- --------------------
         3 blah blah blah

ops$tkyte@ORA10GR2> select * from t;

        ID MSG
---------- --------------------
         2 boo

ops$tkyte@ORA10GR2> select * from t;

        ID MSG
---------- --------------------
         1 go away

ops$tkyte@ORA10GR2> select count(*) from t where id > 0;

  COUNT(*)
----------
         3

 

Alexander the ok, January 19, 2006 - 1:31 pm UTC

It's no problem, it's a good thing to know, thank you.

How was the training? I wonder what the top 5 things done wrong are. Storing dates as strings in there ;)

Tom Kyte
January 19, 2006 - 1:56 pm UTC

Storing dates in strings was in the effective schema section!



Stefan, January 20, 2006 - 6:03 am UTC

Tom, in response to your previous post.

What would happen if you were to issue a 4th select * from t ? :)

Trevor, thanks for the clarification i think i got it now as well


Tom Kyte
January 20, 2006 - 10:29 am UTC

I should mention that in the above silly example, I was in a single user database - there were no other sessions mucking with the data....

Alexander, January 20, 2006 - 9:14 am UTC

I'm going to have go through the 10g new features guide to figure out how you did that. I don't have 10g yet.

Are you going to leave me hangin on the top 5 things done wrong?

Tom Kyte
January 20, 2006 - 10:34 am UTC

o not using binds

o not having a test environment

o not considering statements that begin with "CREATE", "ALTER", "DROP" and so on
to be SOURCE CODE (and hence not using any sort of configuration management on
it and hence doing silly things like diffing two databases in an attempt to
figure out what is different between version 1 and version 2)

o trying to be database independent

o DBA vs Developer vs DBA relationship

sql query

mn, January 23, 2006 - 9:21 am UTC

Hi Tom,

When executing query (1) below, the LC alias (the CASE column) comes back doubled as 57000 instead of 28500. In query (2), when I divide by count(*), I get the expected results. Is it possible to get the expected results without using /count(*)?
(1)
SELECT CASE WHEN sum(TF_LEGR_BAL_CTRY)>0 AND GL_BAL_IND='Y'
THEN SUM(TF_LEGR_BAL_CTRY) ELSE 0 END ,TF_LEGR_BAL_CTRY
FROM TGDW_GTFMAS,TGDW_GFACLMT,PAR_GL
WHERE GL_CD=TF_GL_CD AND TF_ACC_NBR=FLN_ACC_NBR GROUP BY GL_BAL_IND
SQL> /

        LC TF_LEGR_BAL_CTRY
---------- ----------------
         0                0
     57000            28500


(2)

SELECT CASE WHEN sum(TF_LEGR_BAL_CTRY)>0 AND GL_BAL_IND='Y'
THEN SUM(TF_LEGR_BAL_CTRY)/COUNT(*) ELSE 0 END ,TF_LEGR_BAL_CTRY
FROM TGDW_GTFMAS,TGDW_GFACLMT,PAR_GL
WHERE GL_CD=TF_GL_CD AND TF_ACC_NBR=FLN_ACC_NBR GROUP BY GL_BAL_IND
5 /

        LC TF_LEGR_BAL_CTRY
---------- ----------------
         0                0
     28500            28500

These are below tables involved in my query 

SQL> select * from tgdw_gtfmas;

TF_LEGR_BAL_CTRY TF_GL_CD LOAD_ TF_ACC_NBR
---------------- ---------- ----- ----------
28500 102345 SG 1000
0 102345 SG 1000



SQL> select * from tgdW_gfaclmt;

FLN_ACC_NBR
-----------
1000

SQL> select * from par_gl;

GL_CD G I
---------- - -
102345 Y I
102345 Y 0


Thanks in advance
MM 
 

Tom Kyte
January 23, 2006 - 10:38 am UTC

no creates...
no insert intos...

but basically you have a partial cartesian join here, you join by "A", you group by "B", you are aggregating multiple records into one. dividing the sum by the count of records aggregated "fixed" it.

but it likely means you don't yet fully understand the question you are trying to ask of the data - so, skip the sql, phrase the question.

query

ravi, January 24, 2006 - 3:14 am UTC

Hi Tom,
I have two tables; the 1st table has 210 columns.
The table structure is like this:
1st table
sid,record_date,up1,dwn1,up2,dwn2...up50,dwn50.
ex data:
1,24-dec-2005,10,20,30,40...50,50
2,24-dec-2005,20,20,20,20,..20,20
2nd table
name,col1,col2. This table contains some value and corresponding column names from table 1
ex data:
Ravi,up1,dwn1
gvs,up2,dwn2
now given a "name" in table2 i want to find all the corresponding column values in table1.(table2 actually contains column names of table1 with respect to the value).
How to get column values from table1 with respect to a value in table2 say
for ravi i know columns are up1,dwn1 and need to get their values from table1.
Thanks in advance
ravi

2 numbers from 3rd number

info439, January 27, 2006 - 3:13 pm UTC

Tom, if I have a list of numbers, say from 1 to 128, I want to fetch pairs of numbers -- the 3rd & 4th, then the 7th & 8th, then the 11th & 12th, and so on. How do I get this?

if the list is 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20..
then
I want to fetch 3&4,7&8,11&12 etc..

Thanks ,




Tom Kyte
January 28, 2006 - 12:52 pm UTC

ops$tkyte@ORA10GR1> create table t
  2  as
  3  select level x
  4    from dual
  5  connect by level <= 20;

Table created.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select x
  2    from (
  3  select x, rownum r
  4    from (select x
  5            from t
  6                   order by x)
  7             )
  8   where mod(ceil(r/2),2) = 0
  9  /

         X
----------
         3
         4
         7
         8
        11
        12
        15
        16
        19
        20

10 rows selected.


That assumes that the data in "T" is not really just 1, 2, 3, .... but sort of "random data", for if I just wanted to generate 3/4, 7/8, .... there are better ways. 

Thanks Tom,

A reader, January 28, 2006 - 1:52 pm UTC

That is not random data in the table. We have a sequence of codes like that, from 1..256 and for some from 1..512 etc. (apart from other columns). For some special conditions, we need to pick them up in the way I mentioned earlier.
I think the way you showed me looks good (as long as it gets me what I wanted). But since you kind of mentioned it, what are the other ways to do this :-)?

thank you.

Tom Kyte
January 29, 2006 - 8:08 am UTC

well, for starters, if they are contiguous 1,2,3,4.... you don't need to use rownum, you already have the contiguous numbers, so you could skip the assign rownum bit.
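i.e. when the stored codes themselves are the contiguous 1..N values, something like this is enough (a sketch):

select x
  from t
 where mod( ceil(x/2), 2 ) = 0
 order by x;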



SQL problem

ramis, February 03, 2006 - 4:19 pm UTC

hello,

I have a table T with the following columns, all of which are number fields except, of course, quiz_date:

student_id
class_id
quiz_id
marks
quiz_date

Now, each student belongs to one class_id. Each student participates in many quizzes taken throughout the month, and each quiz has its own unique id.

I want to get a very complex analysis where I can find out:

1. the marks obtained by each student in quizzes in any given period of days (i.e. the number of days can be any figure greater than zero),
2. the marks obtained by each student in quizzes in a month of any single year (i.e. Jan, Feb, ... Dec),

suppose the data is

student_id class_id quiz_id marks quiz_date
1 1 1 50 22 Jan 2004
2 2 2 40 24 Jan 2004
3 1 3 34 24 Jan 2005
1 1 4 10 26 Jan 2005
1 1 5 30 29 Jan 2005
1 1 6 31 Jan 2005
3 2 7 34 02 Feb 2005
3 2 8 33 09 Feb 2005
3 2 9 56 09 Feb 2005
1 1 7 90 26 Feb 2005
2 2 8 0 26 Feb 2005
1 1 8 80 28 Feb 2005
2 2 8 65 28 Feb 2005
1 1 9 31 Mar 2005
2 2 9 11 31 Mar 2005

Now I would like to have a query that shows me the most marks obtained by each student in any given period of days (in ascending order of date), say 10 days, 15 days etc...

this query should show the following columns in the output

desired output (i.e. most number of marks by each student in a in any 10 day period)

st_id cl_id t_quizes t_marks Hs_marks st_quiz end_quiz start_date end_date
1 1 1 50 50 1 1 22 Jan 2004 31 Jan 2004
1 1 3 40 30 4 6 26 Jan 2005 31 Jan 2005
1 1 23 170 90 7 8 26 Feb 2005 28 Feb 2005
2 so on...





secondly, most number of marks obtained by each student in a month for any single year

this query should show the following columns in the output


desired output (i.e. most number of marks by each student in a in any month)

st_id cl_id t_quizes t_marks Hs_marks st_quiz end_quiz month year
1 1 1 50 50 1 1 Jan 2004
1 1 3 40 30 4 6 Jan 2005
1 1 23 170 90 7 8 Feb 2005
2 so on...
[/pre]

where

st_id = student_id
cl_id = class_id
t_quizes = total_quizzes
t_marks = total_marks
Hs_marks = highest marks (in a single quiz)
st_quiz = start_quiz no. of each sequence
end_quiz = end_quiz no. of each sequence
month = month in which the marks were obtained
year = year in which the marks were obtained



I would be most grateful if anyone could do it with the shortest and fastest possible query.

create table T
(student_id number,
class_id number,
quiz_id number,
marks number
quiz_date date)


INSERT INTO T (1,1, 1, 50, ‘22 Jan 2004’);
INSERT INTO T (2,2, 2, 40, ‘24 Jan 2004’);
INSERT INTO T (3,1, 3, 34, ‘24 Jan 2005’);
INSERT INTO T (1,1, 4, 10, ‘26 Jan 2005’);
INSERT INTO T (1,1, 5, 30, ‘29 Jan 2005’);
INSERT INTO T (1,1, 6, ‘31 Jan 2005’);
INSERT INTO T (3,2, 7,34, ‘02 Feb 2005’);
INSERT INTO T (3,2, 8,33, ‘09 Feb 2005’);
INSERT INTO T (3,2, 9,56, ‘09 Feb 2005’);
INSERT INTO T (1,1, 7, 90, ‘26 Feb 2005’);
INSERT INTO T (2,2, 8, 0, ‘26 Feb 2005’);
INSERT INTO T (1,1, 8, 80, ‘28 Feb 2005’);
INSERT INTO T (2,2, 8, 65, ‘28 Feb 2005’);
INSERT INTO T (1,1, 9, ‘31 Mar 2005’);
INSERT INTO T (2,2, 9, 11, ‘31 Mar 2005’);

sql problem???

ramis, February 04, 2006 - 12:16 am UTC

Hi Tom,
Regarding my question just above this message: I have got a solution to my 2nd request, and I thought my first request wasn't clear enough, so I have re-asked it, I hope in a better way. Please ignore my last message.

I am sure you would provide me a great solution...


I have a table T with the following columns, all of which are number fields except, of course, quiz_date:

student_id
class_id
quiz_id
marks
quiz_date

Now, each student belongs to one class_id. Each student participates in many quizzes taken throughout the month, and each quiz has its own unique id.

I require...

1. the marks obtained by each student in quizzes in any given period of days (i.e. the number of days can be any figure greater than zero).


What I want is the most marks obtained by each student in any period of n days (where n is what we specify in the query), say 10 days, 15 days, 365 days etc...

For example, this is the sample data for student 1 from the list I provided in my original message.

[pre]
st_id c_id quiz_id marks quiz_date
1 1 1 50 22 Jan 2004
1 1 4 10 26 Jan 2005
1 1 5 30 29 Jan 2005
1 1 6 31 Jan 2005
1 1 7 90 26 Feb 2005
1 1 8 80 28 Feb 2005
1 1 9 31 Mar 2005

[/pre]

now I want to calculate the sum of marks obtained by this student in any period of 'n' days. This 'n' is what we would specify in the query.

now for above data, say, I want to calculate marks obtained by student 1 in any period of 10 days starting from his first quiz, then from 2nd quiz, then from 3rd and so on, in the same result.

His first quiz was on 22 Jan 2004; starting from the 22nd, 10 days are completed on 31 Jan 2004 - this is shown by the first row in my desired output below.

His next quiz is a year later, on 26 Jan 2005; counting ten days from the 26th gives Feb 4, 2005 - the 2nd row of my desired output.

The third quiz is on 29 Jan 2005; 10 days are completed on 07 Feb 2005 - the same for the third row..

[pre]
desired output
st_id sum_marks start_date end_date
1 50 22 Jan 2004 31 Jan 2004
1 40 26 Jan 2005 04 Feb 2005
1 30 29 Jan 2005 07 Feb 2005
1 31 Jan 2005 09 Feb 2005
1 170 26 Feb 2005 07 Mar 2005
1 80 28 Feb 2005 09 Mar 2005
1 31 Mar 2005 09 Apr 2005

[/pre]

In short, the calculation will be done for 'n' days from each of his quizzes, in ascending order.

The same is the case for the other students.

I hope this clears up my requirement.

I would love to have the shortest possible query for this...

create table T
(student_id number,
class_id number,
quiz_id number,
marks number
quiz_date date)


INSERT INTO T (1,1, 1, 50, ‘22 Jan 2004’);
INSERT INTO T (2,2, 2, 40, ‘24 Jan 2004’);
INSERT INTO T (3,1, 3, 34, ‘24 Jan 2005’);
INSERT INTO T (1,1, 4, 10, ‘26 Jan 2005’);
INSERT INTO T (1,1, 5, 30, ‘29 Jan 2005’);
INSERT INTO T (1,1, 6, ‘31 Jan 2005’);
INSERT INTO T (3,2, 7,34, ‘02 Feb 2005’);
INSERT INTO T (3,2, 8,33, ‘09 Feb 2005’);
INSERT INTO T (3,2, 9,56, ‘09 Feb 2005’);
INSERT INTO T (1,1, 7, 90, ‘26 Feb 2005’);
INSERT INTO T (2,2, 8, 0, ‘26 Feb 2005’);
INSERT INTO T (1,1, 8, 80, ‘28 Feb 2005’);
INSERT INTO T (2,2, 8, 65, ‘28 Feb 2005’);
INSERT INTO T (1,1, 9, ‘31 Mar 2005’);
INSERT INTO T (2,2, 9, 11, ‘31 Mar 2005’);


regards
ramis

A reader, February 06, 2006 - 9:32 pm UTC

Tom,
I have tables a and b as below:

SQL> create table a (id number NOT NULL, note1 VARCHAR2(10),note2 VARCHAR2(10),note3 VARCHAR2(10));

Table created.

SQL> insert into a values(1,'ER',NULL,'MOD');

1 row created.

SQL> insert into a values(2,NULL,'DC','HOT');

1 row created.

SQL> insert into a values(3,'SI','BE',NULL);

1 row created.

SQL> insert into a values(4,NULL,NULL,'BAD');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from a;

        ID NOTE1      NOTE2      NOTE3
---------- ---------- ---------- ----------
         1 ER                    MOD
         2            DC         HOT
         3 SI         BE
         4                       BAD

SQL> create table b (id number NOT NULL, note1 VARCHAR2(10),note2 VARCHAR2(10),note3 VARCHAR2(10));

Table created.

SQL> insert into b values(1,'ER',NULL,NULL);  

1 row created.

SQL>  insert into b values(2,'FG','CC',NULL);

1 row created.

SQL> insert into b values(3,NULL,'MX','DEF');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from b;

        ID NOTE1      NOTE2      NOTE3
---------- ---------- ---------- ----------
         1 ER
         2 FG         CC
         3            MX         DEF

I need to update table a from table b with the following logic:

If table a's field is not null, keep it; if table a's field is null, update it with table b's data for the same id. Also, if a row's id in table a has no match in table b, do not update that row of a.

So after updating, it looks like:


        ID NOTE1      NOTE2      NOTE3
---------- ---------- ---------- ----------
         1 ER                    MOD
         2 FG         DC         HOT
         3 SI         BE         DEF
         4                       BAD

Can we just use one SQL statement instead of using PL/SQL to get expected results in 9.2.0.4?

Thanks. 

Tom Kyte
February 07, 2006 - 1:29 am UTC

place a primary key on B(ID) instead of NOT NULL and this will do it:

ops$tkyte@ORA9IR2> update ( select a.note1 anote1, a.note2 anote2, a.note3 anote3,
  2                  b.note1 bnote1, b.note2 bnote2, b.note3 bnote3
  3                     from a, b
  4            where a.id = b.id )
  5    set anote1 = nvl(anote1,bnote1),
  6        anote2 = nvl(anote2,bnote2),
  7        anote3 = nvl(anote3,bnote3)
  8  /

3 rows updated.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from a;

        ID NOTE1      NOTE2      NOTE3
---------- ---------- ---------- ----------
         1 ER                    MOD
         2 FG         DC         HOT
         3 SI         BE         DEF
         4                       BAD
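For completeness, a minimal sketch of the one-time change the answer relies on - an assumption about how you would apply it to the tables created above, not part of the original follow-up. The key on b(id) is what makes the join view key-preserved and therefore updatable:

alter table b add constraint b_pk primary key (id);

-- without this key, the update on the join view raises
-- ORA-01779: cannot modify a column which maps to a non key-preserved table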
 

Great answer

A reader, February 07, 2006 - 10:29 am UTC

Tom,
Thanks so much for your great help. I always learn something
new and efficient from your site.

To Ramis SQL problem

Michel Cadot, February 07, 2006 - 4:18 pm UTC

Ramis,

If you get a solution to one of your problems, why don't you post it here so everyone can take advantage of it?

Another point: it would be fair (if you want us to help you) to post a working test case. Yours contains many errors.

Regards
Michel


OK

Prasad, February 08, 2006 - 5:00 am UTC

Hi Tom,
This query retrieves duplicate values from a table..

SQL> desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------
 X                                                  NUMBER(38)


SQL> select * from t t1
  2  where not exists(
  3  select 'x' from t
  4  group by x
  5  having min(rowid) = t1.rowid)
  6  /


Any other way to put this query?? 
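One commonly cited alternative - a sketch only, not taken from the original thread - reads the table a single time and uses an analytic function to mark every row that is not the first occurrence of its value:

select x
  from (select x,
               row_number() over (partition by x order by rowid) rn
          from t)
 where rn > 1;

Rows with rn > 1 are exactly the "extra" copies that the NOT EXISTS version returns.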

Tom Kyte
February 08, 2006 - 8:11 am UTC

consecutive days??

Ramis, February 09, 2006 - 12:47 pm UTC

Tom,

Sorry for my earlier erroneous DDL data.

I have now corrected it and have reasked my question; I hope this will solve my problem.



 create table T
 (student_id number,
 class_id number,
 quiz_id number,
 marks number,
 quiz_date date)
/

INSERT INTO T VALUES (1,1, 1, 50, '22 Jan 2004');
INSERT INTO T VALUES (2,2, 2, 40, '24 Jan 2004');
INSERT INTO T VALUES (3,1, 3, 34, '24 Jan 2005');
INSERT INTO T VALUES (1,1, 4, 10, '26 Jan 2005');
INSERT INTO T VALUES (1,1, 5, 30, '29 Jan 2005');
INSERT INTO T VALUES (1,1, 6, NULL,  '31 Jan 2005');
INSERT INTO T VALUES (3,2, 7,34, '02 Feb 2005');
INSERT INTO T VALUES (3,2, 8,33,  '09 Feb 2005');
INSERT INTO T VALUES (3,2, 9,56,  '09 Feb 2005');
INSERT INTO T VALUES (1,1, 7, 90, '26 Feb 2005');
INSERT INTO T VALUES (2,2, 8, 0,   '26 Feb 2005');
INSERT INTO T VALUES (1,1, 8, 80, '28 Feb 2005');
INSERT INTO T VALUES (2,2, 8, 65, '28 Feb 2005');
INSERT INTO T VALUES (1,1, 9, NULL ,  '31 Mar 2005');
INSERT INTO T VALUES (2,2, 9, 11, '31 Mar 2005');



SQL> SELECT * FROM T
  2  
SQL> /

STUDENT_ID   CLASS_ID    QUIZ_ID      MARKS QUIZ_DATE
---------- ---------- ---------- ---------- ----------
         1          1          1         50 22-01-2004
         2          2          2         40 24-01-2004
         3          1          3         34 24-01-2005
         1          1          4         10 26-01-2005
         1          1          5         30 29-01-2005
         1          1          6            31-01-2005
         3          2          7         34 02-02-2005
         3          2          8         33 09-02-2005
         3          2          9         56 09-02-2005
         1          1          7         90 26-02-2005
         2          2          8          0 26-02-2005

STUDENT_ID   CLASS_ID    QUIZ_ID      MARKS QUIZ_DATE
---------- ---------- ---------- ---------- ----------
         1          1          8         80 28-02-2005
         2          2          8         65 28-02-2005
         1          1          9            31-03-2005
         2          2          9         11 31-03-2005


I require

the marks obtained by each student in quizzes over any period of days; the number of days (which we will specify in the query) could be, say, 10 days, 15 days, 365 days, etc.


For example, this is the sample data for student 1 from the list I provided above in my original message.

SQL> SELECT * FROM T where student_id = 1 
  2  
SQL> /

st_id c_id  quiz_id  marks quiz_date
1     1     1        50    22 Jan 2004
1     1     4        10    26 Jan 2005
1     1     5        30    29 Jan 2005
1     1     6              31 Jan 2005
1     1     7        90    26 Feb 2005
1     1     8        80    28 Feb 2005
1     1     9              31 Mar 2005


Now for the above data, say I want to calculate the marks obtained by student 1 in any period of 10 days, starting from his first quiz, then from his 2nd quiz, then from his 3rd, and so on, in the same result.

His first quiz was on 22 Jan 2004; counting 10 days from the 22nd, the period ends on 31 Jan 2004 - this is shown by the first row of my desired output below.

His next quiz is a year later, on 26 Jan 2005; counting ten days from the 26th gives 04 Feb 2005 - the 2nd row of my desired output.

The third quiz was on 29 Jan 2005, and 10 days are completed on 07 Feb 2005 - the same for the third row.


desired output
st_id   sum_marks  start_date   end_date
1       50        22 Jan 2004   31 Jan 2004
1       40        26 Jan 2005   04 Feb 2005
1       30        29 Jan 2005   07 Feb 2005
1                 31 Jan 2005   09 Feb 2005 
1      170        26 Feb 2005   07 Mar 2005
1       80        28 Feb 2005   09 Mar 2005
1                 31 Mar 2005   09 Apr 2005 

The same applies to the other students. I want the answer for all students from one query, in a single output.


regards

Ramis 

To Ramis

Michel Cadot, February 09, 2006 - 2:47 pm UTC

Well, I still don't see the solution you found to your 2nd query (Feb 03-04, 2006). Maybe you don't want to share.

Here's for the first one:

SQL> /
STUDENT_ID  SUM_MARKS START_DATE  END_DATE
---------- ---------- ----------- -----------
         1         50 22 Jan 2004 31 Jan 2004
         1         40 26 Jan 2005 04 Feb 2005
         1         30 29 Jan 2005 07 Feb 2005
         1            31 Jan 2005 09 Feb 2005
         1        170 26 Feb 2005 07 Mar 2005
         1         80 28 Feb 2005 09 Mar 2005
         1            31 Mar 2005 09 Apr 2005
         2         40 24 Jan 2004 02 Feb 2004
         2         65 26 Feb 2005 07 Mar 2005
         2         65 28 Feb 2005 09 Mar 2005
         2         11 31 Mar 2005 09 Apr 2005
         3         68 24 Jan 2005 02 Feb 2005
         3        123 02 Feb 2005 11 Feb 2005
         3         89 09 Feb 2005 18 Feb 2005
         3         89 09 Feb 2005 18 Feb 2005

15 rows selected.

Regards
Michel
 

Michel Cadot

ramis, February 09, 2006 - 3:04 pm UTC

Michel Cadot,
thanks for your reply

Perhaps you missed posting your query along with the result; please post it.

I apologize for forgetting to post my query for the other problem

here is that

select  student_id st_id,
  class_id cl_id,
  count(*) t_quizes,
  sum(nvl(marks,0)) t_marks,
  max(nvl(marks,0)) hs_marks,
  min(quiz_id) st_quiz,
  max(quiz_id) end_quiz,
  to_char(quiz_date,'Mon') month,
 to_char(quiz_date,'yyyy') year
 from t
 group by student_id, class_id, to_char(quiz_date,'Mon'), to_char(quiz_date,'yyyy')
 order by student_id, year, to_date(to_char(quiz_date,'Mon'),'Mon')


SQL> /

ST_ID  CL_ID   T_QUIZES    T_MARKS   HS_MARKS    ST_QUIZ   END_QUIZ MON YEAR
---------- ---------- ---------- ---------- ---------- ---------- ---------- --- ----
1      1          1         50         50          1          1 Jan 2004
1      1          3         40         30          4          6 Jan 2005
1      1          2        170         90          7          8 Feb 2005
1      1          1          0          0          9          9 Mar 2005
2      2          1         40         40          2          2 Jan 2004
2      2          2         65         65          8          8 Feb 2005
2      2          1         11         11          9          9 Mar 2005
3      1          1         34         34          3          3 Jan 2005
3      2          3        123         56          7          9 Feb 2005

regards
ramis 

To Ramis

Michel Cadot, February 09, 2006 - 4:57 pm UTC

Maybe you're right.

SQL> select student_id, 
  2         sum(marks) 
  3           over (partition by student_id order by quiz_date
  4                 range between current row and interval '9' day following) 
  5           sum_marks,
  6         quiz_date start_date, 
  7         quiz_date + interval '9' day end_date
  8  from t
  9  order by student_id, quiz_date
 10  /
STUDENT_ID  SUM_MARKS START_DATE  END_DATE
---------- ---------- ----------- -----------
         1         50 22 Jan 2004 31 Jan 2004
         1         40 26 Jan 2005 04 Feb 2005
         1         30 29 Jan 2005 07 Feb 2005
         1            31 Jan 2005 09 Feb 2005
         1        170 26 Feb 2005 07 Mar 2005
         1         80 28 Feb 2005 09 Mar 2005
         1            31 Mar 2005 09 Apr 2005
         2         40 24 Jan 2004 02 Feb 2004
         2         65 26 Feb 2005 07 Mar 2005
         2         65 28 Feb 2005 09 Mar 2005
         2         11 31 Mar 2005 09 Apr 2005
         3         68 24 Jan 2005 02 Feb 2005
         3        123 02 Feb 2005 11 Feb 2005
         3         89 09 Feb 2005 18 Feb 2005
         3         89 09 Feb 2005 18 Feb 2005

15 rows selected.

Regards
Michel
 

Thanks Michel

ramis, February 09, 2006 - 11:22 pm UTC

Michel, thanks a lot for your query.
regards
Ramis


SQL QUERY

Bhavesh Ghodasara, February 14, 2006 - 8:07 am UTC

hi Tom,
I am creating a report in which I have to fill rows dynamically,
similar to this

CREATE TABLE t(a NUMBER,b NUMBER)

INSERT INTO t VALUES(1,'');

INSERT INTO t VALUES(2,'');

INSERT INTO t VALUES('',3);

INSERT INTO t VALUES('',4);

INSERT INTO t VALUES('',5);

SQL> select *
  2  from t;

        A         B
--------- ---------
        1
        2
                  3
                  4
                  5

Now I want output like this:

        A         B
--------- ---------
        1         3
        2         4
                  5

It's like this: in each row, only one of a and b has a value while the other is null.
It doesn't matter how many values a or b has; the number of output rows should be the greater of the two counts. Here a has 2 values while b has 3, so the final
output contains 3 rows.
I want to avoid a self-join and use analytics instead. I tried it using first_value, lag and lead
but failed to get it working.
How can I do that?
Thanks in advance.

Tom Kyte
February 14, 2006 - 8:28 am UTC

ops$tkyte@ORA10GR2> select rn, max(a), max(b)
  2    from (
  3  select a, b,
  4         coalesce(
  5             case when a is not null then rn1 end ,
  6         case when b is not null then rn2 end
  7             ) rn
  8    from (
  9  select a, b,
 10         row_number() over (partition by case when b is null then 1 end order by a) rn1,
 11             row_number() over (partition by case when a is null then 1 end order by b) rn2
 12    from t
 13   where a is null OR b is null
 14         )
 15         )
 16   group by rn
 17  /

        RN     MAX(A)     MAX(B)
---------- ---------- ----------
         1          1          3
         2          2          4
         3                     5



that only works if A is null or B is null - which I presume I can assume. 

great work

Bhavesh Ghodasara, February 15, 2006 - 8:41 am UTC

Great work Tom, thanks very much.
After my post I also succeeded in doing it myself.
My query is:
SELECT *
FROM
(
SELECT a,lag(b,GREATEST(y.cnta,y.cntb)-1)over(ORDER BY b ) lb
FROM t x,(SELECT COUNT(a)cnta,COUNT(b)cntb FROM t)y)z
WHERE NVL(a,lb)>0

thanks again..

NVLs and Outer Joins

Rish G, July 11, 2006 - 4:50 pm UTC

Tom,
I've been trying to figure out the difference between these 2 queries.

create table recip_elig(
recip_id varchar2(5),
med_stat varchar2(2),
fdos date )

insert into recip_elig
values('10010', 'WI', '10-APR-2006');

insert into recip_elig
values('10020', 'SC', '11-APR-2006');

insert into recip_elig
values('10030', 'L1', '02-APR-2006');

insert into recip_elig
values('10040', '23', '13-MAY-2006');

CREATE TABLE RECIP2082(
recip_id varchar2(5),
elig_ind varchar2(1),
trans_date date);

insert into recip2082
values('10010', 'L', '01-APR-2006');

insert into recip2082
values('10010', 'L', '01-MAY-2006');

insert into recip2082
values('10020', 'U', '01-MAR-2006');

insert into recip2082
values('10020', 'U', '01-APR-2006');

insert into recip2082
values('10020', 'U', '01-FEB-2006');

insert into recip2082
values('10020', 'U', '01-MAY-2006');

insert into recip2082
values('10030', 'A', '01-FEB-2006');

Criteria : I want to assign a code to each recipient in the recip_elig table based on certain conditions.
When the med_stat code for a recipient is either WI or L1, assign a code of 01.
When the med_stat code for a recipient is SC, the corresponding elig_ind in the recip2082 table is U, and the FDOS in the recip_elig table lies within the month following the trans_date in the recip2082 table, assign a code of 02.
When the med_stat code is 23, assign a code of 03.

A simple query - and this query gives me the correct answer:
SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
CASE WHEN MED_STAT IN ('WI', 'L1')
THEN '01'
WHEN MED_STAT = 'SC' AND R2.ELIG_IND = 'U' THEN '02'
WHEN MED_STAT = '23' THEN '03'
END AS DUAL_ELIG
FROM RECIP_ELIG RE, RECIP2082 R2
WHERE RE.RECIP_ID = R2.RECIP_ID(+)
AND FDOS BETWEEN last_day(R2.TRANS_DATE(+))+1 AND last_day(add_months(R2.TRANS_DATE(+) ,1))

RECIP ME FDOS E TRANS_DAT DU
----- -- --------- - --------- --
10020 SC 11-APR-06 U 01-MAR-06 02
10030 L1 02-APR-06 01
10040 23 13-MAY-06 03
10010 WI 10-APR-06 01

Later, I had to include another table in this query that also had to be outer joined to recip_elig. Since Oracle does not allow a table to be outer joined to more than one table, I tried to work around the above query by using NVL.
Here is the query :
SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
CASE WHEN MED_STAT IN ('WI', 'L1')
THEN '01'
WHEN MED_STAT = 'SC' AND R2.ELIG_IND = 'U' THEN '02'
WHEN MED_STAT = '23' THEN '03'
END AS DUAL_ELIG
FROM RECIP_ELIG RE, RECIP2082 R2
WHERE RE.RECIP_ID = R2.RECIP_ID(+)
AND FDOS BETWEEN last_day(nvl(R2.TRANS_DATE, '01-JAN-1900'))+1 AND last_day(add_months(NVL(R2.TRANS_DATE, '01-JAN-2099') ,1))

RECIP ME FDOS E TRANS_DAT DU
----- -- --------- - --------- --
10020 SC 11-APR-06 U 01-MAR-06 02
10040 23 13-MAY-06 03

I was wondering why the NVL didn't work with the criteria
FDOS BETWEEN last_day(nvl(R2.TRANS_DATE, '01-JAN-1900'))+1 AND last_day(add_months(NVL(R2.TRANS_DATE, '01-JAN-2099') ,1)) whereas the outer join worked? I can't seem to understand the difference.

Recip 10010 matches the 1st case condition and should be assigned the code 01. Why is it bumped off from the result set of the second query?

Thanks once again,
Rish G.

Tom Kyte
July 12, 2006 - 3:26 pm UTC

why not use the ansi outer join syntax?  

PLEASE use to_date with a format - stop relying on implicit conversions, you'll get seriously burned soon.

you missed a plus.


ops$tkyte@ORA10GR2> SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
  2  CASE WHEN MED_STAT IN ('WI', 'L1')
  3  THEN '01'
  4  WHEN MED_STAT = 'SC'  AND R2.ELIG_IND = 'U' THEN '02'
  5  WHEN MED_STAT = '23' THEN '03'
  6  END AS DUAL_ELIG
  7  FROM RECIP_ELIG RE, RECIP2082 R2
  8  WHERE RE.RECIP_ID = R2.RECIP_ID(+)
  9  AND FDOS BETWEEN last_day(nvl(R2.TRANS_DATE(+), '01-JAN-1900'))+1 AND
 10  last_day(add_months(NVL(R2.TRANS_DATE(+), '01-JAN-2099') ,1))
 11  /

RECIP ME FDOS      E TRANS_DAT DU
----- -- --------- - --------- --
10020 SC 11-APR-06 U 01-MAR-06 02
10040 23 13-MAY-06             03
10010 WI 10-APR-06             01
10030 L1 02-APR-06             01


You wanted to look at the date in the outer join part.  When you leave off the outer join you get:

ops$tkyte@ORA10GR2> SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
  2  CASE WHEN MED_STAT IN ('WI', 'L1')
  3  THEN '01'
  4  WHEN MED_STAT = 'SC'  AND R2.ELIG_IND = 'U' THEN '02'
  5  WHEN MED_STAT = '23' THEN '03'
  6  END AS DUAL_ELIG
  7  FROM RECIP_ELIG RE, RECIP2082 R2
  8  WHERE RE.RECIP_ID = R2.RECIP_ID(+)
  9  /

RECIP ME FDOS      E TRANS_DAT DU
----- -- --------- - --------- --
10010 WI 10-APR-06 L 01-APR-06 01
10010 WI 10-APR-06 L 01-MAY-06 01
10020 SC 11-APR-06 U 01-MAR-06 02
10020 SC 11-APR-06 U 01-APR-06 02
10020 SC 11-APR-06 U 01-FEB-06 02
10020 SC 11-APR-06 U 01-MAY-06 02
10030 L1 02-APR-06 A 01-FEB-06 01
10040 23 13-MAY-06             03

8 rows selected.


the rows are not "missing", but the dates are out of range. 
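For reference, here is a sketch of the ANSI outer join form alluded to above - an untested rewrite of the same query, not taken from the original answer. Moving the date condition into the ON clause keeps the preserved rows:

SELECT re.recip_id, re.med_stat, re.fdos, r2.elig_ind, r2.trans_date,
       CASE WHEN med_stat IN ('WI', 'L1') THEN '01'
            WHEN med_stat = 'SC' AND r2.elig_ind = 'U' THEN '02'
            WHEN med_stat = '23' THEN '03'
       END AS dual_elig
  FROM recip_elig re
  LEFT OUTER JOIN recip2082 r2
    ON  re.recip_id = r2.recip_id
    AND re.fdos BETWEEN last_day(r2.trans_date) + 1
                    AND last_day(add_months(r2.trans_date, 1));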

SQL VARIABLE

Henikein, July 12, 2006 - 12:33 pm UTC

Please help me with this, Tom.

I want to populate a variable from a SELECT statement and pass it to a procedure from the SQL prompt.
Let me show it:

sql> select max(ach_id) achid from ad_ach_history;
sql> exec ad_log_min(achid);

The alias achid holds the max ach_id (2287); I want to pass this value into the ad_log_min procedure automatically. This is a SQL script that runs periodically.

The error log says: identifier ACHID must be declared.

Please tell me how to store a value in a variable in a SQL*Plus session and then pass it to a procedure/function/package.

Tom Kyte
July 12, 2006 - 3:50 pm UTC

ops$tkyte@ORA10GR2> create or replace procedure p( p_input in number )
  2  as
  3  begin
  4          dbms_output.put_line( 'you sent me... ' || p_input );
  5  end;
  6  /

Procedure created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> declare
  2          l_input number;
  3  begin
  4          select max(user_id) into l_input
  5            from all_users;
  6          p(l_input);
  7  end;
  8  /
you sent me... 172

PL/SQL procedure successfully completed.
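If you want to stay entirely at the SQL*Plus prompt, a bind variable does the same job - a small sketch using the poster's own object names (assuming ad_ach_history and ad_log_min exist as described):

variable achid number

begin
   select max(ach_id) into :achid from ad_ach_history;
end;
/

exec ad_log_min( :achid )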

 

Spool

A reader, July 13, 2006 - 4:43 am UTC

Great, I figured that out yesterday, but then:

How do I spool is the question.

I opened an anonymous block and was able to pass the variable as desired, but spooling the output was not achieved.

I need to spool the execution of the procedure - how is that possible inside a PL/SQL block?

Thanks Tom, you're the best.

Tom Kyte
July 13, 2006 - 7:53 am UTC

you do not "spool" inside of an anonymous block (since that is plsql and SPOOL is a sqlplus command)

You spool an anonymous block:


spool foo
begin
code...
end;
/
spool off
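A slightly fuller sketch, assuming you also want any dbms_output from the block captured (demo_proc is a hypothetical procedure name, not from the thread):

set serveroutput on
spool run_log
begin
    demo_proc;
end;
/
spool off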

help

Paul, July 27, 2006 - 10:14 pm UTC

Hi Tom

Would you please help me?
I want to write a query that shows all tablespace_name, owner of all tablespaces, and table_names of all owners.

The result must thus be something like:

--TABLESPACE_NAME

EXAMPLE
+HR -- schema or owner
.REGIONS --tables of schema
.LOCATIONS
.DEPARTMENTS
.JOBS
...

+OE
.CUSTOMERS
.WAREHOUSES
.ORDER_ITEMS
...

...

USERS
+Gk
.emp --rest of tables
.dept

+JC
... --rest of schemas
.... --rest of tablespaces

I used these queries but I do not know how to make the relation between the cursors.

select distinct(tablespace_name) from all_tables;
select distinct(owner) from all_tables where tablespace_name=<tablespace_name>;
select table_name from all_tables where owner=<owner>;

Please help me.


Tom Kyte
July 27, 2006 - 10:50 pm UTC

problem: there is no such thing as an "owner of all tablespaces"

do you mean - I want to show

"all tablespaces, the schemas that own stuff in them, the stuff they own"?

that would be a single query against dba_segments.

Query

Paul, July 28, 2006 - 1:03 pm UTC

Hi Tom,

But in dba_segments there is no table_name.

Yes, I want to show all tablespaces, all schemas of each tablespace, and all tables of each schema, in a procedure or query.


Tom Kyte
July 28, 2006 - 8:44 pm UTC

but in dba_segments is the segment_name, which is the table name.

dba_segments is what you want to query, it shows all segments, it shows their names, it shows what type they are.

it is in fact what you want

SQL Query

Paul, August 01, 2006 - 11:13 am UTC

Thanks Tom,

But my problem is that I do not know how to display tablespace_name, schemas, and table_names at the same time in PL/SQL Server Pages or in a query. I worked with cursors and arrays but I do not know how to connect the cursors to get that view or result.


For example:
USERS
+ HR
+REGIONS
+LOCATIONS
+DEPARTMENTS
+JOBS
+EMPLOYEES
...
+ JC
+X
+DEPT
....
...
EXAMPLES
+ K
+ CITY
+ COUNTRIES
+ MC
...
...

Would you please help me?

Tom Kyte
August 01, 2006 - 7:01 pm UTC

what about just a "table" of data....


ops$tkyte%ORA10GR2> select decode( rn1, 1, tablespace_name ) tsname,
  2         decode( rn2, 1, owner ) seg_owner,
  3             segment_name, blocks
  4    from (
  5  select tablespace_name, owner, segment_name, blocks,
  6         row_number() over (partition by tablespace_name order by owner, segment_name) rn1,
  7         row_number() over (partition by tablespace_name, owner order by segment_name) rn2
  8    from dba_segments
  9         )
 10  order by tablespace_name, owner, segment_name
 11  /

TSNAME       SEG_OWNER    SEGMENT_NAME                       BLOCKS
------------ ------------ ------------------------------ ----------
BIG_TABLE    BIG_TABLE    BIG_TABLE                          147456
                          BIG_TABLE_PK                        22528
                          BT_OWNER_IDX                        23552
                          BT_OWNER_OBJECT_TYPE_IDX            35840
             OPS$TKYTE    ABC                                     8
INDEX2K      LOTTOUSER    GAME_BET_LOG_IDX002                 13312
                          GBL_DATE_GAME_TRANS_LIC             13312
                          GBL_LICENSEE_ID                     25600
LOTTO        LOTTOUSER    GAME_BET_LOG                          128
                          SYS_C001088                           128
SYSAUX       CTXSYS       DR$CLASS                                8
                          DR$INDEX                                8
                          DR$INDEX_ERROR                          8
                          DR$INDEX_PARTITION                      8
                          DR$INDEX_SET                            8
 

Interactive

Paul, August 07, 2006 - 9:57 pm UTC

Thanks very much Tom.

But how can I do it with cursors? I want to deploy it in an HTML page with checkboxes so the user can choose tables of any schema.
I am doing it with cursors, but I do not know how to do recursive cursors.

Would you please help?




Tom Kyte
August 08, 2006 - 7:35 am UTC

I have no idea what you mean, why you do you think you need "recursive cursors" and what does a checkbox have to do with it?

column sort

Anitha, August 08, 2006 - 2:22 am UTC

I have a recordset in a table like this:

category academic_year class section
--------- ------------- ----- ---------
icse 2006-2007 I a
state 2004-2005 II b
state 2006-2007 Lkg b
icse 2006-2007 X a

I want to get the output like this:

class
------
Pre-kg
lkg
ukg
I
II
III
IV
V
VI
VII
VIII
IX
X
I want the solution for this; please advise.

Sincerely
Anitha





Tom Kyte
August 08, 2006 - 7:45 am UTC

I have no idea how to turn your inputs into those outputs. I see lots of data in the output that I cannot see where it came from.

that and I have no table creates, no inserts.

Question

Paul, August 08, 2006 - 3:42 pm UTC

I will explain better.
I am building a PSP form which displays a tree of tablespaces, schemas, and table names for the user to choose from, but I do not know how to display the tablespaces with cursors, nor how to pass the name of a tablespace to display its own schemas and tables.


Tom Kyte
August 09, 2006 - 10:04 am UTC

have you considered using HTML/DB - APEX instead of lovingly hand written code?

I would suggest for this - since the results will simply be MASSIVE, that you do not even attempt a "hierarchy"

Simply present a list of tablespaces.
Let them click on one.
Then show that tablespace's segments (owner.segment_name).

What is better way to write such SQLS ?

Parag J Patankar, August 09, 2006 - 7:45 am UTC

Hi Tom,

Following is a SQL example of finding missing data using mainly the t12, t17 and v11 tables in a 9.2 database:

select a, b, c, d, e, f
from t12, t17, t31, d
where < big join condition between these tables >
a not in
(
select a
from t12
minus
select b
from t12, t17, t31, d, v11
where < big join condition between various tables >
)
/

t12 and t17 have 45,000 records each, with no indexes.
v11 has 32,000 records and no index.

All tables are analyzed using dbms_stats.

Will you please tell me if there is a better way to write this SQL in terms of performance? I am ready to create additional indexes if required.

thanks for educating me.

best regards
pjp



Tom Kyte
August 09, 2006 - 10:46 am UTC

I don't know what you mean by "missing data" here.

missing data !!!

Parag J Patankar, August 10, 2006 - 12:39 am UTC

Hi Tom,

In my question in this thread, "missing data" means missing reference numbers. More details:

1/ I have to get the references (field b) of table v11 by joining various tables to it.

2/ I have to select data from tables t12 and t17 where the reference (field a) from table t12 does not exist in table v11.

I hope this clarifies my question.

thanks & regards
pjp

Tom Kyte
August 10, 2006 - 9:13 am UTC

not really - I don't know your data model, the implied relations, the actual in place relations and all.

select a, b, c, d, e, f
from t12, t17, t31, d
where < big join condition between these tables >
a not in
(
select a
from t12
minus
select b
from t12, t17, t31, d, v11
where < big join condition between various tables >
)
/

select ....
from t12, t17, t32, d
where <join>
and not exists (select null
from t12 t12_inner, v11
where t12_inner.a = t12.a
and v11.whatever = whatever.whatever )


seems to be similar. but cannot say for sure - you might look at this "concept" and see if it applies.

Paul, August 24, 2006 - 8:52 pm UTC

Thanks very much Tom

How does oracle evaluate null or is null ?

A reader, September 12, 2006 - 1:35 pm UTC

Tom,

I have a table with 1 million rows, of which about 1,000 rows have a particular timestamp field that is null.

Now, if I were to run a query like:

update t
set last_tmst = systimestamp
where tas_tmst is null <=== ***


We have a non-unique index on last_tmst, although the explain plan shows a full table scan.


Q) How would Oracle find which rows contain these nulls and where they are located, and then update them?

Can you please explain?






Tom Kyte
September 12, 2006 - 5:34 pm UTC

you have an index on last_tmst
you are searching on tas_tmst

so - why in the world would you think we would use it?

if you had an index on:

t(tas_tmst,0)

or

t(tas_tmst,ANY_NON_NULLABLE_COLUMN)


then we could use that index to find the nulls.
See http://asktom.oracle.com/Misc/something-about-nothing.html for more info.
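A minimal sketch of the kind of index described above, applied to the poster's own column (an assumption about the table, not part of the original answer) - appending a constant means rows where tas_tmst is null still get an index entry, so the IS NULL predicate becomes indexable:

create index t_tas_tmst_idx on t ( tas_tmst, 0 );

update t
   set last_tmst = systimestamp
 where tas_tmst is null;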

no of times a procedure or package has been executed

A reader, September 13, 2006 - 2:16 pm UTC

Is there a way we can find out how many times a package or procedure has been executed, from the dynamic performance views?


Tom Kyte
September 13, 2006 - 3:15 pm UTC

not really.

Top-level procedure calls can be seen in v$sql as "begin procedure; end;" type statements - but anything it calls would not be recorded.

auditing may help.
do it yourself auditing would definitely help.
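For what it is worth, a rough sketch of mining v$sql for such top-level calls (my_proc is a hypothetical procedure name; this only sees statements still cached in the shared pool):

select sql_text, executions
  from v$sql
 where lower(sql_text) like 'begin my_proc%';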

Brad, October 23, 2006 - 1:06 pm UTC

I have 7 tables. Each table has 30 columns and around 8 million rows. They all have a primary key on 2 fields (line_number, store_number). What is the best way to join all these
tables? I tried the UNION ALL and MAX() trick, but it blew out the temp space. Any other ways?


Tom Kyte
October 23, 2006 - 1:53 pm UTC

there is only one way to join them all - via a SQL statement that "joins"

Not sure what you were using union all and max() or what "trick" was involved - you are not being very clear.

A reader, October 23, 2006 - 4:12 pm UTC

I want to full outer join all the tables.


It's basically the following:


select
line_number,
store_number,
max(sales),
max(stock)
from
(
select
line_number,
store_number,
sales,
null stock
from table_1
union all
select
line_number,
store_number,
null sales,
stock
from table_2

----
---
table3
--
table4
...
table5
table7

)
group by
line_number,
store_number

Tom Kyte
October 23, 2006 - 5:30 pm UTC

umm, you are not even joining.

is store number/line number unique in each of these seven tables.

this is not making any sense - I don't see a single join anywhere.

Excellent

Venkat, October 27, 2006 - 7:09 pm UTC

I have this simple query. I am trying to get the column heading built from Range1 and Range2 when I run the query. Let us say the age derived from birthdate falls in the 40-to-50 range; I would like the heading to display 40 and 50, without hardcoding the value like "40 and 50". Basically I want to concatenate B.RANGE1 and B.RANGE2 to form the column heading. I tried but was not able to do this. Could you help?

select
(CASE when trunc( months_between(sysdate,birthdate) /12 ) between B.RANGE1 and B.RANGE2 then 1 else 0 END)
"RANGE1 and RANGE2"
FROM PS_PERSONAL_DATA A, PS_AGE_RANGE B
WHERE EMPLID = 'XXXXX'

Tom Kyte
October 27, 2006 - 8:19 pm UTC

you will have to "hard code" identifiers, column names are identifiers.


in general, every single row in your query could have a different value for its name!!!


To: Venkat

Michel Cadot, October 28, 2006 - 2:12 am UTC

As it is, if you have 10 ranges your query will display nine lines with "0" and one with "1". If you achieve what you want, you will have something like:

Between 40 and 50
-----------------
0
0
0
0
0
1
0
0
0
0

Which is quite useless.
So what do you really want to achieve?

Michel


Thanks

venkat, October 28, 2006 - 9:39 am UTC

What I am trying is this: let us say the age range table has these values.
Range1  Range2
10      20
20      30
30      40
40      50
50      60
60      70
70      80
80      90
90      100

If the age is between 10 and 20, the column should display '10 and 20'; if between 20 and 30, it should display '20 and 30'; and so on. I do not want to hardcode '10 and 20' or '20 and 30' or ... I want these to come from the columns Range1 and Range2. If I were only querying ages between 10 and 20, I could hardcode the column heading as "10 and 20"; when I do not know the values in the Range1 and Range2 columns, I cannot hardcode it, right? The ranges in the table change, so I want to make this dynamic instead of hardcoding. I hope I made my question clear.
Thanks again for your time to answer my questions.

Tom Kyte
October 28, 2006 - 10:42 am UTC

think about this for a minute please.


You have lots of ranges there.

And how many names can an individual column be known by exactly????????

think about it - really. A column can have a name of "10 and 20" but what happens when we join to your second row eh???

you are asking for something that is not quite possible to do regardless of how you try to do it - a column has A NAME.

To: Venkat

Michel Cadot, October 28, 2006 - 10:54 am UTC


I still don't understand.
What should be the result if you search for 2 EMPLID one in range 40-50 and the other one in range 10-20?

Michel


Tom Kyte
October 28, 2006 - 11:19 am UTC

(like I said "think about it" :)

Thanks Michel for the response

Venkat, October 29, 2006 - 6:06 pm UTC

So, are you saying it is not possible to display the results something like this below, showing the count of employees with their age falling in each range?

10 AND 20 20 AND 30 30 AND 40 40 AND 50 50 AND 60
========= ========= ========= ========= =========
       20        20        20        20        20

Thanks again for all your time.

Tom Kyte
October 29, 2006 - 6:32 pm UTC

that definitely is - but - and this is important to know - IDENTIFIERS ARE STATIC

in order to write that query, you must:

a) query your range table to
b) discover your ranges to
c) discover the NUMBER OF COLUMNS you have to
d) create the query that incorporates your ranges as column names AND selects that many columns.

To: Venkat

Michel Cadot, October 30, 2006 - 2:46 am UTC

To show how to use what Tom said.
You can do it in SQL*Plus, if you know the number of ranges. For instance using good old SCOTT schema, I know there are 5 grades in SALGRADE, I can do:

SCOTT> def nbRange=5
SCOTT> def colWidth=12
SCOTT> col r1 format a&colWidth new_value c1
SCOTT> col r2 format a&colWidth new_value c2
SCOTT> col r3 format a&colWidth new_value c3
SCOTT> col r4 format a&colWidth new_value c4
SCOTT> col r5 format a&colWidth new_value c5
SCOTT> col f new_value f
SCOTT> select max(decode(grade,1,losal||' to '||hisal)) r1,
2 max(decode(grade,2,losal||' to '||hisal)) r2,
3 max(decode(grade,3,losal||' to '||hisal)) r3,
4 max(decode(grade,4,losal||' to '||hisal)) r4,
5 max(decode(grade,5,losal||' to '||hisal)) r5,
6 lpad('0',&colWidth-1,'9') f
7 from salgrade
8 /
R1           R2           R3           R4           R5           F
------------ ------------ ------------ ------------ ------------ -----------
 700 to 1200 1201 to 1400 1401 to 2000 2001 to 3000 3001 to 9999 99999999990

1 row selected.

SCOTT> col c1 format &f heading "&c1"
SCOTT> col c2 format &f heading "&c2"
SCOTT> col c3 format &f heading "&c3"
SCOTT> col c4 format &f heading "&c4"
SCOTT> col c5 format &f heading "&c5"
SCOTT> select
2 count(case when grade=1 and sal between losal and hisal then 1 end) c1,
3 count(case when grade=2 and sal between losal and hisal then 1 end) c2,
4 count(case when grade=3 and sal between losal and hisal then 1 end) c3,
5 count(case when grade=4 and sal between losal and hisal then 1 end) c4,
6 count(case when grade=5 and sal between losal and hisal then 1 end) c5
7 from emp, salgrade
8 /
 700 to 1200 1201 to 1400 1401 to 2000 2001 to 3000 3001 to 9999
------------ ------------ ------------ ------------ ------------
           3            3            2            5            1

1 row selected.

Of course, the first query should be enclosed between "set termout off" and "set termout on" to keep its result out of the report.

Now, if you don't know how many grades you have, you can simulate a SQL*Plus report with SQL:

SCOTT> set heading off
SCOTT> col nop noprint
SCOTT> col ul fold_before
SCOTT> with
2 ranges as (
3 select grade, lpad(losal||' to '||hisal, &colWidth) range,
4 count(*) over () nb
5 from salgrade
6 ),
7 data as (
8 select grade, to_char(count(*),&f) cnt
9 from emp, salgrade
10 where sal between losal and hisal
11 group by grade
12 )
13 select 1 nop,
14 replace(sys_connect_by_path(range,'/'),'/',' ') rg,
15 sys_connect_by_path(lpad('=',&colWidth,'='),' ') ul
16 from ranges
17 where level = nb
18 connect by prior grade = grade-1
19 start with grade = 1
20 union all
21 select 2 nop,
22 replace(sys_connect_by_path(nvl(d.cnt,lpad('0',&colwidth)),'/'),
23 '/',' '),
24 ''
25 from ranges r, data d
26 where level = r.nb
27 and d.grade (+) = r.grade
28 connect by prior r.grade = r.grade-1
29 start with r.grade = 1
30 order by 1
31 /
 700 to 1200 1201 to 1400 1401 to 2000 2001 to 3000 3001 to 9999
============ ============ ============ ============ ============
           3            3            2            5            1

2 rows selected.

I'll let you decipher the query; just execute it step by step to see what happens at each level.

Michel

Very useful

Biswadip Seth, October 30, 2006 - 6:39 am UTC

Hello Tom,
Thanks for the query. It helped me a lot. Now I have a problem: in a table I have data like

ID  Name  Dept
-----------------------
1   C     EDM
3   A     RSS
2   B     KOL
4   P     AAA
5   V     BBB


In a query I want the output with every column in sorted order, even though there will then be no relation between the columns within a row. I want output like below:

ID  Name  Dept
-----------------------
1   A     AAA
3   B     BBB
2   C     EDM
4   P     KOL
5   V     RSS

Is it possible to get such an output? Any help will be highly appreciated.

Thanking you,
Biswadip Seth



Tom Kyte
October 30, 2006 - 9:15 am UTC

that does not even begin to make sense to me, why would you want to do that? What kind of data could this be that this would make sense? I am very very curious on this one.

If you tell me, I'll show you how. don't forget the create table and insert intos as well!

To: Biswadip Seth

Michel Cadot, October 30, 2006 - 8:42 am UTC


Yes, it is possible but see Tom's answer at (for instance):
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3083286970877#70128556732374

Michel


Thanks Michel Again!!

Venkat, October 30, 2006 - 10:33 am UTC

I will try the query you have given. Thanks again for your valuable time.

Thanks Michel

Biswadip, October 30, 2006 - 11:22 pm UTC

Thanks Michel for your reply.
Actually, I have a scenario where I need to show all the data always in sorted order: the users want all the number fields sorted (highest value at the top) and the string fields sorted alphabetically. They were not very particular about the relationship between the columns, but they are keen to always see the values in sorted order.
So I needed that query.
I was able to write a query....but it's show....if you are able to write something better, please reply.

The following query is mine
----------------------------
Select a.id ,b.name ,c.dept
FROM (SElect rownum r ,ID
from (Select ID ,NULL NAME, NULL DEPT
from test_111 A Order By ID
))a,
(SElect rownum r ,NAME
from (Select NULL ID,NAME NAme, null dept
from test_111 A Order by name
))b,
(SElect rownum r ,dept
from (Select NULL ID,NUll NAme, Dept dept
from test_111 A Order by dept
))c
where a.r = b.r
and a.r = c.r
and b.r = c.r




Tom Kyte
October 31, 2006 - 8:41 am UTC

this is baffling

of what POSSIBLE use could this result be.

Utterly baffling.


select *
from (select id, row_number() over (order by id) rn from t) a,
(select name, row_number() over (order by name) rn from t) b,
(select dept, row_number() over (order by dept) rn from t) c
where a.rn = b.rn and b.rn = c.rn
order by a.rn;

if by "but it's show" you meant "but it's SLOW" - the only answer to that would be "no kidding, think about it"

Make the end user wait a long time
Use inordinate machine resources
to give them a scrambled egg

I cannot imagine the real world business case whereby this even BEGINS to make sense to do.

Again a Tricky one.....Urgent

Biswadip, October 30, 2006 - 11:30 pm UTC

I have a table called test (ID Number, ParentID Number, Name Varchar2(20)).
I have the following data in a hierarchical structure:

Root (0)
|
----LP-0 (1)
|
|
----LI-1 (2)
| |
| |--LP-1 (2.1)
| |--LP-2 (2.2)
| |--LP-3 (2.2)
|
|
----LO-1 (3)
| |
| |
| |--LP-4 (3.1)
| |--LP-5 (3.2)
| |--LO-2 (3.3)
| |
| |--LP-6 (3.3.1)
| |--LP-7 (3.3.2)
| |--LO-3 (3.3.3)
| |
| |--LP-8 (3.3.3.1)
| |--LP-9 (3.3.3.2)
|----LP-10 (4)


So the data in the table looks like:
LEVEL  ID  PARENTID  NAME
==============================================
1      1             Root
2      2   1         LP-0
2      3   1         LI-1
3      4   3         LP-1
3      5   3         LP-2
3      6   3         LP-3
2      7   1         LO-1
3      8   7         LP-4
3      9   7         LP-5
3      10  7         LO-2
4      11  10        LP-6
4      12  10        LP-7
4      13  10        LO-3
5      14  13        LP-8
5      15  13        LP-9
2      16  1         LP-10

I need output with another column, say LevelNumber, whose values are shown in the tree structure adjacent to each node. Reading the number from the right, the first number before the
first dot (.) indicates which child it is of its parent (1st, 2nd, 3rd and so on), and the rest of the number, in concatenated form, indicates which level and which parent it belongs to.
I have written a query to get the value of LevelNumber as I want. The query is below:

SELECT m.ID
, m.Name
, m.ParentID ParentID
, NVL(LTRIM(SYS_CONNECT_BY_PATH(m.Rank, '.'), '.'),'0') LevelNumber
FROM (SELECT ID,Name,ParentID,(CASE
WHEN PARENTID IS NULL THEN NULL
ELSE RANK() OVER (PARTITION BY ParentID ORDER BY ID)
END) Rank
FROM test) m
CONNECT BY m.ParentID = PRIOR m.ID
START WITH m.ParentID IS NULL
ORDER BY ID
/

LEVEL  ID  NAME   PARENTID  LEVELNUMBER
=====================================================
1      1   Root             0
2      2   LP-0   1         1
2      3   LI-1   1         2
3      4   LP-1   3         2.1
3      5   LP-2   3         2.2
3      6   LP-3   3         2.3
2      7   LO-1   1         3
3      8   LP-4   7         3.1
3      9   LP-5   7         3.2
3      10  LO-2   7         3.3
4      11  LP-6   10        3.3.1
4      12  LP-7   10        3.3.2
4      13  LO-3   10        3.3.3
5      14  LP-8   13        3.3.3.1
5      15  LP-9   13        3.3.3.2
2      16  LP-10  1         4


I am able to get the output as I wanted, but the problem is that I insert a huge number of rows - more than 10,000 at a time - and then generate this LevelNumber, which is inserted into its 4 child tables. I need the LevelNumber in the child tables, so I run this SELECT statement against all the child tables for the 10,000 rows, which in turn makes my application very slow.


Is there any better way to create this kind of number automatically?
Any help is highly appreciated.
Thanks.

With Regards
Biswadip Seth




To: Biswadip Seth

Michel Cadot, October 31, 2006 - 2:17 am UTC

You neither read the first 3 lines of the link I posted nor Tom's followup last sentence:

"no create table
no insert into
no look"

"don't forget the create table and insert intos as well!"

By the way, the query and explanation you gave do not match the result you showed in your previous post, in which ID is not sorted.

Michel

Michel Cadot, October 31, 2006 - 9:10 am UTC

Tom,

Just as an exercise, here's a solution that makes only one table scan:

with data as (
select id, name, dept, rn,
row_number () over (order by id) rid,
row_number () over (order by name) rnam,
row_number () over (order by dept) rdep
from (select rownum rn, t.* from t) )
select case
when rid = rn then id
when rid < rn
then lead(id,decode(sign(rn-rid),-1,0,rn-rid)) over (order by rid)
when rid > rn
then lag(id,decode(sign(rid-rn),-1,0,rid-rn)) over (order by rid)
end id,
case
when rnam = rn then name
when rnam < rn
then lead(name,decode(sign(rn-rnam),-1,0,rn-rnam)) over (order by rnam)
when rnam > rn
then lag(name,decode(sign(rnam-rn),-1,0,rnam-rn)) over (order by rnam)
end name,
case
when rdep = rn then name
when rdep < rn
then lead(dept,decode(sign(rn-rdep),-1,0,rn-rdep)) over (order by rdep)
when rdep > rn
then lag(dept,decode(sign(rdep-rn),-1,0,rdep-rn)) over (order by rdep)
end dept
from data
order by rn
/

With a little time I can find a more complicated way to solve this (funny) question. :)

Michel


Create and Insert Scripts related to "a Tricky one...."

Biswadip, November 02, 2006 - 7:20 am UTC

The scripts are as follows

create table TEST
(
ID NUMBER,
PARENTID NUMBER,
NAME VARCHAR2(20)
)
/
Insert Scripts are as below
---------------------------
insert into TEST (ID, PARENTID, NAME) values (1, null, 'Root');
insert into TEST (ID, PARENTID, NAME) values (2, 1, 'LP-0');
insert into TEST (ID, PARENTID, NAME) values (3, 1, 'LI-1');
insert into TEST (ID, PARENTID, NAME) values (4, 3, 'LP-1');
insert into TEST (ID, PARENTID, NAME) values (5, 3, 'LP-2');
insert into TEST (ID, PARENTID, NAME) values (6, 3, 'LP-3');
insert into TEST (ID, PARENTID, NAME) values (7, 1, 'LO-1');
insert into TEST (ID, PARENTID, NAME) values (8, 7, 'LP-4');
insert into TEST (ID, PARENTID, NAME) values (9, 7, 'LP-5');
insert into TEST (ID, PARENTID, NAME) values (10, 7, 'LO-2');
insert into TEST (ID, PARENTID, NAME) values (11, 10, 'LP-6');
insert into TEST (ID, PARENTID, NAME) values (12, 10, 'LP-7');
insert into TEST (ID, PARENTID, NAME) values (13, 10, 'LO-3');
insert into TEST (ID, PARENTID, NAME) values (14, 13, 'LP-8');
insert into TEST (ID, PARENTID, NAME) values (15, 13, 'LP-9');
insert into TEST (ID, PARENTID, NAME) values (16, 1, 'LP-10');
commit;



Sree, November 02, 2006 - 11:05 am UTC

Hi,

I am currently using Forms 4.5 with Oracle 8.0 as backend. We are migrating all forms & database to 10g. As a part of migration everything is under control except one issue.

In Oracle Forms, the statements below are not working. I can't post the big query, but to give you a hint, here is an example against the EMP table where the code updates the row in this manner.

update emp
set ename = :emp.ename,
sal = :emp.sal,
category = :emp.category,
where current of emp;


The above statement throws an error when I try to compile it in 10g Developer Suite.

Error 389:
Table, View or Alias name 'EMP' not allowed in this context.

Please throw some light on this, Tom.



Tom Kyte
November 02, 2006 - 12:10 pm UTC