SQL query
munz, July 21, 2002 - 5:07 pm UTC
Tom:
Based on this, can you always declare a variable, build a query string in it, and then open a cursor for it?
Will this also work for an in-list search like:
l_list=('va','ca','pa');
l_query:='select * from table where col in '||l_list;
open cursor for l_query;
July 21, 2002 - 11:56 pm UTC
yes it will, but it'll violate my "you had better use bind variables" rule, wouldn't it?
Make sure to put an
execute immediate 'alter session set cursor_sharing=force';
open ...
execute immediate 'alter session set cursor_sharing=exact';
around the dynamic open to lessen the impact. I would prefer NOT to use this approach at all.
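Concretely, that wrapper looks something like the following sketch (the procedure and parameter names here are made up for illustration; the point is the force/exact bracketing around the parse):

```sql
create or replace procedure search_in_list( p_list in varchar2 )
as
    l_cursor   sys_refcursor;
    l_ename    emp.ename%type;
begin
    -- force literal replacement with binds for this one parse
    execute immediate 'alter session set cursor_sharing=force';
    open l_cursor for 'select ename from emp where job in ' || p_list;
    -- the parse happens at OPEN, so we can restore the default right away
    execute immediate 'alter session set cursor_sharing=exact';

    loop
        fetch l_cursor into l_ename;
        exit when l_cursor%notfound;
        dbms_output.put_line( l_ename );
    end loop;
    close l_cursor;
end;
/
```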
Great
Bhavani, July 21, 2002 - 5:38 pm UTC
This Solves Lot of My problems. Once again, you are the Best.
Open cursor
munz, July 22, 2002 - 9:19 am UTC
Tom:
A followup to what you wrote:
execute immediate 'alter session set cursor_sharing=force';
execute immediate 'alter session set cursor_sharing=exact';
What does this do?
Thanks
July 22, 2002 - 10:03 am UTC
search this site for cursor_sharing and read all about it!
or read chapter 10 of my book "Expert One-on-One Oracle"
Great!
A reader, July 22, 2002 - 11:16 am UTC
Simple, concise suggestion and explanation of a useful development principle!
Thanks Tom!
that's cross tab query
Bhavesh Tailor, July 22, 2002 - 1:21 pm UTC
Hi Tom:
The above result can be easily obtained in MS Access using a crosstab query. Why does Oracle NOT provide a crosstab query?
July 22, 2002 - 6:23 pm UTC
MS Access -- the GUI reporting tool -- can do a crosstab (and heck, you can use that tool against Oracle if you like).
Oracle Reports -- the GUI reporting tool -- does crosstabs.
Oracle Discoverer -- the GUI ad-hoc query tool -- does crosstabs.
We have it. If you want to do it in PURE SQL in Access (eg: not using the GUI reporting tool) what then? Show me the code you would write in VB against Access directly that will do a crosstab. That is what we should be comparing here.
Jim, July 22, 2002 - 6:08 pm UTC
Although not recommended, I liked the example using the
cursor_sharing parameter.
Very Nice
R.Chakravarthi, August 29, 2003 - 12:33 am UTC
Dear Sir,
Can I have a query that selects rows from all tables in my
schema? Is a single select possible? Please let me know if you
have a query like that.
Thanks.
August 29, 2003 - 9:06 am UTC
you could write one that union alls all of the tables.
you would have to make every table look structurally the same (same number of columns -- eg, select NULL to make up columns when a table doesn't have enough, same types of columns -- eg, use to_char on number/dates to make everything a string)
of what possible use this could ever be -- well, that I cannot fathom.
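For what it is worth, a sketch of such a union over two hypothetical tables t1(a number, b date) and t2(x varchar2) would look like:

```sql
select to_char(a)                     c1,
       to_char(b,'YYYYMMDDHH24MISS')  c2
  from t1
union all
select x,
       null        -- pad with NULL where t2 has fewer columns
  from t2;
```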
Nice
R.Chakravarthi, September 02, 2003 - 1:23 am UTC
Dear Sir,
Well, and I wish the same for you. I have some questions related to queries.
1) Suppose I have a string like "Oracle". How do I write a query that prints each character on its own line?
2) Have a look at the following queries.
(i)select ename,deptno,sal from emp e1 where sal = (select max(sal) from emp e2 where e1.deptno = e2.deptno) order by deptno;
(ii)select ename,deptno,sal from emp where sal in(select
max(sal) from emp group by deptno);
My question is: is there any other way that this query can
be issued?
3) How to fix the "Single row subquery returns more than one
row" message?
Expecting your reply.
Thanks
September 02, 2003 - 7:26 am UTC
1)
ops$tkyte@ORA920> variable x varchar2(20)
ops$tkyte@ORA920>
ops$tkyte@ORA920> exec :x := 'Oracle'
PL/SQL procedure successfully completed.
ops$tkyte@ORA920>
ops$tkyte@ORA920> select substr( :x, rownum, 1 )
2 from all_objects
3 where rownum <= length(:x);
S
-
O
r
a
c
l
e
6 rows selected.
2) given that you gave me 2 totally different queries that return 2 totally different answers -- I'd have to say "i must be able to give an infinite set of responses since you are just looking for random queries" :)
ops$tkyte@ORA920> drop table emp;
Table dropped.
ops$tkyte@ORA920> create table emp as select * from scott.emp;
Table created.
ops$tkyte@ORA920> drop table dept;
Table dropped.
ops$tkyte@ORA920> create table dept as select * from scott.dept;
Table created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> update emp set sal = 2000;
14 rows updated.
ops$tkyte@ORA920> update emp set sal = 3000 where rownum = 1;
1 row updated.
ops$tkyte@ORA920> commit;
Commit complete.
ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename,deptno,sal
2 from emp e1
3 where sal = (select max(sal)
4 from emp e2
5 where e1.deptno = e2.deptno)
6 order by deptno;
ENAME DEPTNO SAL
---------- ---------- ----------
CLARK 10 2000
KING 10 2000
MILLER 10 2000
SMITH 20 3000
ALLEN 30 2000
TURNER 30 2000
JAMES 30 2000
WARD 30 2000
MARTIN 30 2000
BLAKE 30 2000
10 rows selected.
ops$tkyte@ORA920>
ops$tkyte@ORA920> select ename,deptno,sal
2 from emp
3 where sal in (select max(sal)
4 from emp
5 group by deptno );
ENAME DEPTNO SAL
---------- ---------- ----------
ALLEN 30 2000
WARD 30 2000
JONES 20 2000
BLAKE 30 2000
SCOTT 20 2000
TURNER 30 2000
JAMES 30 2000
MILLER 10 2000
FORD 20 2000
ADAMS 20 2000
KING 10 2000
CLARK 10 2000
MARTIN 30 2000
SMITH 20 3000
14 rows selected.
<b>see, totally different answers to your query with the same data</b> This is why I have a chapter in my new book on "don't tune a query, tune a question"
I have to assume that you want the set of people, by department, that make the TOP salary in that dept... queries such as:
select ename, deptno, sal
from ( select ename, deptno, sal,
rank() over ( partition by deptno order by sal desc nulls last ) r
from emp )
where r = 1;
select ename, deptno, sal
from ( select ename, deptno, sal,
max(sal) over ( partition by deptno ) max_sal
from emp )
where sal = max_sal
/
select ename, deptno, sal
from ( select ename, deptno, sal,
first_value(sal) over ( partition by deptno order by sal desc nulls last ) first_sal
from emp )
where sal = first_sal
/
will do that.
3) there is nothing to fix here.
You have a logic/algorithm problem.
You are basically saying something like:
select * from t where x = ( subquery )
and subquery ISN'T a single value but a set. what would you have us do? you need to phrase your problem in english and then we might be able to tell you how to solve it.
the error "single row subquery" isn't an "oracle" error -- it is a bug in your logic/code/data....
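For example (generic table and column names, just to illustrate the two shapes):

```sql
-- raises ORA-01427 "single-row subquery returns more than one row"
-- whenever the subquery produces a set:
--   select * from t where x = ( select y from t2 );

-- if "x matches any of them" is what you actually mean, say so:
select * from t where x in ( select y from t2 );

-- if you truly expect one value, make the subquery return one,
-- e.g. with an aggregate:
select * from t where x = ( select max(y) from t2 );
```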
Nice
R.Chacravarthi, September 03, 2003 - 5:29 am UTC
Dear Sir,
Thanks for your reply. My second question was:
2) I posted a query which returns the highest-salaried employees department-wise. Both queries return the same result set on my system. I use Oracle v8.0 Personal Edition. The queries and their result sets follow.
** What I asked you was "Is there any other way that this query can be expressed?" But you are agonising over my question. I don't know why.
Anyhow, thanks for spending your time. If you have any other way of expressing
the same query, please do write a followup.
SQL> select ename,sal,deptno from emp e1 where sal = (select max(sal) from
2 emp e2 where e1.deptno = e2.deptno) order by deptno;
ENAME SAL DEPTNO
---------- ---------- ----------
KING 5000 10
SCOTT 3000 20
FORD 3000 20
BLAKE 2850 30
SQL> select ename,sal,deptno from emp where sal in(select max(sal) from emp
2 group by deptno) order by deptno;
ENAME SAL DEPTNO
---------- ---------- ----------
KING 5000 10
SCOTT 3000 20
FORD 3000 20
BLAKE 2850 30
SQL> spool off
September 03, 2003 - 7:05 am UTC
your queries -- in general -- do NOT RETURN THE SAME data -- i proved that. so what if your data is set up such that they do; they are *not* the same query. they ask totally, totally different questions. they are not even a little comparable.
that is why i "agonized" over this -- you are making a very very very false assumption. One cannot tell you a 3rd way to write a query given that the first two queries are not even remotely similar.
remember -- it only takes one counter case to prove something wrong. your queries are not similar.
no matter how many times you run them -- they are not similar. they are not the same.
I did try to answer your question, which I assume was "find the highest paid people by deptno". I gave you many examples. your first query does that. your second query DOES NOT.
your second query returns the set of people who make as much as the highest paid person IN ANY DEPTNO. not just theirs -- ANY deptno.
(i'm trying to teach you how to read the sql so you do not make errors like this, that is the "agonization" here)....
Nice
R.Chacravarthi, September 08, 2003 - 9:12 am UTC
Dear sir,
Well, and I wish the same for you.
1) I tried to write a query which groups employees under their managers, but it always returned errors. I tried a PL/SQL block which returned the result set I was looking for. Can this anonymous block be transformed into a SQL query? Could you please help? Please don't forget to specify the different forms of the same query.
Thanks in advance.
SQL> begin
2 for x in (select distinct mgr from emp order by mgr) loop
3 dbms_output.put_line('Manager:'||x.mgr);
4 for y in (select ename from emp where mgr = x.mgr) loop
5 dbms_output.put_line(y.ename);
6 end loop;
7 dbms_output.put_line(' ');
8 end loop;
9 end;
10 /
Manager:7566
SCOTT
FORD
Manager:7698
ALLEN
MARTIN
TURNER
JAMES
Manager:7782
MILLER
Manager:7788
ADAMS
Manager:7839
JONES
BLAKE
CLARK
JOE
Manager:7902
SMITH
Manager:
PL/SQL procedure successfully completed.
SQL> spool off
2) Is there a query format like
select 'select * from a' from b;
If yes, please provide some examples.
Thanks for your previous reply.
September 08, 2003 - 12:33 pm UTC
1 select mgr, cursor( select ename from emp emp2 where emp2.mgr = emp.mgr )
2* from ( select distinct mgr from emp ) emp
ops$tkyte@ORA920LAP> /
MGR CURSOR(SELECTENAMEFR
---------- --------------------
7566 CURSOR STATEMENT : 2
CURSOR STATEMENT : 2
ENAME
----------
SCOTT
FORD
7698 CURSOR STATEMENT : 2
CURSOR STATEMENT : 2
ENAME
----------
ALLEN
WARD
MARTIN
TURNER
JAMES
is one way.
ops$tkyte@ORA920LAP> break on mgr skip 1
ops$tkyte@ORA920LAP> select mgr, ename from emp order by mgr, ename;
MGR ENAME
---------- ----------
7566 FORD
SCOTT
7698 ALLEN
JAMES
MARTIN
TURNER
WARD
7782 MILLER
7788 ADAMS
7839 BLAKE
CLARK
JONES
7902 SMITH
KING
14 rows selected.
is perhaps another.
Sql Query
archary, September 09, 2003 - 11:14 am UTC
Hi Tom,
Need your help.
For example, I have tables and data as below.
I have the same empname in all deptnos, with differences in sal.
If there is no employee in a deptno, I still have to display the empname with sal = 0.
Please help. Thanks in advance.
SQL> desc tt
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 ENAME                                              VARCHAR2(10)
 SAL                                                NUMBER
 DNO                                                NUMBER
SQL>
SQL>
SQL> select * from tt;
ENAME SAL DNO
---------- ---------- ----------
A 100 10
A 200 10
A 300 20
Elapsed: 00:00:00.66
SQL> desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 DNO                                                NUMBER
SQL> select * from t;
DNO
----------
10
20
30
I'd like to display output as below.
ENAME SAL DNO
---------- ---------- ----------
A 100 10
A 200 10
A 300 20
A 0 30
September 09, 2003 - 11:56 am UTC
where the heck did the ename come from?????? if there is no emp in dno=30, how do they have a name????
in general, this is an outer join
select ename, nvl(sal,0), dno
from tt, t
where tt.dno(+) = t.dno;
Sql Query
archary, September 10, 2003 - 3:56 am UTC
Hi,
Thank you very much.
My intention is:
In the year 2003 I have fixed customers for 12 months.
So I have year and month info in one table.
My scenario is that I haven't made any sales to my fixed customers in some months of the year.
This info I am not logging in my transaction table.
That means I don't have any info about customers for a few months.
So in this case I need to generate a report for 12 months. If there are no sales in a month, I still have to show the customer name with a zero amount.
SQL> desc t_cust
Name Null? Type
----------------------------------------- -------- -------
CNO NUMBER
AMT NUMBER
MTH NUMBER
YR NUMBER
SQL> select * from t_cust;
CNO AMT MTH YR
---------- ---------- ---------- ----------
1001 100 1 2003
1001 200 2 2003
1001 4 4 2003
1001 500 2 2003
1002 2000 1 2003
1002 3000 2 2003
1002 2000 3 2003
1002 3000 3 2003
1002 6000 6 2003
SQL> desc myr
Name Null? Type
----------------------------------------- -------- ----------
MTH NUMBER
YR NUMBER
SQL> select * from myr;
MTH YR
---------- ----------
1 2003
2 2003
3 2003
4 2003
5 2003
6 2003
7 2003
8 2003
9 2003
10 2003
11 2003
12 2003
Please provide a solution. Thanks in advance.
I am able to get it for one customer:
select mth, yr, cno, sum(amt)
from
(
select a.mth mth, a.yr yr,
decode(a.cnt,0,(select unique cno from t_cust where cno = 1001),cno) cno, nvl(a.amt,0) amt, a.c
from
(
select cno cno, myr.mth mth, myr.yr yr, amt amt, count(cno) over ( partition by myr.mth ) cnt
from t_cust, myr
where t_cust.mth(+) = myr.mth
and t_cust.yr(+) = myr.yr
and t_cust.cno(+) = 1001
) a
)
group by mth, yr, cno
September 10, 2003 - 7:33 pm UTC
you need to join your list of distinct customer ids to the year/month table -- and then outer join to that.
outer join to:
select year, month, cust_id
from year_month_table, (select distinct cust_id from customer_table)
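Against the tables posted above, that approach might look like this sketch (assuming 9i-style ANSI join syntax; untested):

```sql
select c.cno, m.mth, m.yr, nvl( sum(t.amt), 0 ) amt
  from (select distinct cno from t_cust) c      -- list of customers
 cross join myr m                               -- every month for each customer
  left outer join t_cust t                      -- then outer join the sales
    on t.cno = c.cno
   and t.mth = m.mth
   and t.yr  = m.yr
 group by c.cno, m.mth, m.yr
 order by c.cno, m.yr, m.mth;
```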
query
A reader, October 17, 2003 - 4:43 am UTC
Hi Tom
I have data in two tables as given below
Table..A
colA1 colA2
1 null
2 null
3 null
4 11
5 null
6 null
Table..B
colB1 colA2
1 11
2 12
3 13
I want to update column colA2 of Table..A with colA2 of Table..B
in a single statement. The final result of Table..A is like this:
Table..A
colA1 colA2
1 null
2 12
3 null
4 11
5 13
6 null
Here the value 11 already exists in Table..A, so it need not be updated;
the rest (12, 13) from Table..B should be updated into Table..A in
any order.
Can you please give me the solution?
October 17, 2003 - 10:05 am UTC
search for
update join
on this site
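The pattern that search turns up looks roughly like this sketch. It assumes a join key linking the two tables (here colA1 = colB1, which is a guess -- the example does not show how rows pair up, and no single declarative UPDATE can match rows "in any order"); tableB's key column must be unique for the join view to be updatable:

```sql
update ( select a.colA2 a_val, b.colA2 b_val
           from tableA a, tableB b
          where a.colA1 = b.colB1 )   -- assumed join key
   set a_val = b_val
 where a_val is null;
```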
Sql Query to cut string
ARC, October 23, 2003 - 8:28 am UTC
Hi Tom,
Need your help to write a query to cut a string into parts.
I have data as below.
Table : TB_ODW_GO
REF_NBR LOG_DESC
----------------------------------------------------------
0234567 -111: ORDER:10901 NOT FOUND-222 ZIPCODE: NOTFOUND
1274033 -111: ORDER:12343 NOT FOUND
I want to display output as below.
REF_NBR ERR_CD ERR_MSG
-----------------------------------------------------------
0234567 -111 ORDER:10901 NOT FOUND
0234567 -222 ZIPCODE: NOTFOUND
1274033 -111 ORDER:12343 NOT FOUND
Thanking you in advance.
ARC.
October 23, 2003 - 12:52 pm UTC
you want a single row as two? is that correct? is it "two" or "n"?
Sql Query
ARC, October 28, 2003 - 12:47 am UTC
Hi Tom,
I need it for 'n' lines.
Thanks
ARC
October 28, 2003 - 7:55 am UTC
you'll be doing it procedurally -- in 9i, you can write a pipelined function to do it.
something like this. I made a simplifying assumption that the string was formatted as
-<ERROR_CODE>: <MESSAGE>-<ERROR_CODE>: <MESSAGE>
that is, - precedes the error code, and the error code is terminated by :. If that is not your case, you need to write the code to parse your string:
ops$tkyte@ORA920PC> create table t ( ref_nbr number, log_desc varchar2(80) );
Table created.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into t values ( 0234567, '-111: ORDER:10901 NOT FOUND-222: ZIPCODE: NOTFOUND' );
1 row created.
ops$tkyte@ORA920PC> insert into t values ( 1274033, '-111: ORDER:12343 NOT FOUND' );
1 row created.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace type myScalarType as object
2 ( ref_number number, err_cd number, err_msg varchar2(40) )
3 /
Type created.
ops$tkyte@ORA920PC> create or replace type myTableType as table of myScalarType
2 /
Type created.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace function parse( p_cur in sys_refcursor )
2 return myTableType
3 PIPELINED
4 as
5 type array is table of t%rowtype index by binary_integer;
6 l_data array;
7 l_rec myScalarType := myScalarType(null,null,null);
8 l_tmp long;
9 n number;
10 begin
11 loop
12 fetch p_cur bulk collect into l_data limit 100;
13 for i in 1 .. l_data.count
14 loop
15 l_rec.ref_number := l_data(i).ref_nbr;
16 l_data(i).log_desc := l_data(i).log_desc || '-';
17
18 loop
19 n := instr( l_data(i).log_desc, '-', 2 );
20 exit when (nvl(n,0)=0);
21
22 l_tmp := substr( l_data(i).log_desc, 1, n-1);
23 l_data(i).log_desc := substr( l_data(i).log_desc, n );
24
25 n := instr( l_tmp, ':' );
26 l_rec.err_cd := substr(l_tmp, 1, n-1 );
27 l_rec.err_msg := trim( substr( l_tmp, n+1 ) );
28 pipe row(l_rec);
29 end loop;
30 end loop;
31 exit when p_cur%notfound;
32 end loop;
33 close p_cur;
34 return;
35 end;
36 /
Function created.
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select * from TABLE( parse( cursor(select * from t) ) );
REF_NUMBER ERR_CD ERR_MSG
---------- ---------- ----------------------------------------
234567 -111 ORDER:10901 NOT FOUND
234567 -222 ZIPCODE: NOTFOUND
1274033 -111 ORDER:12343 NOT FOUND
Sql Query cut string
ARC, October 30, 2003 - 12:56 am UTC
Tom,
Thanks for the solution. It is really very good, and I also learned about "pipelined" functions from it, which I was not aware of before.
I achieved the same thing using PL/SQL records and tables in version 8.1.7.
Thanks again.
ARC
Sort Explicit Static Cursor at Runtime
robert, February 06, 2004 - 7:13 pm UTC
Tom,
Is this possible?
I want to give web app users a sorting option.
No error is raised below, but the ORDER BY didn't happen.
------------------------
DECLARE
l_sort INTEGER := 2;
CURSOR empcur
IS
SELECT empno, ename FROM emp ORDER BY l_sort ;
CURSOR empcur2(psort IN INTEGER)
IS
SELECT empno, ename FROM emp ORDER BY psort ;
BEGIN
FOR i IN empcur
LOOP
dbms_output.put_line(i.ename);
END LOOP;
dbms_output.put_line('-----------------');
FOR i IN empcur2(l_sort)
LOOP
dbms_output.put_line(i.ename);
END LOOP;
END ;
thanks
February 07, 2004 - 2:22 pm UTC
that is a lot like:
select * from emp order by '2';
it ordered by a CONSTANT, the constant value '2'.
You can either:
a) use dynamic sql
b) use case or decode
eg:
select empno, ename from emp
order by decode( psort, 1, to_char(empno,'00000009'), 2, ename );
you have to to_char numbers and dates since you'll be ordering by a character string -- use YYYYMMDDHH24MISS on dates. If your numbers go negative -- you'll have to deal with that somehow as well.
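An alternative that avoids the to_char formatting issues entirely is one CASE expression per sort option, so each branch keeps its native datatype ordering (a sketch):

```sql
select empno, ename
  from emp
 order by case when :psort = 1 then empno end,
          case when :psort = 2 then ename end;
```

When :psort is 1, the second expression is NULL for every row and contributes nothing to the sort; numbers and dates then sort natively, with no conversion tricks.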
Nice
Gerhard, February 09, 2004 - 10:03 am UTC
Dear Sir,
Well, and I wish the same for you.
Is there any other way to put this query?
sql> select ename,nvl(comm,'Commission is null') from emp
or
sql> select ename,decode(comm,null,'Commission is null',comm) from emp
It is just a matter of eagerness and curiosity. I think
*case* is possible, but is there any other way?
Please do reply.
Bye!
February 09, 2004 - 10:10 am UTC
there are thousands of ways.
select ename, 'commission is null' from emp where comm is null
union all
select ename, comm from emp where comm is not null;
is yet another, case, nvl2, user defined functions -- would all work as well.
NVL() however is the "correct answer" -- but beware -- comm is a number and 'Commission is null' is not a number -- beware IMPLICIT conversions. Best to use to_char() on comm to make it a string (since 'Commission is null' cannot be made into a number).
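That is, something along these lines:

```sql
select ename, nvl( to_char(comm), 'Commission is null' ) from emp;
```

Without the to_char, Oracle would try to convert 'Commission is null' to a NUMBER (to match comm's datatype) and fail with an invalid-number error.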
How to write this sql
Bole Taro Dol Baje, March 10, 2004 - 1:47 pm UTC
CREATE TABLE T ( GP VARCHAR2(10), GPP VARCHAR2(20))
/
Table created.
ALTER TABLE T ADD CONSTRAINT PK1 PRIMARY KEY ( GP, GPP);
INSERT INTO T VALUES('A', 'A');
INSERT INTO T VALUES('A', 'B');
INSERT INTO T VALUES('Z', 'A');
I want to find all those GP values which have a common GPP value. How can I do it, and what are
the multiple ways I can do it? Please help.
March 10, 2004 - 3:39 pm UTC
ops$tkyte@ORA9IR2> select *
2 from (
3 select gp, gpp, count(*) over (partition by gpp) cnt
4 from t
5 )
6 where cnt > 1;
GP GPP CNT
---------- -------------------- ----------
A A 2
Z A 2
first one that popped into my head...
Is this OK ?
A reader, March 10, 2004 - 5:39 pm UTC
select a.* from t a , t b where a.gpp = b.gpp and a.rowid <> b.rowid
GP GPP
---------- --------------------
Z A
A A
2 rows selected
March 10, 2004 - 6:37 pm UTC
there are potentially an infinite number of ways to do this, yes.
That one would overdo it in the event there are three records:
ops$tkyte@ORA9IR2> INSERT INTO T VALUES('x', 'A');
1 row created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
2 from (
3 select gp, gpp, count(*) over (partition by gpp) cnt
4 from t
5 )
6 where cnt > 1;
GP GPP CNT
---------- -------------------- ----------
A A 3
Z A 3
x A 3
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select a.* from t a , t b where a.gpp = b.gpp and a.rowid <> b.rowid
2 /
GP GPP
---------- --------------------
Z A
x A
A A
x A
A A
Z A
6 rows selected.
Which of the above two query will be more performant ?
A reader, March 10, 2004 - 5:49 pm UTC
I think yours may be.
March 10, 2004 - 6:39 pm UTC
they give different answers :)
not fair to compare.
Ah missed the distinct
A reader, March 10, 2004 - 9:07 pm UTC
select distinct a.* from t a , t b where a.gpp = b.gpp and
a.rowid <> b.rowid;
Now they give identical results. Do you agree?
Will the analytic solution outperform the distinct self-join solution?
Thanks
March 10, 2004 - 9:13 pm UTC
i would prefer analytics over a partial cartesian product
experts' suggestions
A reader, March 11, 2004 - 12:35 am UTC
Hi Tom & all learners,
I am a beginner with Oracle.
What would be the best suggestion to get a perfect hand at Oracle, other than time and practice?
I would really appreciate suggestions/book names/sites from all learners/experts by email.
THANKS in advance to all
March 11, 2004 - 8:28 am UTC
"other than time and practice" -- well, until we as a human race develop the ability to imprint knowledge directly on the brain.......
Read the concepts guide (available on otn for all releases) from cover to cover and if you remember 10% of it, you'll already know 90% more than most people do about Oracle.
A reader, March 11, 2004 - 12:40 am UTC
learnz2004@yahoo.com
return true only if all deptnos exists in child table ver.9.2
A reader, March 12, 2004 - 5:50 pm UTC
The famous DEPT and EMP tables.
How to write an SQL which returns true only if all depts have employees.
My attempt works, but I would like Tom's brilliant and simple solution and other possible solutions.
1)
select count(*) from dual where exists(
select null from dept having count(*) = (select count(distinct deptno) from emp));
2)select count(*) from emp e,dept d where d.deptno = e.deptno(+) and e.deptno is null
The basic emp table has no emps for deptno = 40, once one is added the above query returns 1 else 0.
Can you please give at least 3 or 4 or more alternatives?
Thanx much
March 12, 2004 - 8:00 pm UTC
quiz time?
select decode( count(*), 0, 1, 0 )
from dept
where deptno not in ( select deptno from emp );
add where rownum = 1 (and you'll have 2 more, but that's cheating perhaps...)
select decode( count(*), 0, 1, 0 )
from dept
where NOT EXISTS ( select null from emp where emp.deptno = dept.deptno )
and rownum = 1;
select decode( count(*), 0, 1, 0 )
from ( select deptno from dept
MINUS
select deptno from emp );
You are great
A reader, March 12, 2004 - 8:13 pm UTC
Thanks for the quiz answers. You have a lot of gray cells.
Complicated query??
Thiru, May 04, 2004 - 4:12 pm UTC
Tom,
getting stuck with this query. Need your help.
create table t1 (c1 char(2),c2 number,c3 number,c4 date);
create table t2 (c1 char(2),c2 number,c3 number,c4 date);
insert into t1 values('a',100,0,'03-MAY-04');
insert into t1 values('b',100,0,'03-MAY-04');
insert into t1 values('a',0,100,'03-MAY-04');
insert into t1 values('b',0,100,'03-MAY-04');
insert into t1 values('a',150,0,'05-MAY-04');
insert into t1 values('a',0,175,'05-MAY-04');
insert into t2 values('a',100,100,'03-MAY-04');
insert into t2 values('b',100,100,'03-MAY-04');
insert into t2 values('a',150,175,'05-MAY-04');
table: t1
c1 c2 c3 C4
a 100 0 03-MAY-04
b 100 0 03-MAY-04
a 0 100 03-MAY-04
b 0 100 03-MAY-04
a 150 0 05-MAY-04
a 0 175 05-MAY-04
table t2:
c1 c2 c3 c4
a 100 100 03-MAY-04
b 100 100 03-MAY-04
a 150 175 05-MAY-04
How do I compare these two tables? The data in the two tables
is actually the same, just spread differently. How do I write
a query to prove that they are the same? Also, how do I get the
table t1 data to look like the following, so that I can use the MINUS operator against
the two tables?
table t1:
c1 c2 c3 c4
a 100 100 03-MAY-04
b 100 100 03-MAY-04
a 150 175 05-MAY-04
May 04, 2004 - 7:36 pm UTC
what do you mean "it is the same but the spread"?
You seem to be assigning some relevance to the order of rows in a table, but of course without an order by -- there is no such concept as "order of rows"
I see nothing remotely similar between those two tables.
Think he means...
Gary, May 05, 2004 - 12:11 am UTC
I think he is looking for a simple :
select c1, sum(c2), sum(c3), c4
from t1
group by c1,c4
minus
select c1,c2,c3,c4
from t2
Thiru, May 05, 2004 - 9:19 am UTC
I think I used the incorrect word "spread". It is just that
the second table holds the same values as the first table but in
a different way. Also I am not looking for the SUM function there as suggested. My aim is to find out if there is any difference between the two tables. A plain MINUS will show differences, because the data in table1 exists in two different rows while table2 has the same data in one row.
For eg: table1:
a 100 0 03-MAY-04
a 0 100 03-MAY-04
Table2:
a 100 100 03-MAY-04
In this case, the data is the same in both the tables, but how do I
do that through a query? If I can get the data in the table1 to look
like
a 100 100 03-MAY-04
then I can use the MINUS function to get the differences in the entire
table. Hope I am clear.
Thanks for all the help.
May 05, 2004 - 10:03 am UTC
if the answer wasn't the one given by gary above, then I'd be at a loss as to how to know those two rows should be combined.
so, if gary's answer is not accurate, you'll need to explain in lots more detail...
thiru, May 05, 2004 - 10:44 am UTC
Excuse me for not making things clear for you.
This is what the scenario looks like:
CREATE OR REPLACE TYPE my_obj AS OBJECT (
c1 VARCHAR2(6),
c2 VARCHAR2(6),
c3 number,
c4 DATE
);
/
CREATE OR REPLACE TYPE my_list AS TABLE OF my_OBJ;
/
My_pkg has the undernoted declaration:
TYPE my_list_pkg IS TABLE OF my_OBJ INDEX BY PLS_INTEGER;
My_pkg_body has ---the actuals are lot bigger.The relevant part shown
Procedure get_list(c1 in varchar2,c2 in varchar2,c3 in number,c4 in date, p_out out my_list)
as
l_listA My_pkg.my_list_pkg;
l_listB My_pkg.my_list_pkg;
----
I have two queries, the first doing a bulk collect into l_listA and second a bulk
collect into l_listB. And then doing the following :
size := l_listA.count;
p_out := my_list();
for cnt in 1..l_listB.count loop
l_listA(size + cnt) := l_listB(cnt);
end loop;
p_out.EXTEND(l_listA.COUNT);
for cnt in 1..l_listA.count loop
p_out(cnt) := l_listA(cnt);
end loop;
RETURN ;
When this list is inserted into table1 it goes in this fashion ( in two rows):
c1 c2 c3 c4
a 100 0 03-MAY-04
a 0 150 03-MAY-04
There is another table, table2, that has the same data (in one row) but in this fashion:
c1 c2 c3 c4
a 100 150 03-MAY-04
The relationship is: where table1.c1=table2.c1 for all columns
My point is to compare the two tables to see if there is any difference. With this data,
the result should be that there is no difference. How should the query be written to
get the difference between the two tables?
May 05, 2004 - 2:34 pm UTC
i see no way, other than what gary above said -- to group by c1 and c4, to accomplish your goal
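To check both directions at once, Gary's aggregate can be run as a symmetric difference -- a sketch using the same grouping:

```sql
( select c1, sum(c2) c2, sum(c3) c3, c4 from t1 group by c1, c4
  minus
  select c1, c2, c3, c4 from t2 )
union all
( select c1, c2, c3, c4 from t2
  minus
  select c1, sum(c2), sum(c3), c4 from t1 group by c1, c4 );
```

An empty result means the two tables agree under that grouping.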
A reader, May 06, 2004 - 9:40 am UTC
Yes Tom, I got it with Gary's query. Thanks again.
Another one!!!!!!!!!!!!!!!!
A reader, May 06, 2004 - 12:45 pm UTC
hi tom,
i am executing this query (first with all_rows optimizer)
SELECT * FROM( SELECT R.RECNUMBER, COUNT(R.RECNUMBER) APUESTAS
FROM STKTRANSACTION T, STKRECORD R
WHERE T.TRNSORTEODATE BETWEEN TO_DATE('01-01-2004','DD-MM-YYYY')
AND TO_DATE('31-01-2004','DD-MM-YYYY')
AND R.RECTRNID = T.TRNID
GROUP BY (R.RECNUMBER) ORDER BY 2 DESC)
WHERE ROWNUM <6
Plan:
Elapsed: 00:00:21.00
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16195 Card=5 Bytes
=227420)
1 0 COUNT (STOPKEY)
2 1 VIEW (Cost=16195 Card=11371 Bytes=227420)
3 2 SORT (ORDER BY STOPKEY) (Cost=16195 Card=11371 Bytes=2
84275)
4 3 SORT (GROUP BY) (Cost=16195 Card=11371 Bytes=284275)
5 4 HASH JOIN (Cost=14991 Card=223186 Bytes=5579650)
6 5 INDEX (FAST FULL SCAN) OF 'INDX_CMPID_SRTDATE' (
NON-UNIQUE) (Cost=2918 Card=70301 Bytes=984214)
7 5 INDEX (FAST FULL SCAN) OF 'INDX_FK_RECTRNID' (NO
N-UNIQUE) (Cost=6801 Card=15004857 Bytes=165053427)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1132328 consistent gets
3254 physical reads
0 redo size
522 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
5 rows processed
Now with the First_Rows Hint:
1 SELECT * FROM( SELECT /*+ FIRST_ROWS */R.RECNUMBER, COUNT(R.RECNUMBER) APUESTAS
2 FROM STKTRANSACTION T, STKRECORD R
3 WHERE T.TRNSORTEODATE BETWEEN TO_DATE('01-01-2004','DD-MM-YYYY')
4 AND TO_DATE('31-01-2004','DD-MM-YYYY')
5 AND R.RECTRNID = T.TRNID
6 GROUP BY (R.RECNUMBER) ORDER BY 2 DESC)
7* WHERE ROWNUM <6
SQL> /
RECNUMBER APUESTAS
---------- ----------
1362 10896
327 7966
315 6081
362 5282
321 5054
Elapsed: 00:00:20.05
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=16195 Card
=5 Bytes=227420)
1 0 COUNT (STOPKEY)
2 1 VIEW (Cost=16195 Card=11371 Bytes=227420)
3 2 SORT (ORDER BY STOPKEY) (Cost=16195 Card=11371 Bytes=2
84275)
4 3 SORT (GROUP BY) (Cost=16195 Card=11371 Bytes=284275)
5 4 HASH JOIN (Cost=14991 Card=223186 Bytes=5579650)
6 5 INDEX (FAST FULL SCAN) OF 'INDX_CMPID_SRTDATE' (
NON-UNIQUE) (Cost=2918 Card=70301 Bytes=984214)
7 5 INDEX (FAST FULL SCAN) OF 'INDX_FK_RECTRNID' (NO
N-UNIQUE) (Cost=6801 Card=15004857 Bytes=165053427)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1132335 consistent gets
0 physical reads
0 redo size
522 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
5 rows processed
As you can see, the consistent gets are very high in both cases. But when using FIRST_ROWS the physical reads come down to 0. So are the consistent gets high because of the count? The stktransaction table has 4752646 records and the stkrecord table has twice that. Is there a better way to do this query? We need to generate the Top 5 only.
Any other data required, i will be glad to provide. Waiting for your response.
May 06, 2004 - 4:53 pm UTC
more likely because the first query read the data into the cache and the second query ran it out of the cache (if not, use tkprof; it'll show us where the PIOs are occurring, and we'd need that to say anything more)
not knowing what indexes are on what tables.... or how many records would actually be returned by the variable predicates (eg: how many rows are actually in that "between" clause there.....
If this were a query I needed to run lots and lots and lots -- i might (MIGHT, really -- only MIGHT) look to changing the model a tad so that we aggregate on the insert/update of rectrnid/delete on R (maintaining a running total).
If this isn't something I needed to run lots and lots -- guess I'd be going with what you have (unless there is some relationship between rectrnid and trnsorteodate -- eg: bigger rectrnid implies bigger date or something like that).
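If maintaining a running total sounds attractive, a fast-refreshable materialized view is one way to have the database do the bookkeeping. This is only a sketch -- it assumes you can create a materialized view log on STKRECORD, and the real view would still need the date filter worked in somehow:

```sql
create materialized view log on stkrecord
  with rowid (recnumber) including new values;

create materialized view rec_counts
  refresh fast on commit
  as
  select recnumber, count(*) cnt
    from stkrecord
   group by recnumber;
```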
Multiple sub query
Thiru, June 17, 2004 - 4:04 pm UTC
Tom
Getting lost with this update.
table a:
tid cash key
100 1999.00 U
100 -99.00 G
table b: ( id is unique)
id tid
200 100
table c:
id cash key
200 9.00 U
200 -1.00 G
I need to update table c with cash values from table a, using table b for getting
the id value.
Result expected:
table c:
id cash key
200 1999.00 U
200 -99.00 G
My trial (looks horrible!):
update c set cash = ( select cash from a where a.tid =(select b.tid from b,c
where b.id = c.id and a.key=c.key) ;
June 17, 2004 - 6:29 pm UTC
no create tables.....
no insert into's.....
hmmmmmm
Excuse me... here it is.
A reader, June 17, 2004 - 11:04 pm UTC
table a:
tid cash key
100 1999.00 U
100 -99.00 G
create table a (tid number,cash number(10,2),key char(1));
insert into a values(100,1999.00,'U');
insert into a values(100,-99.00,'G');
commit;
table b: ( id is unique)
id tid
200 100
Create table b (id number unique,tid number);
insert into b values(200,100);
commit;
table c:
id cash key
200 9.00 U
200 -1.00 G
create table c(id number,cash number(10,2),key char(1));
insert into c values(200,9.00,'U');
insert into c values(200,-1.00,'G');
commit;
I need to update table c with cash values from table a, using table b to get the id value.
Result expected:
table c:
id cash key
200 1999.00 U
200 -99.00 G
June 18, 2004 - 10:31 am UTC
sorry if that annoyed you, I'm just trying to figure out how to make this text more clear -- it doesn't seem to be working. It is the text you read before submitting a comment:
<b>
Please limit this to a comment or question relevant to the other text on this page. Please do not try to start a new thread of discussion here (I will ignore any followups that are not related to the discussion on this page). Also if your followup includes an example you want me to look at, I'll need it to have a create table, insert into statements and such that I can easily cut and paste into sqlplus myself (like I give you) in order to play with. I spent too many hours turning something like:
I have a table like:
scott@ORA9IR2> desc dept
Name Null? Type
------------------------------------ -------- -------------------------
DEPTNO NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
with the data:
scott@ORA9IR2> select * from dept;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
I *need* to have
I have a table:
create table dept( deptno number(2) primary key, dname varchar2(14), loc varchar2(13) );
with this data:
insert into dept values ( 10, 'accounting', 'new york' );
....
and please -- NO tablespaces/storage clauses, etc. simple, concise examples!
</b>
And the problem was you gave me exactly what I said "don't do". If anyone has ideas on better wording -- please feel free to contribute.
I'm looking for that sort of stuff (create table, inserts) before I even read the question anymore.
ops$tkyte@ORA9IR2> select * from c;
ID CASH K
---------- ---------- -
200 9 U
200 -1 G
ops$tkyte@ORA9IR2> update c
2 set cash = ( select a.cash
3 from a,b
4 where b.id = c.id
5 and b.tid = a.tid
6 and a.key = c.key )
7 /
2 rows updated.
ops$tkyte@ORA9IR2> select * from c;
ID CASH K
---------- ---------- -
200 1999 U
200 -99 G
ops$tkyte@ORA9IR2>
great
sapna chhetri, June 18, 2004 - 6:42 am UTC
the queries are resolved greatly, but I don't have much of an idea about PL/SQL. Kindly suggest how to learn it in a short period.
sapna
June 18, 2004 - 10:50 am UTC
read?
practice?
just like learning how to write C, or Java, or VB, or SQL, or <technology X>
SQL Query
Sreedhar, June 22, 2004 - 5:52 am UTC
Tom,
I have the script below, which gives the values with the different datebands entered in the master table. I am looking for output that can be produced with a SQL query alone, instead of going for a PL/SQL program.
Please advise.
-----------------------------------------------------------
SCRIPT :
prompt Created on 22 June 2004 by Sreedhar
set feedback off
set define off
create table CRP_T_PRICE
(
DATEBAND_FROM DATE,
DATEBAND_TO DATE,
PRICE NUMBER
);
prompt Loading CRP_T_PRICE Table...
insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('01-03-2004', 'dd-mm-yyyy'), to_date('31-03-2004', 'dd-mm-yyyy'), 0);
insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('01-04-2004', 'dd-mm-yyyy'), to_date('11-05-2004', 'dd-mm-yyyy'), 0);
insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('12-05-2004', 'dd-mm-yyyy'), to_date('11-06-2004', 'dd-mm-yyyy'), 6522);
insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('12-06-2004', 'dd-mm-yyyy'), to_date('19-06-2004', 'dd-mm-yyyy'), 6522);
insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('20-06-2004', 'dd-mm-yyyy'), to_date('31-07-2004', 'dd-mm-yyyy'), 6885);
insert into CRP_T_PRICE (DATEBAND_FROM, DATEBAND_TO, PRICE)
values (to_date('01-08-2004', 'dd-mm-yyyy'), to_date('31-10-2004', 'dd-mm-yyyy'), 6522);
Commit;
set feedback on
set define on
prompt Done.
-----------------------------------------------------------
SQL> SELECT * FROM CRP_T_PRICE;
DATEBAND_FROM DATEBAND_TO PRICE
01/03/2004 31/03/2004 0
01/04/2004 11/05/2004 0
12/05/2004 11/06/2004 6522
12/06/2004 19/06/2004 6522
20/06/2004 31/07/2004 6885
01/08/2004 31/10/2004 6522
OUTPUT I AM LOOKING FOR IS ............
DATEBAND_FROM DATEBAND_TO PRICE
01/03/2004 11/05/2004 0
12/05/2004 19/06/2004 6522
20/06/2004 31/07/2004 6885
01/08/2004 31/10/2004 6522
June 22, 2004 - 8:48 am UTC
ops$tkyte@ORA9IR2> select min(dateband_from), max(dateband_to), price
2 from (
3 select dateband_from, dateband_to, price,
4 max(rn) over (order by dateband_from) grp
5 from (
6 select dateband_from, dateband_to, price,
7 case when lag(price) over (order by dateband_from) <> price or
8 row_number() over (order by dateband_from) = 1
9 then row_number() over (order by dateband_from)
10 end rn
11 from crp_t_price
12 )
13 )
14 group by grp, price
15 order by 1
16 /
MIN(DATEB MAX(DATEB PRICE
--------- --------- ----------
01-MAR-04 11-MAY-04 0
12-MAY-04 19-JUN-04 6522
20-JUN-04 31-JUL-04 6885
01-AUG-04 31-OCT-04 6522
ops$tkyte@ORA9IR2>
see
https://asktom.oracle.com/Misc/oramag/on-format-negation-and-sliding.html "analytics to the rescue"
this is a technique i call "carry forward" -- the max() trick. we simply mark the beginning of whatever defines our "group" with row_number() (and if we need to carry forward something else -- row_number() || something_else) and use max to carry it forward.
suggest you run the query bit by bit to see what it does (run the inline views one by one, building on each one -- that is how I develop them).
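For anyone who wants to replay the "carry forward" trick step by step without an Oracle instance handy, here is a minimal sketch using Python's bundled sqlite3 (window functions require the bundled SQLite to be 3.25 or newer). The table and query mirror the CRP_T_PRICE example above, with ISO date strings standing in for Oracle DATEs; this is an illustration of the technique, not the original environment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table crp_t_price (dateband_from text, dateband_to text, price int);
insert into crp_t_price values
  ('2004-03-01','2004-03-31',0),
  ('2004-04-01','2004-05-11',0),
  ('2004-05-12','2004-06-11',6522),
  ('2004-06-12','2004-06-19',6522),
  ('2004-06-20','2004-07-31',6885),
  ('2004-08-01','2004-10-31',6522);
""")

rows = conn.execute("""
select min(dateband_from), max(dateband_to), price
from (
  select dateband_from, dateband_to, price,
         max(rn) over (order by dateband_from) grp  -- carry the group marker forward
  from (
    select dateband_from, dateband_to, price,
           case when lag(price) over (order by dateband_from) <> price
                  or row_number() over (order by dateband_from) = 1
                then row_number() over (order by dateband_from)
           end rn                                   -- mark where each price group starts
    from crp_t_price
  )
)
group by grp, price
order by 1
""").fetchall()
for r in rows:
    print(r)
```

Running the innermost query alone shows rn populated only on the first row of each run of equal prices; the max() then smears that marker across the run, giving something to group by.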
very neat!
A reader, June 22, 2004 - 8:50 pm UTC
I have a quick question. Given the data below:
----
scott@ORA10G> drop table t1;
Table dropped.
scott@ORA10G> create table t1
2 (
3 x number,
4 y number,
5 z number
6 );
Table created.
scott@ORA10G>
scott@ORA10G> insert into t1 values( 1, 2, 3 );
1 row created.
scott@ORA10G> insert into t1 values( 2, 2, 10 );
1 row created.
scott@ORA10G> insert into t1 values( 3, 5, 5 );
1 row created.
scott@ORA10G> commit;
Commit complete.
scott@ORA10G>
scott@ORA10G> select * from t1;
X Y Z
---- ---- ----
1 2 3
2 2 10
3 5 5
------
I want to get all the column values along with average
of each column for all rows. Since the average
will be the same in this case in all rows, is
it possible to not repeat the average values?
Apart from the network traffic saved (say from a
JDBC client query), do you see any other pros/cons
with this approach versus writing 2 different queries
(one for the column and one for the average values)?
I used your "carry forward" trick for this (see below)
but you may have a simpler solution in mind:)
---
scott@ORA10G> column x format 999
scott@ORA10G> column y format 999
scott@ORA10G> column z format 999
scott@ORA10G> column avg_x format 999
scott@ORA10G> column avg_y format 999
scott@ORA10G> column avg_z format 999
scott@ORA10G>
scott@ORA10G> select x, y, z, decode( rn, 1, avg_x, null) avg_x,
2 decode( rn, 1, avg_y, null) avg_y,
3 decode( rn, 1, avg_z, null) avg_z
4 from
5 (
6 select x, y, z, avg_x, avg_y, avg_z,
7 case when avg_x != lag_avg_x or
8 row_num = 1 then
9 row_number() over (order by x, y, z)
10 end rn
11 from
12 (
13 select x, y, z, avg_x, avg_y, avg_z,
14 lag( avg_x) over(order by x, y, z) lag_avg_x,
15 lag( avg_y) over(order by x, y, z) lag_avg_y,
16 lag( avg_z) over(order by x, y, z) lag_avg_z,
17 row_number() over ( order by x, y, z ) row_num
18 from
19 (
20 select x, y, z,
21 avg(x) over() avg_x,
22 avg(y) over() avg_y,
23 avg(z) over() avg_z
24 from t1
25 )
26 ))
27 ;
X Y Z AVG_X AVG_Y AVG_Z
---- ---- ---- ----- ----- -----
1 2 3 2 3 6
2 2 10
3 5 5
June 22, 2004 - 10:35 pm UTC
ops$tkyte@ORA10G> select avg(x), avg(y), avg(z),
2 decode( grouping(x), 1, '<<<== avg' )
3 from t1
4 group by grouping sets((x,y,z),())
5 /
AVG(X) AVG(Y) AVG(Z) DECODE(GR
---------- ---------- ---------- ---------
1 2 3
2 2 10
3 5 5
2 3 6 <<<== avg
thanx!!
A reader, June 23, 2004 - 9:54 am UTC
something new to learn!:) yet again!:)
grouping set
A reader, June 23, 2004 - 10:34 am UTC
Hi tom
in the above select what does
"group by grouping sets((x,y,z),())"
mean? In particular, what does the "()" imply?
I notice that if you remove it we dont get the
last row (the one with the average)?
Thanx!
June 23, 2004 - 11:15 am UTC
getting the average of x,y,z by x,y,z is the meaning of (x,y,z) (eg: no grouping really, the "details")
getting the average of x,y,z by "nothing" is the meaning of () (eg: like no group by at all).
scott@ORA9IR2> select avg(sal) from emp group by sal;
AVG(SAL)
----------
4050
4809.38
5568.75
6328.13
6581.25
7593.75
8100
12403.13
14428.13
15060.95
15187.5
25312.5
12 rows selected.
scott@ORA9IR2> select avg(sal) from emp /* group by 'nothing' */;
AVG(SAL)
----------
10495.65
it just did both of those.
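SQLite has no GROUPING SETS, which makes it a handy way to illustrate exactly what is described above: grouping sets((x,y,z),()) behaves like a UNION ALL of the two separate group-bys. A sketch (assuming Python's bundled sqlite3; the t1 data is from the earlier example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table t1 (x int, y int, z int);
insert into t1 values (1,2,3),(2,2,10),(3,5,5);
""")

# grouping sets((x,y,z), ()) == the union of the two group-bys:
rows = conn.execute("""
select avg(x), avg(y), avg(z), '' tag from t1 group by x, y, z  -- the (x,y,z) set: the details
union all
select avg(x), avg(y), avg(z), '<<<== avg' from t1              -- the () set: no group by at all
""").fetchall()
for r in rows:
    print(r)
```

The Oracle GROUPING SETS version is preferable in practice since it makes a single pass over the data instead of two.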
ok
A reader, June 23, 2004 - 10:36 am UTC
I guess "group by grouping sets((x,y,z),())"
means unioning the result of
group by x, y, z with that obtained by no grouping at all.
Is that correct?
thanx!
June 23, 2004 - 11:20 am UTC
see above
thanx!
A reader, June 23, 2004 - 11:58 am UTC
that makes it clear!
another similar question
A reader, June 23, 2004 - 7:56 pm UTC
hi tom
another question with a suggested solution - let me know
if you have an alternative solution.
same schema as above -this time I want to calculate
the result of a function that takes the minimum of
all x, y, z values and max of all x, y, z values and
then select this result along with all x, y, z values.
Following is my code with the schema and a solution..
thanx!
---
scott@ORA10G> set echo on
scott@ORA10G> set head on
scott@ORA10G> drop table t1;
Table dropped.
scott@ORA10G> create table t1
2 (
3 x number,
4 y number,
5 z number
6 );
Table created.
scott@ORA10G>
scott@ORA10G> insert into t1 values( 1, 2, 3 );
1 row created.
scott@ORA10G> insert into t1 values( 2, 2, 10 );
1 row created.
scott@ORA10G> insert into t1 values( 2, 2, 9 );
1 row created.
scott@ORA10G> insert into t1 values( 3, 5, 5 );
1 row created.
scott@ORA10G> commit;
Commit complete.
scott@ORA10G>
scott@ORA10G> select * from t1;
X Y Z
---- ---- ----
1 2 3
2 2 10
2 2 9
3 5 5
scott@ORA10G>
scott@ORA10G> column x format 999
scott@ORA10G> column y format 999
scott@ORA10G> column z format 999
scott@ORA10G> column max_x format 999.99
scott@ORA10G> column max_y format 999.99
scott@ORA10G> column max_z format 999.99
scott@ORA10G> column max format 999.99
scott@ORA10G> column details format 999.99
scott@ORA10G>
scott@ORA10G> select x, y, z
2 from t1;
X Y Z
---- ---- ----
1 2 3
2 2 10
2 2 9
3 5 5
scott@ORA10G>
scott@ORA10G> /* a function that calculates something based on the
scott@ORA10G> the max and min of all x,y,z each.
scott@ORA10G>
scott@ORA10G> */
scott@ORA10G> create or replace function f( max_x_y_z number, min_x_y_z number )
2 return varchar2
3 is
4 begin
5 -- some computation
6 return 'TESTING';
7 end;
8 /
Function created.
scott@ORA10G> show errors;
No errors.
scott@ORA10G> column f_result format a10
scott@ORA10G> /* we need the result of x, y, z, rows alongwith the
scott@ORA10G> result of the above function f()
scott@ORA10G> */
scott@ORA10G>
scott@ORA10G> select x, y, z, f( max_all, min_all ) f_result
2 from
3 (
4 select x, y, z, greatest( max_x, max_y, max_z) max_all,
5 least ( min_x, min_y, min_z ) min_all
6 from
7 (
8 select x, y, z,
9 max(x) over() max_x,
10 max(y) over() max_y,
11 max(z) over() max_z,
12 min(x) over() min_x,
13 min(y) over() min_y,
14 min(z) over() min_z
15 from t1
16 )
17 );
X Y Z F_RESULT
---- ---- ---- ----------
1 2 3 TESTING
2 2 10 TESTING
2 2 9 TESTING
3 5 5 TESTING
scott@ORA10G> spool off
June 24, 2004 - 9:12 am UTC
we can certainly simplify it a bit and, given that f(a,b) is constant for all rows in the query, optimize it a bit as well:
ops$tkyte@ORA9IR2> create or replace function f(a in number, b in number) return varchar2
2 as
3 begin
4 dbms_application_info.set_client_info( userenv('client_info')+1 );
5 return 'testing';
6 end;
7 /
Function created.
that'll let us count how often the function is called:
ops$tkyte@ORA9IR2> column f format a10
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_application_info.set_client_info( 0 )
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select x, y, z, f( min_xyz, max_xyz ) f
2 from (
3 select x, y, z,
4 min(least(x,y,z)) over () min_xyz,
5 max(greatest(x,y,z)) over () max_xyz
6 from t1
7 )
8 /
X Y Z F
---------- ---------- ---------- ----------
1 2 3 testing
2 2 10 testing
2 2 9 testing
3 5 5 testing
ops$tkyte@ORA9IR2> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
4
<b>that shows the simplification, but also shows that the function was called 4 times. We can use a scalar subquery here to reduce that to one in this case (since the inputs to the function are invariant)</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_application_info.set_client_info( 0 )
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select x, y, z, (select f( min_xyz, max_xyz ) from dual) f
2 from (
3 select x, y, z,
4 min(least(x,y,z)) over () min_xyz,
5 max(greatest(x,y,z)) over () max_xyz
6 from t1
7 )
8 /
X Y Z F
---------- ---------- ---------- ----------
1 2 3 testing
2 2 10 testing
2 2 9 testing
3 5 5 testing
ops$tkyte@ORA9IR2> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
1
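The call-counting experiment above can be reproduced with a user-defined function in Python's sqlite3. This is a sketch, not Oracle semantics, but SQLite similarly evaluates an uncorrelated scalar subquery only once per statement (while a plain function call fires per row), so the counter drops the same way. The function f and the t1 data mirror the example above.

```python
import sqlite3

calls = {"n": 0}

def f(a, b):
    calls["n"] += 1          # count how often the function fires
    return "testing"

conn = sqlite3.connect(":memory:")
conn.create_function("f", 2, f)
conn.executescript("""
create table t1 (x int, y int, z int);
insert into t1 values (1,2,3),(2,2,10),(2,2,9),(3,5,5);
""")

calls["n"] = 0
conn.execute("select x, f(1, 10) from t1").fetchall()           # direct call: once per row
per_row = calls["n"]

calls["n"] = 0
conn.execute("select x, (select f(1, 10)) from t1").fetchall()  # uncorrelated scalar subquery
per_stmt = calls["n"]

print(per_row, per_stmt)
```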
awesome!!
A reader, June 24, 2004 - 10:45 am UTC
Thank you!! I liked the simplification a lot and
also the scalar subquery idea!!
one question
A reader, June 24, 2004 - 10:50 am UTC
Hi Tom
Could you kindly explain why the scalar subquery trick reduces the function calls to 1 from 4?
June 24, 2004 - 2:27 pm UTC
thank you!!!
A reader, June 24, 2004 - 2:54 pm UTC
A reader, June 30, 2004 - 6:20 am UTC
Tom,
Thanks, very much useful to me.
I have one doubt: how do you limit the result that you get from analytics to the first record, or to 10 records, as we do with rownum < 2 or < 10?
i tried this ..
SQL> SELECT month_num,year_num,
2 MIN(month_num) OVER(PARTITION BY year_num ORDER BY month_num) as p_cmin
3 --RANGE UNBOUNDED PRECEDING) as p_cmin
4 FROM tb_etr003_geci_etmon_financial a
5 where a.type_nam='Project Space'
6 AND object_nam = 'Development of DEQ coil manuf'
7 AND revision_nam='2910885710716221'
8 AND relobj_type_nam='Benefit Details';
MONTH_NUM YEAR_NUM P_CMIN
---------- ---------- ----------
10 2004 10
10 2004 10
10 2004 10
10 2004 10
11 2004 10
11 2004 10
11 2004 10
11 2004 10
12 2004 10
12 2004 10
12 2004 10
MONTH_NUM YEAR_NUM P_CMIN
---------- ---------- ----------
12 2004 10
1 2005 1
1 2005 1
1 2005 1
1 2005 1
I have multiple values for 10 2004 (the first 4 records). How do I get only the first one, and how do I get the first 4 (here, since they have the same value)?
What does RANGE UNBOUNDED PRECEDING mean? Where can I see this explained, or can you explain it in a few quick words?
thanks
June 30, 2004 - 10:17 am UTC
select * from (
SELECT month_num,year_num,
row_number() over (partition by year_num order by month_num) RN
FROM tb_etr003_geci_etmon_financial a
where a.type_nam='Project Space'
AND object_nam = 'Development of DEQ coil manuf'
AND revision_nam='2910885710716221'
AND relobj_type_nam='Benefit Details'
)
where rn = 1;
you can either read the chapter on analytics in my book "Expert one on one Oracle" if you have it or check out the data warehouse guide (freely available on otn.oracle.com) to find out what analytics are all about.
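The row_number()-in-an-inline-view pattern above is the standard "first row per group" idiom, and it runs unchanged on any engine with window functions. A small sketch in Python's sqlite3 (SQLite 3.25+), using a toy table with just the month/year values from the question in place of the real financial table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table points (month_num int, year_num int);
insert into points values (10,2004),(10,2004),(11,2004),(12,2004),(1,2005),(1,2005);
""")

rows = conn.execute("""
select month_num, year_num
from (
  select month_num, year_num,
         row_number() over (partition by year_num order by month_num) rn
  from points
)
where rn = 1          -- keep just the first row of each year
order by year_num
""").fetchall()
print(rows)
```

Changing the filter to `where rn <= 4` would return the first four rows per partition instead.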
simple question
A reader, June 30, 2004 - 1:40 pm UTC
Hi Tom
Please consider the following schema:
---
scott@ORA92I> drop table t;
Table dropped.
scott@ORA92I> create table t
2 ( x number,
3 y date
4 );
Table created.
scott@ORA92I>
scott@ORA92I> insert into t
2 select rownum x, sysdate
3 from all_objects
4 where rownum <= 10;
10 rows created.
scott@ORA92I>
scott@ORA92I> -- want to set the value of y to descending order of
scott@ORA92I> -- date based on the order of x values - e.g. the largest
scott@ORA92I> -- value of x should correspond to today, the second
scott@ORA92I> -- largest to yesterday and so on.
scott@ORA92I> update ( select x, y, rownum from t order by x )
2 set y = sysdate - rownum;
update ( select x, y, rownum from t order by x )
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
----
How can I update the table t so that the largest value of
x has y as sysdate, the second largest has y as sysdate-1,
and so on? This would be done through rownum, but I am not
sure how.
Thanx for a great site!!
June 30, 2004 - 2:03 pm UTC
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t
2 ( x number PRIMARY KEY,
3 y date
4 );
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t
2 select rownum x, sysdate
3 from all_objects
4 where rownum <= 10;
10 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> merge into t a
2 using ( select x, rownum-1 r
3 from ( select *
4 from t
5 order by x) ) b
6 on (a.x = b.x)
7 when matched then update set y = sysdate-r
8 when not matched then insert (x) values(null)
9 /
10 rows merged.
the when not matched will never happen of course.
thank you!!
A reader, June 30, 2004 - 2:28 pm UTC
sql query question
A reader, July 02, 2004 - 1:13 pm UTC
Consider the following schema..
----
scott@ORA92I> set head on
scott@ORA92I> set echo on
scott@ORA92I> drop table t;
Table dropped.
scott@ORA92I> create table t
2 (
3 x number,
4 y number,
5 dt date
6 );
Table created.
scott@ORA92I>
scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/01/2004', 'mm/dd/yyyy') );
1 row created.
scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/02/2004', 'mm/dd/yyyy') );
1 row created.
scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/03/2004', 'mm/dd/yyyy') );
1 row created.
scott@ORA92I> insert into t ( x, y, dt )
2 values ( 100, 200, to_date( '06/05/2004', 'mm/dd/yyyy') );
1 row created.
scott@ORA92I>
scott@ORA92I> commit;
Commit complete.
scott@ORA92I>
scott@ORA92I> column x format 99999
scott@ORA92I> column y format 99999
scott@ORA92I> alter session set nls_date_format='mm/dd/yyyy';
Session altered.
scott@ORA92I>
scott@ORA92I> select * from t;
X Y DT
------ ------ ----------
100 200 06/01/2004
100 200 06/02/2004
100 200 06/03/2004
100 200 06/05/2004
I want to get an output of
X Y DT
------ ------ ----------
100 200 06/01/2004
100 200 06/02/2004
100 200 06/03/2004
0 0 06/04/2004 <--- extra row for the missing day
100 200 06/05/2004
Is it possible?
Thanx!
July 02, 2004 - 2:34 pm UTC
you need to generate a set of those dates:
(
select to_date( '....starting date....' ) + rownum-1 DT
from all_objects
where rownum <= NUMBER_OF_DAYS_YOU_ARE_INTERESTED_IN
) dates
and outer join your query to that
where dates.dt = t.dt(+)
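A runnable sketch of that generate-and-outer-join approach, using a recursive CTE as the row generator in place of the all_objects trick (Python's sqlite3, with ISO date strings standing in for Oracle DATEs; the t data is from the question above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table t (x int, y int, dt text);
insert into t values
  (100,200,'2004-06-01'),(100,200,'2004-06-02'),
  (100,200,'2004-06-03'),(100,200,'2004-06-05');
""")

rows = conn.execute("""
with recursive dates(dt) as (               -- the row generator: one row per day
  select '2004-06-01'
  union all
  select date(dt, '+1 day') from dates where dt < '2004-06-05'
)
select coalesce(t.x, 0), coalesce(t.y, 0), dates.dt
from dates left join t on t.dt = dates.dt   -- outer join fills the missing day
order by dates.dt
""").fetchall()
for r in rows:
    print(r)
```

The generated "dates" set guarantees one output row per day; coalesce (Oracle's nvl) turns the unmatched day's NULLs into the 0,0 row the question asked for.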
thanx!
A reader, July 02, 2004 - 3:41 pm UTC
sql query
A reader, July 26, 2004 - 4:14 pm UTC
Hi tom
Following is the schema:
scott@ORA10G>
scott@ORA10G> drop table t1;
Table dropped.
scott@ORA10G> create table t1
2 (
3 name varchar2(30),
4 type varchar2(30),
5 value number
6 );
Table created.
scott@ORA10G>
scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type1', 1 );
1 row created.
scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type1', 2 );
1 row created.
scott@ORA10G> insert into t1( name, type, value )
2 values( 'name2', 'type1', 3 );
1 row created.
scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type1', 5 );
1 row created.
scott@ORA10G> insert into t1( name, type, value )
2 values( 'name1', 'type2', 5 );
1 row created.
scott@ORA10G> commit;
Commit complete.
scott@ORA10G>
scott@ORA10G> column name format a15
scott@ORA10G> column type format a10
scott@ORA10G> column value format 999
scott@ORA10G> column shared_count format 999
scott@ORA10G> select name, type, value
2 from t1;
NAME TYPE VALUE
--------------- ---------- -----
name1 type1 1
name1 type1 2
name2 type1 3
name1 type1 5
name1 type2 5
----------------------------------
Now the required result is as shown below..
I need to find the count of cases where the columns
name and type have the same values, and then display
it alongside the above results as:
scott@ORA10G> NAME TYPE VALUE SHARED_COUNT
scott@ORA10G> --------------- ---------- -----
scott@ORA10G> name1 type1 1 3
scott@ORA10G> name1 type1 2 3
scott@ORA10G> name2 type1 3 1
scott@ORA10G> name1 type1 5 3
scott@ORA10G> name1 type2 5 1
------------
My answer to this is a little convoluted - I thought
you may be able to give a better answer...
Let me know..
Following is the way I did it..
-----
scott@ORA10G> select name, type, value,
2 max( shared_count ) over( partition by name, type order by name, type)
3 shared_count
4 from
5 (
6 select name, type, value,
7 row_number()
8 over( partition by name, type order by name, type) shared_count
9 from t1
10 order by name, type
11 );
NAME TYPE VALUE SHARED_COUNT
--------------- ---------- ----- ------------
name1 type1 1 3
name1 type1 2 3
name1 type1 5 3
name1 type2 5 1
name2 type1 3 1
------------
Thank you!
--
July 26, 2004 - 5:21 pm UTC
ops$tkyte@ORA9IR2> l
1 select name, type, value, count(*) over (partition by name, type) cnt
2 from t1
3* order by name, type
ops$tkyte@ORA9IR2> /
NAME TYPE VALUE CNT
------ ------ ---------- ----------
name1 type1 1 3
name1 type1 2 3
name1 type1 5 3
name1 type2 5 1
name2 type1 3 1
thanx!
A reader, July 26, 2004 - 9:40 pm UTC
SQL Query
A reader, July 27, 2004 - 1:50 am UTC
Hi Tom,
In order to get the nth maximum salary, which approach is good in terms of performance and efficiency?
1. SELECT MIN(sal)
FROM ( SELECT sal
FROM ( SELECT DISTINCT sal
FROM emp
ORDER BY sal
DESC )
WHERE ROWNUM <= &nthmaxsalary );
2. SELECT MAX(sal)
FROM emp
WHERE LEVEL = &nthmaxsalary
CONNECT BY PRIOR sal > sal;
3. SELECT DISTINCT t.sal
FROM ( SELECT sal, DENSE_RANK()
OVER (ORDER BY sal DESC) salrank
FROM emp)t
WHERE t.salrank = &nthmaxsalary ;
Thxs in advance
July 27, 2004 - 7:16 am UTC
#3 would be my first choice.
another sql query
A reader, July 27, 2004 - 1:02 pm UTC
hi tom
slight change in my requirements:
----
schema:
scott@ORA10G> drop table t1;
Table dropped.
scott@ORA10G> create table t1
2 (
3 name varchar2(30),
4 type varchar2(30),
5 id number,
6 value number
7 );
Table created.
scott@ORA10G>
scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name1', 'type1', 1, 1 );
1 row created.
scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name1', 'type1', 2, 2 );
1 row created.
scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name2', 'type1', 3, 3 );
1 row created.
scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name1', 'type2', 4, 5 );
1 row created.
scott@ORA10G> insert into t1( name, type, id, value )
2 values( 'name2', 'type1', 1, 5 );
1 row created.
scott@ORA10G> commit;
Commit complete.
scott@ORA10G>
scott@ORA10G> column name format a15
scott@ORA10G> column type format a10
scott@ORA10G> column value format 999
scott@ORA10G> column id format 999
scott@ORA10G> column shared_count format 999
scott@ORA10G> set head on
scott@ORA10G> select name, type, id, value
2 from t1;
NAME TYPE ID VALUE
--------------- ---------- ---- -----
name1 type1 1 1
name1 type1 2 2
name2 type1 3 3
name1 type2 4 5
name2 type1 1 5
----
The required results:
---
scott@ORA10G> NAME TYPE ID VALUE SHARED_COUNT
scott@ORA10G> --------------- ---------- ---- ----- -----------
scott@ORA10G> name1 type1 1 1 1
scott@ORA10G> name1 type1 2 2 2
scott@ORA10G> name2 type1 3 3 1
scott@ORA10G> name1 type2 4 5 1
scott@ORA10G> name2 type1 1 5 2
scott@ORA10G>
scott@ORA10G>
---
Basically
Count rows such that different names of the same type have the same ID. For example, in the above case, count for row # 2 and 5 is 2 because for the same type type1, there are two different names - name1 and name2 that share the same ID of 1.
Thank you!
ok - i think i got it..
A reader, July 27, 2004 - 1:05 pm UTC
scott@ORA10G> select name, type, id, value,
2 count(*) over( partition by type, id order by type, id ) shared_count
3 from t1;
NAME TYPE ID VALUE SHARED_COUNT
--------------- ---------- ---- ----- ------------
name1 type1 1 1 2
name2 type1 1 5 2
name1 type1 2 2 1
name2 type1 3 3 1
name1 type2 4 5 1
----
lemme know if this is what you would have suggested..
thanx!
oops - spoke too soon!
A reader, July 27, 2004 - 1:09 pm UTC
The above solution does not take care of
the case when you have another row
that has name1, type1 and the same global id :(
July 27, 2004 - 2:08 pm UTC
a verbatim reading of:
Count rows such that different names of the same type have the same ID.
results in this query:
ops$tkyte@ORA9IR2> select name, type, id, value, count(distinct name) over (partition by type, id) cnt
2 from t1;
NAME TYPE ID VALUE CNT
------------------------------ -------- ---------- ---------- ----------
name1 type1 1 1 2
name2 type1 1 5 2
name1 type1 2 2 1
name2 type1 3 3 1
name1 type2 4 5 1
thanx !!!
A reader, July 27, 2004 - 2:40 pm UTC
that works perfectly!:)
Have a great day!:)
Thiru, August 05, 2004 - 11:46 am UTC
stuck with this query:
create table tt1( a number,b varchar2(10));
create table tt2( a number,c varchar2(10));
insert into tt1 values(100,'abc');
insert into tt1 values(200,'abc');
insert into tt1 values(200,'abc');
insert into tt1 values(200,'bcd');
insert into tt2 values(100,'abc');
insert into tt2 values(100,'bcd');
output reqd: ( combining the values from both tables)
t1.a t1.b
-----------
600 abc
300 bcd
select tt1.b,sum(tt1.a+tt2.a)
from tt1,tt2
where tt1.b=tt2.c
group by tt1.b;
gives
B SUM(TT1.A+TT2.A)
---------- ----------------
abc 800
bcd
August 05, 2004 - 1:12 pm UTC
ops$tkyte@ORA9IR2> select sum(a), x
2 from (select sum(a) a, b x from tt1 group by b
3 union all
4 select sum(a) a, c x from tt2 group by c)
5 group by x
6 /
SUM(A) X
---------- ----------
600 abc
300 bcd
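The aggregate-each-table, UNION ALL, then re-aggregate pattern above is portable as-is. A sketch replaying it in Python's sqlite3, with the tt1/tt2 data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table tt1 (a int, b text);
create table tt2 (a int, c text);
insert into tt1 values (100,'abc'),(200,'abc'),(200,'abc'),(200,'bcd');
insert into tt2 values (100,'abc'),(100,'bcd');
""")

rows = conn.execute("""
select sum(a), x
from (select sum(a) a, b x from tt1 group by b   -- aggregate each table first
      union all
      select sum(a) a, c x from tt2 group by c)
group by x                                       -- then aggregate the union
order by x
""").fetchall()
print(rows)
```

Aggregating before the union is what avoids the row-multiplication that broke the original join-based attempt.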
A reader, August 05, 2004 - 12:55 pm UTC
stuck with this query:
create table tt1(id number,sdate date,edate date,amt number,flag char(1));
create table tt2(id number,sdate date,edate date,amt number);
insert into tt1 values(1,'01-AUG-04','31-AUG-04',100,NULL);
insert into tt1 values(2,'01-AUG-04','31-AUG-04',100,NULL);
insert into tt1 values(3,'02-AUG-04','31-AUG-04',100,'Y');
insert into tt1 values(4,'02-AUG-04','31-AUG-04',100,'Y');
insert into tt1 values(5,'03-AUG-04','31-AUG-04',100,NULL);
insert into tt2 values(3,'02-AUG-04','31-AUG-04',200);
insert into tt2 values(4,'02-AUG-04','31-AUG-04',200);
SHOULD BE GROUPED BY TT1.SDATE,TT1.EDATE AND
(IF TT1.FLAG='Y' THEN TAKE THE CORRESPONDING AMT FROM TT2 WHERE TT1.ID=TT2.ID AND TT1.SDATE=TT2.SDATE
AND TT1.EDATE = TT2.EDATE)
THE OUTPUT TO BE:
TT1.SDATE TT1.EDATE AMT
01-AUG-04 31-AUG-04 200
02-AUG-04 31-AUG-04 400
03-AUG-04 31-AUG-04 100
August 05, 2004 - 1:16 pm UTC
see above and see if you can adapt it to your needs..
Answer using Scalar Subquery
Logan Palanisamy, August 06, 2004 - 8:01 pm UTC
1 select sdate, edate,
2 sum(decode(flag, null, amt, (select amt from tt2 where tt1.id = tt2.id))) amt
3 from tt1
4* group by sdate, edate
SQL> /
SDATE EDATE AMT
--------- --------- ----------
01-AUG-04 31-AUG-04 200
02-AUG-04 31-AUG-04 400
03-AUG-04 31-AUG-04 100
q on a scenario
A reader, August 09, 2004 - 6:09 pm UTC
hi tom
i have scenario where I am generating report based
on a period. The numbers involved are rolled up - so
we have a table where we store hourly data points,
another table where we store daily data points etc.
I am writing a query where, if the data is in the daily
points table, I pick data from there -- otherwise, I pick
it from the hourly data points table. I am writing
it as two queries. The first selects data from the
daily points table. If there are no records in this,
I select data from the hourly points table. Is this the
right approach, or would you have done it differently?
Thank you!
August 09, 2004 - 8:49 pm UTC
if you used a materialized view to roll the hourly up to daily -- you could write all of your queries against the details -- and when appropriate, the database would rewrite them against the aggregated daily data.......
thanx tom!
A reader, August 10, 2004 - 11:54 am UTC
unfortunately we are not in a position to use MVs,
as the infrastructure is already in place... Given that
is the reality, how would you approach this issue?
btw, how would the MVs know whether to go to the daily
or the hourly data -- is there a mechanism that
helps you with this in MVs?
Thanx!
August 10, 2004 - 3:42 pm UTC
in reality (this will shock you) i would use mv's -- as the in place infrastructure could just be "erased"
Yes, MV's are built to do this sort of stuff, check out the data warehousing guide. MV's are like the "indexes of your data warehouse". they are used to pre-aggregate, etl, pre-join, whatever data -- so you don't have to do that over and over in response to every single query.
thanx!
A reader, August 10, 2004 - 4:20 pm UTC
one reason I believe we don't have MVs is that
we need a history of the rollup points...
August 10, 2004 - 4:27 pm UTC
so? if you have a history of hourly points, you have a history of daily points in your mv's
I see...
A reader, August 10, 2004 - 5:14 pm UTC
good point :-) - so the way you would have done it is:
store history of points at the lowest granular level
(hourly)
1. The daily, weekly, monthly, quarterly, yearly
etc. would come from materialized views built on
top of the hourly table, correct?
2. I suppose stats like min, max, avg etc would
also come from MVs?
3. Assuming I have an MV based solution,
can you briefly explain how MVs can automatically
go to hourly if data does not exist for daily? Is
this one of the functionalities of MV?
thanx!
August 10, 2004 - 7:31 pm UTC
1) actually, daily on hourly, weekly on daily, monthly on weekly, and so on (you can have mv's of mv's in 9i)
2) absolutely
3) absolutely - you want to read the data warehousing guide (one chapter -- really, the one on MV's). much will be gleaned from that.
OK
James, August 11, 2004 - 10:31 am UTC
Dear Tom,
Is there any other way to put this query?
sql> select * from t where x is not null and y is not null;
Please do reply.
August 11, 2004 - 1:12 pm UTC
sure, where y is not null and x is not null
:)
nvl
decode
case
come to mind immediately
thnax for answering my mv related questions above!:)
A reader, August 11, 2004 - 1:23 pm UTC
full outer join
thiru, August 11, 2004 - 2:17 pm UTC
Full outer join: Is this the right way? Looks like
I am missing something here. Please have a look.
create table p1 ( sdate date,ccy char(3),amt number);
create table p2 as select * from p1;
create table p3 (sdate date,ccy char(3),amt_p1 number,amt_p2 number);
insert into p1 values('16-AUG-04','BCD',100);
insert into p1 values('16-AUG-04','CDE',200);
insert into p1 values('16-AUG-04','ABC',300);
insert into p1 values('16-AUG-04','XYZ',300);
insert into p2 values('16-AUG-04','EFG',100);
insert into p2 values('16-AUG-04','CDE',200);
insert into p2 values('16-AUG-04','ABC',300);
insert into p2 values('16-AUG-04','BCD',100);
table p3 to be combined from p1 and p2: (output required)
sdate ccy amt_p1 amt_p2
16-AUG-04 ABC 300 300
16-AUG-04 BCD 100 100
16-AUG-04 CDE 200 200
16-AUG-04 EFG 0 100
16-AUG-04 XYZ 300 0
I used this query but the sdate and ccy are missing for one row ie., 'EFG' for p2.
SELECT P1.SDATE,P1.CCY,nvl(P1.AMT,0),nvl(P2.AMT,0)
FROM P2 FULL OUTER JOIN P1
ON P1.SDATE = P2.SDATE
AND P1.CCY = P2.CCY;
SDATE CCY AMT AMT
--------- --- ---------- ----------
16-AUG-04 ABC 300 300
16-AUG-04 BCD 100 100
16-AUG-04 CDE 200 200
0 100
16-AUG-04 XYZ 300 0
August 12, 2004 - 7:32 am UTC
well, if p1 is "made up" - its values are going to be null.
so, perhaps you want
nvl(p1.sdate,p2.sdate), nvl(p1.ccy,p2.ccy), ....
in the select list.
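Putting the fix together: coalesce the join keys from whichever side is present. A runnable sketch (Python's sqlite3 for illustration; since older SQLite lacks FULL OUTER JOIN, it is emulated here with a left join UNION ALL'ed with the unmatched right-side rows - the COALESCE on sdate/ccy plays the role of Oracle's NVL(p1.sdate, p2.sdate)):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE p1 (sdate TEXT, ccy TEXT, amt INTEGER);
    CREATE TABLE p2 (sdate TEXT, ccy TEXT, amt INTEGER);
    INSERT INTO p1 VALUES ('16-AUG-04','BCD',100), ('16-AUG-04','CDE',200),
                          ('16-AUG-04','ABC',300), ('16-AUG-04','XYZ',300);
    INSERT INTO p2 VALUES ('16-AUG-04','EFG',100), ('16-AUG-04','CDE',200),
                          ('16-AUG-04','ABC',300), ('16-AUG-04','BCD',100);
""")

# Full outer join emulated as: left join, plus the p2 rows with no p1 match.
rows = con.execute("""
    SELECT COALESCE(p1.sdate, p2.sdate) AS sdate,
           COALESCE(p1.ccy,   p2.ccy)   AS ccy,
           COALESCE(p1.amt, 0) AS amt_p1,
           COALESCE(p2.amt, 0) AS amt_p2
    FROM p1 LEFT JOIN p2 ON p1.sdate = p2.sdate AND p1.ccy = p2.ccy
    UNION ALL
    SELECT p2.sdate, p2.ccy, 0, p2.amt
    FROM p2 WHERE NOT EXISTS (SELECT 1 FROM p1
                              WHERE p1.sdate = p2.sdate AND p1.ccy = p2.ccy)
    ORDER BY ccy
""").fetchall()
for r in rows:
    print(r)
```

The key point is the same either way: on the side that was "made up" by the outer join, every column is NULL, including the join keys, so the keys must be coalesced in the select list too.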
weeks do not always roll-up
Gabe, August 11, 2004 - 2:20 pm UTC
<quote>actuallhy, daily on hourly, weekly on daily, monthly on weekly, and so on</quote>
monthly on weekly? ... hmmm ... weeks do not always fit into months, or years for that matter. The concept of the "week" is quite murky ... use 'IW' or 'WW' format date masks and one gets different definitions ... with 'W' and 'WW' weeks don't always have 7 days ... it is true though that 'W'-weeks roll-up into months (and hence years) and 'WW'-weeks roll-up into years (but not to months).
The hierarchies are:
Hour >-- Day >-- Month >-- Quarter >-- Year
Hour >-- Day >-- Week ('IW' ISO)
Hour >-- Day >-- Week ('W') >-- Month >-- Quarter >-- Year
Hour >-- Day >-- Week ('WW') >-- Year
August 12, 2004 - 7:33 am UTC
ok ok, daily into monthly ;)
it was the fact that a mv can roll into an mv can roll into an mv i was trying to get across. you can aggregate the aggregates as needed.
"Full outerjoin" resolved for thiru
Logan Palanisamy, August 11, 2004 - 6:51 pm UTC
Thiru,
You need to NVL the first two columns also.
1 SELECT nvl(P1.SDATE, p2.sdate), nvl(P1.CCY, p2.ccy), nvl(P1.AMT,0),nvl(P2.AMT,0)
2 FROM P2 FULL OUTER JOIN P1
3 ON P1.SDATE = P2.SDATE
4* AND P1.CCY = P2.CCY
SQL> /
NVL(P1.SD NVL NVL(P1.AMT,0) NVL(P2.AMT,0)
--------- --- ------------- -------------
16-AUG-04 BCD 100 100
16-AUG-04 CDE 200 200
16-AUG-04 ABC 300 300
16-AUG-04 EFG 0 100
16-AUG-04 XYZ 300 0
thiru, August 11, 2004 - 10:47 pm UTC
Thanks a lot.
great thread!
A reader, August 12, 2004 - 11:32 am UTC
"it was the fact that a mv can roll into an mv can roll into an mv i was trying
to get across. you can aggregate the aggregates as needed. "
In our case the hourly data is kept for 2 weeks, the
daily data for a month and the monthly for a year.
Does this have a bearing on an MV based solution
since a query will depend on the real data underlying
in the tables. In other words, how can we have mvs on
mvs with different "purging" periods?
August 12, 2004 - 11:40 am UTC
oh, well, that'll not work -- you are back into DIY land and you have to train your users to query the right data at the right time. MV's cannot have different "purge periods"
ok - thanx!
A reader, August 12, 2004 - 11:52 am UTC
I guess in cases where you have to do this (due to sheer volume of data) mvs are not possible. So in such a case
if we have, say hourly data to yearly data and if we want
an mv based solution, we will have to keep the hourly
data for a year for it to work, correct?
thank you!!
August 12, 2004 - 12:29 pm UTC
correct, you have the details, and summations of the details
thanx!
A reader, August 12, 2004 - 12:31 pm UTC
query
A reader, August 13, 2004 - 2:48 pm UTC
First the schema:
scott@ORA92I> drop table t1;
Table dropped.
scott@ORA92I> create table t1
2 (
3 x varchar2(10),
4 y number
5 );
Table created.
scott@ORA92I>
scott@ORA92I> insert into t1( x, y ) values ( 'A', 2 );
1 row created.
scott@ORA92I> insert into t1( x, y ) values ( 'A', 3 );
1 row created.
scott@ORA92I> insert into t1( x, y ) values ( 'B', 3 );
1 row created.
scott@ORA92I> --insert into t1( x, y ) values ( 0, 0 );
scott@ORA92I> --insert into t1( x, y ) values ( 0, 0 );
scott@ORA92I>
scott@ORA92I> select * from t1;
X Y
---------- ----------
A 2
A 3
B 3
------------------------------------
requirement is to get a flag indicating if
at least one record exists for x = 'A'
and so on. Possible values of x are
'A', 'B', 'C'; for the above data we
would get
A B C
- - -
Y Y N
my solution is:
scott@ORA92I> select max(decode( x, 'A', 'Y', 'N')) A_count,
2 max(decode( x, 'B', 'Y', 'N')) B_count,
3 max(decode( x, 'C', 'Y', 'N')) C_count
4 from t1;
A B C
- - -
Y Y N
the problem is I don't want to count beyond the first
record... Do you have a better solution?
thanx!
August 13, 2004 - 6:16 pm UTC
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select nvl((select 'Y' from dual where exists(select null from t1 where x='A')),'N') A,
2 nvl((select 'Y' from dual where exists(select null from t1 where x='B')),'N') B,
3 nvl((select 'Y' from dual where exists(select null from t1 where x='C')),'N') C
4 from dual;
A B C
- - -
Y Y N
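The point of the EXISTS form is that the database can stop at the first matching row instead of scanning them all - exactly the "don't count beyond the first record" requirement. A portable sketch (SQLite via Python for illustration; CASE WHEN EXISTS replaces the NVL-over-DUAL idiom):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (x TEXT, y INTEGER);
    INSERT INTO t1 VALUES ('A', 2), ('A', 3), ('B', 3);
""")

# One row with a Y/N flag per value of interest; each EXISTS can stop
# as soon as a single matching row is found.
flags = con.execute("""
    SELECT CASE WHEN EXISTS (SELECT 1 FROM t1 WHERE x = 'A') THEN 'Y' ELSE 'N' END AS a,
           CASE WHEN EXISTS (SELECT 1 FROM t1 WHERE x = 'B') THEN 'Y' ELSE 'N' END AS b,
           CASE WHEN EXISTS (SELECT 1 FROM t1 WHERE x = 'C') THEN 'Y' ELSE 'N' END AS c
""").fetchone()
print(flags)  # ('Y', 'Y', 'N')
```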
Thank you!
A reader, August 13, 2004 - 6:38 pm UTC
q on a query
A reader, August 30, 2004 - 7:15 pm UTC
First the schema and data
----
scott@ORA92I> set echo on
scott@ORA92I> set head on
scott@ORA92I> alter session set nls_date_format = 'YYYY-MM-DD HH:MI:SS';
Session altered.
scott@ORA92I> --schema
scott@ORA92I>
scott@ORA92I> drop table curr_t1;
Table dropped.
scott@ORA92I> create table curr_t1
2 (
3 a varchar2(10),
4 value number,
5 time_of_entry date
6 );
Table created.
scott@ORA92I>
scott@ORA92I> drop table daily_rollup_t1;
Table dropped.
scott@ORA92I> create table daily_rollup_t1
2 (
3 a varchar2(10),
4 maximum number,
5 minimum number,
6 average number,
7 sample_count number,
8 time_rolled_up date
9 );
Table created.
scott@ORA92I>
scott@ORA92I> insert into curr_t1( a, value, time_of_entry )
2 values( 'a', 1, sysdate );
1 row created.
scott@ORA92I> insert into curr_t1( a, value, time_of_entry )
2 values( 'b', 7, sysdate );
1 row created.
scott@ORA92I>
scott@ORA92I> commit;
Commit complete.
scott@ORA92I> select * from curr_t1;
A VALUE TIME_OF_ENTRY
---------- ---------- -------------------
a 1 2004-08-30 03:50:09
b 7 2004-08-30 03:50:09
scott@ORA92I>
scott@ORA92I> insert into daily_rollup_t1( a, maximum, minimum, average, sample_count,
2 time_rolled_up )
3 values( 'a', 10, 3, 5, 2, sysdate-1 );
1 row created.
scott@ORA92I> commit;
Commit complete.
scott@ORA92I>
scott@ORA92I> select *
2 from daily_rollup_t1;
A MAXIMUM MINIMUM AVERAGE SAMPLE_COUNT TIME_ROLLED_UP
---------- ---------- ---------- ---------- ------------ -------------------
a 10 3 5 2 2004-08-29 03:50:09
----------------
First table curr_t1 contains the current values corresponding to column a (one row per unique value of a). Second table daily_rollup_t1 contains the rolled-up values of column a.
-- Now I want to get the average max, min of all daily
-- rollup values including the value in curr_t1. The degenerate case would be when there are no records in the daily_rollup_t1 in which case I will get the value in curr_t1 as average, max, min etc. There will be only one value of "a" column and the latest
date in the table curr_t1.
In the above data, my output should be:
A AVERAGE MAXIMUM MINIMUM LATEST_VALUE
----------- ------- ------- ------- ------------
a 3.67 10 1 1
b 7 7 7 7
My attempt is given below - do you have any other better
alternatives in mind?
scott@ORA92I> select a,
2 sum_value/sum_sample_count as average,
3 maximum,
4 minimum,
5 last_known_value
6 from
7 (
8 select a,
9 sum( value ) sum_value,
10 sum( sample_count ) sum_sample_count,
11 max( maximum ) maximum,
12 min( minimum ) minimum,
13 max( last_known_value ) as last_known_value
14 from
15 (
16 select a, value as maximum,
17 value as minimum,
18 value as last_known_value,
19 1 as sample_count,
20 value as value
21 from curr_t1
22 union
23 select a, max( maximum), min(minimum),
24 max(0) as last_known_value,
25 sum(sample_count),
26 sum(average*sample_count) as value
27 from daily_rollup_t1
28 group by a
29 )
30 group by a
31 );
A AVERAGE MAXIMUM MINIMUM LAST_KNOWN_VALUE
---------- ---------- ---------- ---------- ----------------
a 3.66666667 10 1 1
b 7 7 7 7
August 30, 2004 - 7:35 pm UTC
using column "a" and values "a" and "b" in column "a" on a really long followup -- that we have to page up and down to read -- is really confusing :)
i don't see why you need to query anything other than the daily rollup? why not just "refresh" the daily rollup and query it directly?
sorry Tom!
A reader, August 31, 2004 - 11:52 am UTC
I am posting the same schema with more meaningful
schema names. If you did not intend me to post
this again, please accept my apologies and ignore this
query.
Also, the reason we need to query the current value
is that the daily rollup happens once a day and this value is useful until the rollup has happened.
We don't use MVs because we need history and can't afford
(space-wise) to store all the data at the lowest level.
----
scott@ORA92I> set head on
scott@ORA92I> alter session set nls_date_format = 'YYYY-MM-DD HH:MI:SS';
Session altered.
scott@ORA92I> --schema
scott@ORA92I>
scott@ORA92I> drop table curr_t1;
Table dropped.
scott@ORA92I> create table curr_t1
2 (
3 info_type varchar2(10),
4 value number,
5 time_of_entry date
6 );
Table created.
scott@ORA92I>
scott@ORA92I> drop table daily_rollup_t1;
Table dropped.
scott@ORA92I> create table daily_rollup_t1
2 (
3 info_type varchar2(10),
4 maximum number,
5 minimum number,
6 average number,
7 sample_count number,
8 time_rolled_up date
9 );
Table created.
scott@ORA92I>
scott@ORA92I> insert into curr_t1( info_type, value, time_of_entry )
2 values( 'a', 1, sysdate );
1 row created.
scott@ORA92I> insert into curr_t1( info_type, value, time_of_entry )
2 values( 'b', 7, sysdate );
1 row created.
scott@ORA92I>
scott@ORA92I> commit;
Commit complete.
scott@ORA92I> select * from curr_t1;
INFO_TYPE VALUE TIME_OF_ENTRY
---------- ---------- -------------------
a 1 2004-08-31 08:33:04
b 7 2004-08-31 08:33:04
scott@ORA92I>
scott@ORA92I> insert into daily_rollup_t1( info_type, maximum, minimum, average, sample_count,
2 time_rolled_up )
3 values( 'a', 10, 3, 5, 2, sysdate-1 );
1 row created.
scott@ORA92I> commit;
Commit complete.
scott@ORA92I>
scott@ORA92I> select *
2 from daily_rollup_t1;
INFO_TYPE MAXIMUM MINIMUM AVERAGE SAMPLE_COUNT TIME_ROLLED_UP
---------- ---------- ---------- ---------- ------------ -------------------
a 10 3 5 2 2004-08-30 08:33:04
First table curr_t1 contains the current values corresponding
to the column info_type( one row per unique value of info_type).
Second table daily_rollup_t1 contains the rolled up value of info_type
column.
Now I want to get the average max, min of all daily
rollup values including the value in curr_t1. The degenerate
case would be when there are no records in the daily_rollup_t1
in which case I will get the value in curr_t1 as average, max
min etc. There will be only one value of "info_type" column and the latest
date in the table curr_t1.
In the above data, my output should be:
INFO_TYPE AVERAGE MAXIMUM MINIMUM LATEST_VALUE
----------- ------- ------- ------- ------------
a 3.67 10 1 1
b 7 7 7 7
scott@ORA92I> -- my attempt
scott@ORA92I>
scott@ORA92I> select info_type,
2 sum_value/sum_sample_count as average,
3 maximum,
4 minimum,
5 last_known_value
6 from
7 (
8 select info_type,
9 sum( value ) sum_value,
10 sum( sample_count ) sum_sample_count,
11 max( maximum ) maximum,
12 min( minimum ) minimum,
13 max( last_known_value ) as last_known_value
14 from
15 (
16 select info_type, value as maximum,
17 value as minimum,
18 value as last_known_value,
19 1 as sample_count,
20 value as value
21 from curr_t1
22 union
23 select info_type, max( maximum), min(minimum),
24 max(0) as last_known_value,
25 sum(sample_count),
26 sum(average*sample_count) as value
27 from daily_rollup_t1
28 group by info_type
29 )
30 group by info_type
31 );
INFO_TYPE AVERAGE MAXIMUM MINIMUM LAST_KNOWN_VALUE
---------- ---------- ---------- ---------- ----------------
a 3.66666667 10 1 1
b 7 7 7 7
August 31, 2004 - 1:31 pm UTC
looks ok to me then, use union all instead of union if you can (understand the fundamental differences between union and union all first!)
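The fundamental difference in one example: UNION sorts and removes duplicate rows across the combined result (extra work, and it can silently merge rows you meant to keep), while UNION ALL just appends. A quick sketch (SQLite via Python for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (v INTEGER);
    CREATE TABLE b (v INTEGER);
    INSERT INTO a VALUES (1), (2), (2);
    INSERT INTO b VALUES (2), (3);
""")

# UNION deduplicates across both inputs; UNION ALL keeps every row.
union_rows     = con.execute("SELECT v FROM a UNION SELECT v FROM b ORDER BY v").fetchall()
union_all_rows = con.execute("SELECT v FROM a UNION ALL SELECT v FROM b ORDER BY v").fetchall()

print(union_rows)      # duplicates removed
print(union_all_rows)  # everything kept
```

In the rollup query above, curr_t1 and daily_rollup_t1 rows can never truly be duplicates of each other, so UNION ALL gives the same answer while skipping the distinct-sort.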
thanx!
A reader, August 31, 2004 - 1:59 pm UTC
cumulative resultset
Thiru, September 09, 2004 - 10:45 pm UTC
Hi Tom,
How do we get a cumulative resultset for a column. To be more clear, if I have:
create table t (dt date,id varchar2(6),amt number);
insert into t values('09-SEP-04','ABC',1000);
insert into t values('10-SEP-04','ABC',1000);
insert into t values('13-SEP-04','ABC',1000);
How would we get a result wherein the amt column is summed up for each day and also the days in the month does not have gaps ( assuming we have another table if required for the days of the month)
So the result would be:
dt id cumulative_amt
09-SEP-04 ABC 1000
10-SEP-04 ABC 2000
11-SEP-04 ABC 2000
12-SEP-04 ABC 2000
13-SEP-04 ABC 3000
Do we have to use analytic function for this query?
Thanks a million.
September 10, 2004 - 8:04 am UTC
you need analytics for the running total.
getting the "gap free" stuff will depend on "more information"
a) do we need to partition by ID -- that is, if we insert 'DEF' for sept 12th, are there
- 5 records, if so, then I'm confused
- 10 records
b) if we partition by id do you have version 10g? that has partitioned outer joins
c) if 9i and before, we have to cartesian product and outer join to that.
see:
https://www.oracle.com/technetwork/issue-archive/2014/14-jan/o14asktom-2079690.html
"partitioned outer joins". it demonstrates what you would have to do in 9i and 10g.
select dt, id, sum(amt) over (partition by id order by dt) amt
from t;
is the inline view you would outer join to.
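One way to combine the two pieces - the calendar outer join and the analytic running total - is to outer join first and let the running SUM carry the balance across the gap days. A sketch (SQLite 3.25+ via Python for illustration; the cal table and single-id data are made up, and a real query would also PARTITION BY id):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t   (dt TEXT, id TEXT, amt INTEGER);
    CREATE TABLE cal (dt TEXT);
    INSERT INTO t VALUES ('2004-09-09','ABC',1000),
                         ('2004-09-10','ABC',1000),
                         ('2004-09-13','ABC',1000);
    INSERT INTO cal VALUES ('2004-09-09'), ('2004-09-10'), ('2004-09-11'),
                           ('2004-09-12'), ('2004-09-13');
""")

# Outer join the calendar first, then let the analytic running total
# carry the amount across gap days (missing days contribute 0).
rows = con.execute("""
    SELECT cal.dt,
           SUM(COALESCE(t.amt, 0)) OVER (ORDER BY cal.dt) AS cumulative_amt
    FROM cal LEFT JOIN t ON t.dt = cal.dt
    ORDER BY cal.dt
""").fetchall()
for r in rows:
    print(r)
```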
Thiru, September 10, 2004 - 10:12 am UTC
If the query is for 12-SEP-04, then it should return 4 rows, with the 11th and 12th carrying the same cumulative amount as the 10th, as there are no records for those days. If I have a table with all dates for the month/year, then a join with this second cal_days table would produce the required result. I am running on 9.2 and your query worked fine.
SQL> select dt, id, sum(amt) over (partition by id order by dt) amt
2 from t;
DT ID AMT
--------- ------ ----------
09-SEP-04 ABC 1000
10-SEP-04 ABC 2000
13-SEP-04 ABC 3000
Thiru, September 12, 2004 - 9:58 pm UTC
If I have a table with calendar days, then can I join this table for getting the gaps in days and the cumulative amount of the previous day carried over?
Thanks
September 13, 2004 - 7:33 am UTC
maybe. given the level of detail specified, I suppose anything is possible -- or not.
but in any case -- what does this have even remotely to do with "estimate the size of an IOT"???? (meaning, don't try to clarify the question here -- it isn't even remotely related to the original question)
A reader, September 13, 2004 - 11:42 am UTC
I am sorry but my question did not mention a word about "estimate the size of an IOT".
September 13, 2004 - 2:00 pm UTC
sorry that was my mistake. was looking at the wrong subject when I got that one.
SQL Query, Oracle 9i
Jaravam, November 09, 2004 - 2:29 pm UTC
Hi tom
I am using Oracle 9i. I have a table with column Dept and Empnum. This table will have multiple Empnum for a Dept.
For Example:
Dept Empnum
1 A1
1 A2
1 A3
2 B1
2 B2
3 C1
I need a single query which gives me the output as
Dept Empnum(s)
1 A1,A2,A3
2 B1,B2
3 C1
Can you please help me in this.
Thanks
Jaravam
November 09, 2004 - 3:26 pm UTC
search this site for
stragg
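For reference, the shape of the output stragg produces: one delimited list per group. The sketch below uses SQLite's built-in group_concat via Python purely for illustration; in Oracle you would use the stragg user-defined aggregate discussed on this site (or LISTAGG in later releases). Note that without extra work the concatenation order is not guaranteed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp_dept (dept INTEGER, empnum TEXT);
    INSERT INTO emp_dept VALUES (1,'A1'), (1,'A2'), (1,'A3'),
                                (2,'B1'), (2,'B2'), (3,'C1');
""")

# One comma-separated list of empnums per dept.
rows = con.execute("""
    SELECT dept, group_concat(empnum, ',') AS empnums
    FROM emp_dept
    GROUP BY dept
    ORDER BY dept
""").fetchall()
print(rows)
```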
count the number of rows in the table.
Sean, November 10, 2004 - 9:49 am UTC
Hi Tom,
There are three ways to count the number of rows in a table. It seems there are no performance differences, even for a million-row table with 400 columns. Is that right?
select count(*) from t1;
select count(1) from t1;
select count(primary_key_column) from t1;
Oracle 9204. Solaris 5.9
Thanks so much for your help.
November 10, 2004 - 12:10 pm UTC
there is one correct way
select count(*) from t1;
select count(1) is optimized to be that already
select count(pk) would either be the same or less efficient, not worth testing -- count(*) is right.
Outer Join
A reader, November 15, 2004 - 12:43 pm UTC
Can you please explain diff between following queries?
SELECT c.* FROM scott.cust c, scott.consult co
WHERE c.customer_no = co.customer_no(+) and co.customer_no is null
SELECT c.* FROM scott.cust c, scott.consult co
WHERE c.customer_no = co.customer_no(+)
November 15, 2004 - 8:56 pm UTC
first one is known as an "anti join", probably better written as :
select * from cust
where customer_no not in ( select customer_no from consult );
it shows all rows in cust not in consult.
the other shows all rows in cust and if they have a match in consult, show that too.
Anti Join
A reader, November 16, 2004 - 8:25 am UTC
I executed following query
select * from cust where customer_no not in
(select customer_no from consult)
No rows returned
SELECT c.* FROM scott.cust c, scott.consult co
WHERE c.customer_no = co.customer_no(+) and co.customer_no is null
100000+ rows returned
Shouldn't they return the same number of rows?
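A likely explanation for the discrepancy, left unanswered above: if consult.customer_no contains even one NULL, NOT IN returns no rows at all, because x NOT IN (..., NULL) can never evaluate to true, while the outer-join anti-join (and NOT EXISTS) simply ignores the NULL row. A sketch with made-up data (SQLite via Python for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cust    (customer_no INTEGER);
    CREATE TABLE consult (customer_no INTEGER);
    INSERT INTO cust VALUES (1), (2), (3);
    INSERT INTO consult VALUES (1), (NULL);   -- note the NULL
""")

# NOT IN: the NULL makes "customer_no NOT IN (1, NULL)" UNKNOWN
# for every row, so nothing comes back.
not_in = con.execute("""
    SELECT customer_no FROM cust
    WHERE customer_no NOT IN (SELECT customer_no FROM consult)
""").fetchall()

# NOT EXISTS (like the outer-join anti-join) is unaffected by the NULL row.
not_exists = con.execute("""
    SELECT c.customer_no FROM cust c
    WHERE NOT EXISTS (SELECT 1 FROM consult co
                      WHERE co.customer_no = c.customer_no)
    ORDER BY c.customer_no
""").fetchall()

print(not_in)      # []
print(not_exists)  # [(2,), (3,)]
```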
OK
Esther, November 16, 2004 - 12:27 pm UTC
Hello Tom,
I need a query which calculates the avg(sal) for every three rows. Is there any straight SQL approach available or do we have to go in for analytic functions? Please do reply.
Bye
November 16, 2004 - 1:05 pm UTC
scott@ORA9IR2> select *
2 from ( select trunc(rownum/3-0.1) r, sal
3 from (select sal from emp order by empno)
4 )
5 /
R SAL
---------- ----------
0 800
0 1600
0 1250
1 2975
1 1250
1 2850
2 2450
2 3000
2 5000
3 1500
3 1100
3 950
4 3000
4 1300
14 rows selected.
scott@ORA9IR2>
scott@ORA9IR2> select r, count(*), avg(sal)
2 from ( select trunc(rownum/3-0.1) r, sal
3 from (select sal from emp order by empno)
4 )
5 group by r
6 /
R COUNT(*) AVG(SAL)
---------- ---------- ----------
0 3 1216.66667
1 3 2358.33333
2 3 3483.33333
3 3 1183.33333
4 2 2150
scott@ORA9IR2>
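Tom's trunc(rownum/3-0.1) assigns bucket floor((n-1)/3) to the n-th row. The same grouping written with ROW_NUMBER() and integer division (SQLite via Python for illustration, with a shortened emp table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (empno INTEGER, sal INTEGER);
    INSERT INTO emp VALUES (7369, 800), (7499, 1600), (7521, 1250),
                           (7566, 2975), (7654, 1250), (7698, 2850),
                           (7782, 2450);
""")

# Bucket = (row_number - 1) / 3 using integer division: rows 1-3 get 0,
# rows 4-6 get 1, row 7 gets 2.  Then aggregate per bucket.
rows = con.execute("""
    SELECT r, COUNT(*) AS cnt, AVG(sal) AS avg_sal
    FROM (SELECT (ROW_NUMBER() OVER (ORDER BY empno) - 1) / 3 AS r, sal
          FROM emp)
    GROUP BY r
    ORDER BY r
""").fetchall()
for row in rows:
    print(row)
```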
Smart
A reader, November 16, 2004 - 2:27 pm UTC
Thanks
Esther, November 17, 2004 - 7:25 am UTC
Hello Tom,
Thanks for your reply. I used a query like
sql>select avg(sal) from emp
group by mod(rownum,3)
having mod(rownum,3) = 0;
But this results in only one row. Can this query be modified
to achieve the result as you have done?
Please reply.
Bye!
November 17, 2004 - 10:53 am UTC
but that is not even remotely similar to what I have. redo it the way I did it please.
mod(rownum,3) returns only 0, 1, 2
it will not group 3 records at a time, it'll create 3 groups period.
A reader, December 01, 2004 - 5:52 pm UTC
So far so good. But if I use a ref cursor in Java or Oracle Forms, how would I display the result set? Is it something that the GUI tools have to deal with?
Thanks
December 01, 2004 - 7:59 pm UTC
to java -- it is not any different than a result set? there is nothing to compare it to in java -- it is just a result set and that is what java uses.
in developer (forms), you can build blocks on stored procedures that return result sets as well -- using ref cursors.
Another query
Scott, January 17, 2005 - 2:53 am UTC
Hello Tom,
I have some questions for you. I am planning to delete some records from one of the tables
Table structure is
DESC T
Name Null? Type
---------------- ------- --------
CHANGE_ID NOT NULL VARCHAR2(10)
CLAIM_CLAIM_ID NOT NULL VARCHAR2(10)
TABLE_NAME NOT NULL VARCHAR2(2)
FIELD_NAME NOT NULL VARCHAR2(2)
ACTION NOT NULL VARCHAR2(2)
USER_USER_ID NOT NULL VARCHAR2(10)
CHANGED_DATE NOT NULL DATE
OLD_VALUE VARCHAR2(40)
NEW_VALUE VARCHAR2(40)
I am trying to delete records on the basis of the CHANGED_DATE column. Due to policy we have to keep records for 7 years, so to delete data older than 7 years I wrote the following query, but I think I am doing something wrong here.
Query which I used for counting rows to be deleted was
select count(*) from t where changed_date<(select (sysdate-(365*7)) from dual);
COUNT(*)
----------
16330
Now when I was crosschecking the result I realized that I had done something wrong. I selected min(changed_date) and the result was as under.
select min(to_date(changed_date,'MM-DD-YYYY')) from t;
MIN(TO_DAT
----------
06-12-0001
So what I guess is that my table has data which starts from 12 July 2001, so I can't delete any data as per our company's data-retention policy of 7 years.
So please help me find where I made the mistake. I mean, did I do anything wrong while selecting data older than 7 years from the table? If so, please give me the correct SQL to fetch data older than 7 years.
Thanks,
Scott
January 17, 2005 - 8:28 am UTC
beware implicit conversions.
changed_date is a date.
your default date format probably has YY or RR in it.
your to_date( changed_date, .... ) implicitly converts the changed_date into a string, for to_date takes a STRING.
Consider:
ops$tkyte@ORA9IR2> create table t ( dt date );
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( sysdate );
1 row created.
ops$tkyte@ORA9IR2> select to_char(dt,'dd-mon-yyyy'),
2 to_date(dt,'dd-mon-yyyy'),
3 to_CHAR( to_date(dt,'dd-mon-yyyy') , 'dd-mon-yyyy' )
4 from t;
TO_CHAR(DT, TO_DATE(D TO_CHAR(TO_
----------- --------- -----------
17-jan-2005 17-JAN-05 17-jan-0005
the DT is really 2005, to_date( dt, .... ) converted it using my default date mask (dd-mon-yy) and then we applied YYYY to it -- that is the year "5"
So the DT data is OK, your implicit conversions -- NOT..
Try this:
select to_CHAR( min(changed_date), 'MM-DD-YYYY' ) from t;
and also -- there are not 365 days in a year, there are however 12 months. You might consider:
select count(*)
from t
where changed_date < ADD_MONTHS( sysdate, -7*12 );
instead.
another query
A reader, January 17, 2005 - 4:27 am UTC
Hello TOm,
It's me again. Sorry, but I thought I hadn't provided enough information. I am on 8.1.7, and for my above question:
select sysdate from dual;
sysdate
--------
17-JAN-2005
Thanks,
Scott
Attn :
Parag Jayant Patankar, January 17, 2005 - 9:42 am UTC
Hi Reader,
Regarding your question, Tom is suggesting this error is because of your date format mask (not the data). Ideally you should use the "RRRR" format instead of "YY". It would be better if you posted the complete test case here, including the table creation script, the insert statements, and your deletion script.
So it will be easy for everybody.
regards & thanks
pjp
January 17, 2005 - 10:06 am UTC
it is better to avoid implicit conversions all together! period.
Thanks Tom
Scott, January 17, 2005 - 3:52 pm UTC
Thanks for your help Tom. It was very helpful.
Thanks,
Scott
tricky query can you help
Esg, January 18, 2005 - 6:46 pm UTC
our reports insert a row into the below table before running and after running.
CREATE TABLE testlog (
REPORTNAME CHAR (50) NOT NULL,
STARTEND CHAR (1) NOT NULL,
PARMSTRING CHAR (255) NOT NULL,
TIMESTAMP DATE NOT NULL,
ID CHAR (50) NOT NULL,
SOURCE CHAR (10)
)
/
Before the report runs
insert into testlog values( 'Repname1', 'S' , 'name=ken, phone=234434', sysdate, user, 'US')
After the report runs
insert into testlog values( 'Repname1', 'E' , 'name=ken, phone=234434', sysdate, user, 'US')
We have 100's of reports run by 100's of users.
Now we want to find out which reports are MOST run simultaneously. Our timestamp column
has the time portion stored in the database as well.
Now the challenge is , in the existing data, there is no way we can tie a before report run inserted row to a after report inserted row.
A combination of id and timestamp helps to identify a before-report insert row uniquely, but there is no way we can tie it to the after-report-run row belonging to it.
ID above is not the unique identifier nor primary key but just the user name.
Now what is the best possible way to find out which reports are MOST run simultaneously, i.e. those reports most often found running at the same time as multiple users run them?
January 19, 2005 - 10:14 am UTC
<quote>
Now the challenge is , in the existing data, there is no way we can tie a before
report run inserted row to a after report inserted row.
</quote>
sorry -- not too much anyone could do then.....
should have been:
insert
run report
UPDATE....
that is the way I do it myself.
if you cannot tie the start/stop records together, you cannot get to a "start time", "run time" single record -- hence you cannot get the answer to your question.
query
Menon, January 20, 2005 - 4:23 pm UTC
--first the schema
scott@ORA10GR1> drop table child_table;
Table dropped.
scott@ORA10GR1> drop table parent_table;
Table dropped.
scott@ORA10GR1> drop table data_table;
Table dropped.
scott@ORA10GR1> create table parent_table ( parent_name varchar2(10) primary key );
Table created.
scott@ORA10GR1> create table child_table ( parent_name references parent_table,
2 child_name varchar2(10) );
Table created.
scott@ORA10GR1> create table data_table( parent_or_child_name varchar(10),
2 data1 number,
3 data2 number,
4 data3 number
5 );
Table created.
scott@ORA10GR1>
scott@ORA10GR1> insert into parent_table( parent_name) values( 'p1' );
1 row created.
scott@ORA10GR1> insert into parent_table( parent_name) values( 'p2' );
1 row created.
scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p1', 'c11' );
1 row created.
scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p1', 'c12' );
1 row created.
scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p2', 'c21' );
1 row created.
scott@ORA10GR1> insert into child_table( parent_name, child_name ) values( 'p2', 'c22' );
1 row created.
scott@ORA10GR1>
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'p1', 1, 2, 3);
1 row created.
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c11', 1, 2, 3);
1 row created.
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c12', 1, 4, 3);
1 row created.
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'p2', 1, 2, 3);
1 row created.
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c21', 1, 2, 3);
1 row created.
scott@ORA10GR1> insert into data_table( parent_or_child_name, data1, data2, data3 )
2 values( 'c22', 2, 2, 3);
1 row created.
scott@ORA10GR1> commit;
Commit complete.
scott@ORA10GR1> select * from parent_table;
PARENT_NAM
----------
p1
p2
scott@ORA10GR1> select * from child_table;
PARENT_NAM CHILD_NAME
---------- ----------
p1 c11
p1 c12
p2 c21
p2 c22
scott@ORA10GR1> select * from data_table;
PARENT_OR_ DATA1 DATA2 DATA3
---------- ---------- ---------- ----------
p1 1 2 3
c11 1 2 3
c12 1 4 3
p2 1 2 3
c21 1 2 3
c22 2 2 3
6 rows selected.
parent_table contain parent records.
child_table contain child records.
data_table contains data values for either the parent or
the children. Each row belongs either to the parent or
to one of the children...
The data should be such that for each parent and
child, all data columns (data1, data2 etc.) should
contain the same value. The requirement is to create a report of the violations - e.g. in the above case the
report looks like:
Following data columns don't match for parent and child
values:
parent name: p1, child name: c12 mismatch columns: data2
child name: c21, child name: c22 mismatch columns: data1
Formatting the report as above is not important as
long as we get the answer...
I wrote the following to find those
records in which the parent's data values don't match
the child's... before realizing that each child
also has to be matched against the other child records
for the same parent...
scott@ORA10GR1> select parent_name, child_name,
2 max( decode( parent_data1, child_data1, null, 'data1') ) data1_compare,
3 max( decode( parent_data2, child_data2, null, 'data2')) data2_compare,
4 max( decode( parent_data3, child_data3, null, 'data3')) data3_compare
5 from
6 (
7 select p.parent_name, p.parent_data1, p.parent_data2, p.parent_data3,
8 c.child_name, c.child_data1, c.child_data2, c.child_data3
9 from
10 (
11 select d.parent_or_child_name parent_name,
12 d.data1 parent_data1,
13 d.data2 parent_data2,
14 d.data3 parent_data3
15 from data_table d, parent_table p
16 where d.parent_or_child_name = p.parent_name
17 ) p,
18 (
19 select d.parent_or_child_name child_name,
20 c.parent_name,
21 d.data1 child_data1,
22 d.data2 child_data2,
23 d.data3 child_data3
24 from data_table d, child_table c
25 where d.parent_or_child_name = c.child_name
26 ) c
27 where c.parent_name(+)= p.parent_name
28 )
29 group by parent_name, child_name;
PARENT_NAM CHILD_NAME DATA1 DATA2 DATA3
---------- ---------- ----- ----- -----
p1 c11
p1 c12 data2
p2 c21
p2 c22 data1
Procedural solution should be simple... Is there
a SQL solution you can think of?
Thanx.
January 20, 2005 - 7:37 pm UTC
how does this relate to the original thread?
does not - never mind
Menon, January 20, 2005 - 8:36 pm UTC
thought the original thread was about a query...
OK- I tried it out myself
Menon, January 24, 2005 - 3:07 pm UTC
Used lag() to solve it.
Posting it just in case someone else is interested...
Added parent_name column to the data_table above..
So the three tables now have following data
---
scott@ORA10GR1.US.ORACLE.COM> select * from parent_table;
PARENT_NAM
----------
p1
p2
scott@ORA10GR1.US.ORACLE.COM> select * from child_table;
PARENT_NAM CHILD_NAME
---------- ----------
p1 c11
p1 c12
p2 c21
p2 c22
scott@ORA10GR1.US.ORACLE.COM> select * from data_table;
PARENT_OR_ PARENT_NAM DATA1 DATA2 DATA3
---------- ---------- ---------- ---------- ----------
p1 p1 1 3 3
c11 p1 1 2 3
c12 p1 1 4 3
p2 p2 1 2 3
c21 p2 1 2 3
c22 p2 2 2 3
6 rows selected.
----
Note the new column parent_name in the data_table.
And the select that seems to work is:
scott@ORA10GR1.US.ORACLE.COM> select d.parent_name, d.data1, d.data2, d.data3,
2 case
3 when parent_name = lag_parent_name then
4 decode( data1, lag_data1, null, 'data1' )
5 end data1_diff,
6 case
7 when parent_name = lag_parent_name then
8 decode( data2, lag_data2, null, 'data2' )
9 end data2_diff,
10 case
11 when parent_name = lag_parent_name then
12 decode( data3, lag_data3, null, 'data3' )
13 end data3_diff
14 from
15 (
16 select d.*,
17 lag( parent_name ) over( order by parent_name) lag_parent_name,
18 lag( data1 ) over( order by parent_name) lag_data1,
19 lag( data2 ) over( order by parent_name) lag_data2,
20 lag( data3 ) over( order by parent_name) lag_data3
21 from
22 (
23 select *
24 from data_table
25 ) d
26 ) d;
PARENT_NAM DATA1 DATA2 DATA3 DATA1_DIFF DATA2_DIFF DATA3_DIFF
---------- ---------- ---------- ---------- ---------- ---------- ----------
p1 1 3 3
p1 1 2 3 data2
p1 1 4 3 data2
p2 1 2 3
p2 1 2 3
p2 2 2 3 data1
----------
Once again - sorry to have posted it on a wrong
thread earlier. Sometimes it can get confusing (e.g. many
of the follow ups in the above thread did not
*seem* related to the original thread when I glanced
through them earlier.)
Of course, in many cases you answer stuff just
because it is easy to do that instead of saying
anything else:) And of course I appreciate
your site regardless anyways...
how to write this in single query
A reader, January 25, 2005 - 5:12 pm UTC
Hi
I have a data like this
C1 D1
---- ----
1 1
1 23
2 1
3 23
I need to write a query which returns C1 with D1 values 1 and 23; D1 can only return 23 if it also has the value 1. So the query run against the above data should return
C1 D1
---- ----
1 1
1 23
3 23
It's like the opposite of
select * from X b
where D1 = 1
and not exists (select null from X a where a.c1 = b.c1 and a.d1 = 23)
How can you write that in single query?
Thank you
January 25, 2005 - 7:18 pm UTC
that doesn't make sense, I've no idea why 2, 1 didn't show up?
or if i can figure out why 2,1 didn't show up, then I cannot figure out how 3,23 did.
got it
A reader, January 25, 2005 - 5:19 pm UTC
Hi I got it
select * from X b
where D1 in (1, 23)
and exists (select null from X a where a.c1 = b.c1 and a.d1 = 23)
cheers
using inline views
Thiru, January 28, 2005 - 4:56 pm UTC
Tom,
Trying to get to a query for the following using inline views. Shall appreciate your help.
drop table test_a;
drop table test_b;
drop table test_c;
create table test_a (snum number, status varchar2(20));
create table test_b(snum number, value number,details varchar2(20));
create table test_c( snum number);
insert into test_b values(100,1000,'testing100');
insert into test_b values(200,2000,'testing200');
insert into test_b values(300,3000,'testing300');
insert into test_b values(400,4000,'testing400');
insert into test_b values(500,5000,'testing500');
insert into test_c values(100);
insert into test_c values(200);
insert into test_c values(300);
insert into test_a values(100,'err100');
insert into test_a values(999,'err999');
select * from test_a;
SNUM STATUS
---------- --------------------
100 err100
999 err999
select * from test_b;
SNUM VALUE DETAILS
---------- ---------- --------------------
100 1000 testing100
200 2000 testing200
300 3000 testing300
400 4000 testing400
500 5000 testing500
select * from test_c;
SNUM
----------
100
200
300
how to write a query for the following required result, with the
condition: the snum column equates in all three tables.
for the above data:
cnt(from test_b) cnt(from test_a) sum(test_b.value)
3 1 6000
January 28, 2005 - 7:24 pm UTC
not sure what you mean. "snum column equates in all the three tables"
given that -- I cannot rectify "count in test_b = 3".
It *appears* that you want to take the values in test_c, and count the rows in test_A that have that snum, count the rows in test_b that have that snum, and sum up the value column in test_b that have that value.
select *
from (select count(*) from test_a where snum in (select * from test_c)),
(select count(*), sum(value) from test_b where snum in (select * from test_c))
sorry missed an imp point
A reader, January 28, 2005 - 5:17 pm UTC
Here is what is required:
drop table test_a;
drop table test_b;
drop table test_c;
create table test_a (id number,snum number, status varchar2(20));
create table test_b(id number,snum number, value number,details varchar2(20));
create table test_c( id number,snum number);
insert into test_b values(1,100,1000,'testing100');
insert into test_b values(1,200,2000,'testing200');
insert into test_b values(2,300,3000,'testing300');
insert into test_b values(2,400,4000,'testing400');
insert into test_b values(3,500,5000,'testing500');
insert into test_c values(1,100);
insert into test_c values(2,200);
insert into test_c values(2,300);
insert into test_a values(1,100,'err100');
insert into test_a values(4,999,'err999');
SQL> select * from test_a;
ID SNUM STATUS
---------- ---------- --------------------
1 100 err100
4 999 err999
SQL> select * from test_b;
ID SNUM VALUE DETAILS
---------- ---------- ---------- --------------------
1 100 1000 testing100
1 200 2000 testing200
2 300 3000 testing300
2 400 4000 testing400
3 500 5000 testing500
SQL> select * from test_c;
ID SNUM
---------- ----------
1 100
2 200
2 300
how to write a query for the following required result, with the
condition: the snum and id columns equate in all three tables.
for the above data (and id and snum found in test_c)
id cnt(from test_b) cnt(from test_a) sum(test_b.value)
1 1 1 1000
2 2 0 7000
January 28, 2005 - 7:26 pm UTC
well, given my small example above, can you extend it to do this..... you can join those two inline views together (i did, a cartesian join...)
add the relevant columns to the inline views and join them.
OK
James, January 29, 2005 - 12:52 am UTC
Hi Tom,
Does any direct SQL statement exist to retrieve data from
a table with the sys.anydata type?
Also provide useful inputs for using sys.anydataset
SQL> create table t(x sys.anydata)
2 /
Table created.
SQL> insert into t values(sys.anydata.convertNumber(100))
2 /
1 row created.
SQL> insert into t values(sys.anydata.convertVarchar2('Hello World'))
2 /
1 row created.
SQL> insert into t values(sys.anydata.ConvertDate(sysdate))
2 /
1 row created.
SQL> commit;
Commit complete.
SQL> select * from t
2 /
X()
--------------------------------------------------------------------------------
ANYDATA()
ANYDATA()
ANYDATA()
How do I make up records using previous record details
Ali, January 29, 2005 - 3:59 am UTC
Database version: 9.2
CREATE TABLE t_history(seq_no number, item_code varchar2(10), doc_date date, doc_no number, qty_bal number, rate number);
INSERT INTO t_history values(101, 'A', to_date('24-DEC-04'), 90, 5, 10.25);
INSERT INTO t_history values(201, 'A', to_date('04-JAN-05'), 101, 10, 10.25);
INSERT INTO t_history values(202, 'A', to_date('14-JAN-05'), 102, 12, 11.5);
INSERT INTO t_history values(203, 'A', to_date('24-JAN-05'), 103, 13, 11.25);
INSERT INTO t_history values(204, 'A', to_date('24-JAN-05'), 104, 11, 11.25);
INSERT INTO t_history values(111, 'B', to_date('30-DEC-04'), 97, 12, 1.75);
INSERT INTO t_history values(221, 'C', to_date('25-JAN-05'), 111, 1, 127);
INSERT INTO t_history values(222, 'C', to_date('27-JAN-05'), 112, 2, 130);
INSERT INTO t_history values(223, 'C', to_date('28-JAN-05'), 113, 1, 130);
DESIRED OUTPUT (start date = 01-Nov-04, end date = current date)
========================================================================
ITEM DATE QTY RATE
-------------------------------------
A 24-DEC-04 5 10.25
A 25-DEC-04 5 10.25
.....................................
.....................................
A 04-JAN-05 10 10.25
A 05-JAN-05 10 10.25
.....................................
.....................................
A 24-JAN-05 11 11.25
.....................................
.....................................
A 29-JAN-05 11 11.25
B 30-DEC-04 12 1.75
B 31-DEC-04 12 1.75
.....................................
.....................................
B 29-JAN-05 12 1.75
C 25-JAN-05 1 127
.....................................
.....................................
C 29-JAN-05 1 130
Thanks for your time.
January 29, 2005 - 8:51 am UTC
You need a result set with all of the dates...
You need a result set with all of the item_codes....
We need to cartesian product them together.... (get all dt/item codes)
then we can outer join with that to pick up the observed qty/rates...
then we can carry down. In 9iR2 this would look like this:
ops$tkyte@ORA9IR2> with dates
2 as
3 (
4 select to_date('01-nov-2004','dd-mon-yyyy')+rownum-1 dt
5 from all_objects
6 where rownum <= (sysdate-to_date('01-nov-2004','dd-mon-yyyy'))+1
7 ),
8 items
9 as
10 (
11 select distinct item_code
12 from t_history
13 ),
14 items_dates
15 as
16 (select * from dates, items )
17 select *
18 from (
19 select dt, item_code,
20 to_number( substr( max( rn || qty_bal ) over
(partition by item_code order by dt),11)) qty_bal,
21 to_number( substr( max( rn || rate )
over (partition by item_code order by dt),11)) rate
22 from (
23 select a.*, b.qty_bal, b.rate,
24 case when b.qty_bal is not null
25 then to_char(row_number()
over (partition by a.item_code order by a.dt),'fm0000000009')
26 end rn
27 from items_dates a left join t_history b
on (a.dt = b.doc_date and a.item_code = b.item_code)
28 )
29 )
30 where qty_bal is not null or rate is not null
31 order by item_code, dt
32 /
DT ITEM_CODE QTY_BAL RATE
--------- ---------- ---------- ----------
24-DEC-04 A 5 10.25
25-DEC-04 A 5 10.25
26-DEC-04 A 5 10.25
27-DEC-04 A 5 10.25
28-DEC-04 A 5 10.25
29-DEC-04 A 5 10.25
30-DEC-04 A 5 10.25
31-DEC-04 A 5 10.25
01-JAN-05 A 5 10.25
.......
In 10g, this would be simplified to:
ops$tkyte@ORA10G> with dates
2 as
3 (
4 select to_date('01-nov-2004','dd-mon-yyyy')+rownum-1 dt
5 from all_objects
6 where rownum <= (sysdate-to_date('01-nov-2004','dd-mon-yyyy'))+1
7 )
8 select *
9 from (
10 select b.dt, a.item_code,
11 last_value( a.qty_bal ignore nulls) over (partition by item_code order by dt) qty,
12 last_value( a.rate ignore nulls) over (partition by item_code order by dt) rate
13 from t_history a partition by (item_code) right join dates b on (b.dt = a.doc_date)
14 )
15 where qty is not null or rate is not null
16 order by item_code, dt
17 /
DT ITEM_CODE QTY RATE
--------- ---------- ---------- ----------
24-DEC-04 A 5 10.25
25-DEC-04 A 5 10.25
26-DEC-04 A 5 10.25
27-DEC-04 A 5 10.25
28-DEC-04 A 5 10.25
29-DEC-04 A 5 10.25
30-DEC-04 A 5 10.25
31-DEC-04 A 5 10.25
01-JAN-05 A 5 10.25
02-JAN-05 A 5 10.25
03-JAN-05 A 5 10.25
04-JAN-05 A 10 10.25
05-JAN-05 A 10 10.25
06-JAN-05 A 10 10.25
07-JAN-05 A 10 10.25
08-JAN-05 A 10 10.25
....
due to the addition of
a) partitioned outer joins
b) ignore nulls in analytics.
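The "carry down" step -- last_value(... ignore nulls) over (partition by item order by dt) -- amounts to a per-partition forward fill over the densified item/date grid. A minimal sketch in plain Python (dates and items abbreviated, data invented for illustration):

```python
from datetime import date, timedelta

# Observed (item, date) -> (qty, rate): a small slice of t_history.
history = {
    ("A", date(2004, 12, 24)): (5, 10.25),
    ("A", date(2005, 1, 4)): (10, 10.25),
    ("B", date(2004, 12, 30)): (12, 1.75),
}
items = sorted({k[0] for k in history})
start, end = date(2004, 12, 24), date(2005, 1, 5)
days = [start + timedelta(d) for d in range((end - start).days + 1)]

# Cartesian product of items x dates, outer-joined to the observations,
# then carry the last non-null value down within each item partition.
result = []
for item in items:                       # "partition by item_code"
    last = None                          # state for "ignore nulls"
    for dt in days:                      # "order by dt"
        obs = history.get((item, dt))
        if obs is not None:
            last = obs
        if last is not None:             # mimic "where qty is not null"
            result.append((item, dt, last[0], last[1]))

print(result[0])   # ('A', datetime.date(2004, 12, 24), 5, 10.25)
print(result[11])  # ('A', datetime.date(2005, 1, 4), 10, 10.25)
```

Item A carries 5/10.25 forward until the 04-JAN-05 observation replaces it; item B produces no rows before its first observation, just as the "qty is not null" filter discards them in the SQL versions.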
Thanks Tom, but ...
Ali, January 30, 2005 - 8:03 am UTC
If more than 1 record exists for a day then instead of using:
....
from items_dates a left join t_history b
on (a.dt = b.doc_date and a.item_code = b.item_code)
....
I have to use:
......
from items_dates a left join (
SELECT * FROM (
SELECT seq_no, item_code, doc_date, qty_bal, rate,
row_number() over (partition by item_code, doc_date order by seq_no desc) rr
FROM t_history
) WHERE rr = 1
)
b
on (a.dt = b.doc_date and a.item_code = b.item_code)
......
right? Or is there a better way?
Ali
January 30, 2005 - 9:39 am UTC
no, why? why wouldn't you report on all records? is there more information missing from the question?
if you want just the "highest sequence by day", what you have there is just fine though (sort of a missing requirement in the original question)
query for the inline view issue above
Thiru, January 31, 2005 - 10:36 am UTC
Tom,
With the help of the inline view example, I wrote this query.
select b.id,b.cnt,b.sum,a.cnt from
(select id,count(*) cnt, sum(value) sum
from test_b where id||snum in (select id||snum from test_c) group by id)b
left outer join
(select id,count(*) cnt from test_a where id||snum in (select id||snum from test_c) group by id)a on( a.id=b.id );
ID CNT SUM CNT
---------- ---------- ---------- ----------
1 1 1000 1
2 2 7000
The result looks good. Is id||snum an appropriate way to use in the where clause, or is it better to compare the two columns separately? Is there any other way of writing this query? Thanks for the time.
January 31, 2005 - 11:33 am UTC
where (id, snum) in ( select id, snum from ....
think about id=5, snum = 55 and id = 55, snum = 5 for example.....
just use a column list.
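Tom's id=5/snum=55 warning is easy to reproduce: both (5, 55) and (55, 5) concatenate to '555', so the concatenated IN-list matches a pair that is not really there. A quick sketch (SQLite as a stand-in; it supports row-value IN lists in builds 3.15 and later; table names invented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table parent (id integer, snum integer)")
conn.execute("create table child  (id integer, snum integer)")
conn.execute("insert into parent values (5, 55)")
conn.execute("insert into child  values (55, 5)")   # a *different* pair

# Concatenated match: '5'||'55' = '555' = '55'||'5' -- a false positive.
bad = conn.execute("""
    select count(*) from parent
    where id || snum in (select id || snum from child)
""").fetchone()[0]

# Column-list match: compares the pair as a pair -- correct.
good = conn.execute("""
    select count(*) from parent
    where (id, snum) in (select id, snum from child)
""").fetchone()[0]

print(bad, good)  # 1 0
```

A separator that cannot appear in the data (id || '/' || snum) also avoids the collision, but the column list is the clean way.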
Thiru, January 31, 2005 - 4:05 pm UTC
Tom,
The undernoted query has a defect (actually related to the queries I thought had solved the issue). The second inline view actually returns only a count of 1 for the condition, but in the cartesian product it shows up in all the rows. How do I avoid this?
select a.id,a.cnt,a.sum,b.cnt from
(select id, count(*) cnt ,sum(value) sum from test_b where(id,snum) in
(select id,snum from test_c) group by id) a,
(select count(*) cnt from test_a where(id,snum) in
(select id,snum from test_c)) b;
ID CNT SUM CNT
---------- ---------- ---------- ----------
1 1 1000 1
2 2 7000 1
Thanks
January 31, 2005 - 4:15 pm UTC
sorry, i don't know what the "question" is really.
You have this query:
(select id, count(*) cnt ,sum(value) sum
from test_b where(id,snum) in (select id,snum from test_c)
group by id) a
counts and sums by ID
You have this one row query:
(select count(*) cnt
from test_a
where(id,snum) in (select id,snum from test_c)) b
it is just a count, it will be assigned to every row.
I don't know what else you would "do" with it other than give it to every row, it doesn't "belong" to any single row (no keys)
Thiru, January 31, 2005 - 4:55 pm UTC
So with the given conditions, the following query was written to get the result. Will this turn out very complicated (provided it is the only way to get the result) if the number of rows in test_c is huge (in the millions)?
select c.id,sum(c.cnt1),sum(c.sum),sum(c.cnt) from
(
select a.id id,a.cnt cnt1 ,a.sum sum ,b.cnt cnt from
(select id, count(*) cnt ,sum(value) sum from test_b where(id,snum) in
(select id,snum from test_c) group by id) a
left outer join
(select id,count(*) cnt from test_a where(id,snum) in
(select id,snum from test_c) group by id) b
on b.id=a.id) c
group by c.id ;
ID SUM(C.CNT1) SUM(C.SUM) SUM(C.CNT)
--------- ----------- ---------- ----------
1 1 1000 1
2 2 7000
January 31, 2005 - 5:55 pm UTC
millions are ok, it'll do the right speedy thing. big bulk hash joins.
Cursor
Anil, February 02, 2005 - 11:13 am UTC
Hi Tom
Regarding the cursor query you mentioned in the thread
select mgr, cursor( select ename from emp emp2 where emp2.mgr = emp.mgr )
2* from ( select distinct mgr from emp ) emp
I thought this query would be good for fetching a master and its child records when executed from a middle tier, in terms of the total number of bytes transferred and the number of round trips compared to a normal join.
But the result was exactly the opposite.
@NGDEV1-SQL> select a.atp_id,AC_TYP_INHOUSE,AC_TYP_DESC,AC_TYP,
2 cursor(select deck,deck_desc from MST_AC_TYP_DECK b where b.atp_id=a.atp_id) Deck_Info,
3 cursor(select period_frm,period_to,brd_pnt,off_pnt from MST_AC_TYP_EXCP c where c.atp_id=a.atp_id) Exception
4 from mst_ac_typ a where a.atp_id=3
5 /
ATP_ID AC_TY AC_TYP_DESC AC_TYP DECK_INFO EXCEPTION
---------- ----- -------------------------------------------------- ---------- -------------------- --------------------
3 33H AIRBUS A330-300 CURSOR STATEMENT : 5 CURSOR STATEMENT : 6
CURSOR STATEMENT : 5
DECK DECK_DESC
---------- --------------------------------------------------
LOWERDECK
LD TEST
MD MIDDLE DECK
CURSOR STATEMENT : 6
PERIOD_FR PERIOD_TO BRD_P OFF_P
--------- --------- ----- -----
26-JAN-05 27-JAN-05
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=17)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_DECK' (TABLE) (Cost=1 Card=2 Bytes=36)
2 1 INDEX (RANGE SCAN) OF 'ATD_ATP_FK_I' (INDEX) (Cost=1 Card=2)
3 0 TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_EXCP' (TABLE) (Cost=1 Card=1 Bytes=26)
4 3 INDEX (RANGE SCAN) OF 'ATE_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
5 0 TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP' (TABLE) (Cost=1 Card=1 Bytes=17)
6 5 INDEX (UNIQUE SCAN) OF 'ATP_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
Statistics
----------------------------------------------------------
12 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
1826 bytes sent via SQL*Net to client
1211 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
NGCS_DEV@NGDEV1-SQL> select a.atp_id,AC_TYP_INHOUSE,AC_TYP_DESC,AC_TYP,
deck,deck_desc,period_frm,period_to,brd_pnt,off_pnt from MST_AC_TYP_DECK b, MST_AC_TYP_EXCP c,mst_ac_typ a
where b.atp_id=a.atp_id
and c.atp_id=a.atp_id
and a.atp_id=3; 2 3 4 5
ATP_ID AC_TY AC_TYP_DESC AC_TYP DECK
---------- ----- -------------------------------------------------- ---------- ----------
DECK_DESC PERIOD_FR PERIOD_TO BRD_P OFF_P
-------------------------------------------------- --------- --------- ----- -----
3 33H AIRBUS A330-300 LOWERDECK
26-JAN-05 27-JAN-05
3 33H AIRBUS A330-300 LD
TEST 26-JAN-05 27-JAN-05
3 33H AIRBUS A330-300 MD
MIDDLE DECK 26-JAN-05 27-JAN-05
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=2 Bytes=122)
1 0 MERGE JOIN (CARTESIAN) (Cost=3 Card=2 Bytes=122)
2 1 NESTED LOOPS (Cost=2 Card=1 Bytes=43)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP' (TABLE) (Cost=1 Card=1 Bytes=17)
4 3 INDEX (UNIQUE SCAN) OF 'ATP_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
5 2 TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_EXCP' (TABLE) (Cost=1 Card=1 Bytes=26)
6 5 INDEX (RANGE SCAN) OF 'ATE_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
7 1 BUFFER (SORT) (Cost=2 Card=2 Bytes=36)
8 7 TABLE ACCESS (BY INDEX ROWID) OF 'MST_AC_TYP_DECK' (TABLE) (Cost=1 Card=2 Bytes=36)
9 8 INDEX (RANGE SCAN) OF 'ATD_ATP_FK_I' (INDEX) (Cost=0 Card=2)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
1093 bytes sent via SQL*Net to client
511 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3 rows processed
NGCS_DEV@NGDEV1-SQL> /
In the above example my expectation was fewer bytes transferred, since the master data is not repeating. The additional recursive calls are understandable since Oracle has to do something to produce the output in object form.
Is this not a good way to execute queries?
Would you please explain?
Rgds
Anil
February 03, 2005 - 12:49 am UTC
your thought process was a little off. first, JOINS ARE NOT EVIL; in an rdbms, in fact, JOINS ARE WHAT we do!
second, you'll increase the round trips -- each cursor will "go back", each row with a cursor will "go back"
each cursor will have "some overhead"
joining is "good", just join
efficient way to get counts
Thriu, February 02, 2005 - 3:57 pm UTC
Hi Tom,
I had submitted this earlier, but it looks like it did not go through (some problem with my browser), so I am submitting it again.
What is an efficient way to get the counts from a table (around 10 million rows) based on varied conditions?
The actual table has around 50 columns.
drop table temp_del;
create table temp_del (c1 varchar2(3),c2 varchar2(3),c3 varchar2(3),c4 varchar2(3),flag number);
insert into temp_del values('abc','bcd','cde','def',0);
insert into temp_del values('abc','bcd','cde','def',1);
insert into temp_del values('abc','bcd','cde','def',2);
insert into temp_del values('bcd','cde','def','efg',0);
insert into temp_del values('bcd','cde','def','efg',1);
insert into temp_del values('bcd','cde','def','efg',2);
insert into temp_del values('cde','def','efg','fgh',0);
insert into temp_del values('cde','def','efg','fgh',1);
insert into temp_del values('cde','def','efg','fgh',2);
commit;
select count(*) from temp_del where c1='abc' and c2='bcd' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and flag=1;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and flag=0;
select count(*) from temp_del where c1='abc' and c2='bcd' and c3='efg' and flag=1;
select count(*) from temp_del where c1='bcd' and c2='cde' and c3='def' and flag=2;
and so many other combinations similar to this..
Is there a way the table can be accessed once and get the varied counts like above?
Count in one Query
Anil Shafeeque, February 02, 2005 - 11:04 pm UTC
Hi
Will this query do that job,
select sum(case when carr_code= 'PP' and vol_unit = 'LTRE' then 1 else 0 end) LTRCOUNT,
sum(case when carr_code= 'PP' and vol_unit is null then 1 else 0 end) NONLTRCOUNT,
sum(case vol_unit when 'LTRE' then 1 else 0 end) C2,
count(*) TOt from TEST_TABLE
LTRCOUNT NONLTRCOUNT C2 TOT
---------- ---------- ---------- ----------
3 4 3 7
Rgds
Anil
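Anil's single-pass technique -- one CASE per condition, summed -- works exactly as shown. A small sketch with invented sample data (SQLite as a stand-in for Oracle; the column names come from his query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table test_table (carr_code text, vol_unit text)")
conn.executemany("insert into test_table values (?, ?)", [
    ("PP", "LTRE"), ("PP", "LTRE"), ("PP", "LTRE"),
    ("PP", None), ("PP", None), ("PP", None), ("PP", None),
])

# One full scan: each row is read once and every CASE is evaluated against it,
# so N different counts cost a single pass instead of N separate queries.
row = conn.execute("""
    select sum(case when carr_code = 'PP' and vol_unit = 'LTRE' then 1 else 0 end),
           sum(case when carr_code = 'PP' and vol_unit is null  then 1 else 0 end),
           sum(case vol_unit when 'LTRE' then 1 else 0 end),
           count(*)
    from test_table
""").fetchone()
print(row)  # (3, 4, 3, 7)
```

The data above was chosen to reproduce the (3, 4, 3, 7) result Anil posted.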
What about this query?
Totu, February 03, 2005 - 1:25 am UTC
I have table x and y.
Rows in x:
x y
----------- -----------
8 3
45 2
10 3
43 1
Rows in y:
x y
----------- -----------
8 3
10 2
43 3
I want a query that returns records from table x where the two tables' x column values are different but the y column values are the same:
select x.* from x, y
where x.x <> y.x and x.y = y.y
It returns:
X Y
---------- ----------
45 2
10 3
8 3
10 3
I expected the query to return the second row from table x.
Thanks in advance.
February 03, 2005 - 1:51 am UTC
if only i had a create table and some insert intos, i could play with this one....
Difference
Vikas Khanna, February 03, 2005 - 8:41 am UTC
Hi Tom,
Is there any difference between two statements:
Where A is NOT NULL Vs NOT A is NULL, from performance point of view.
Regards,
February 03, 2005 - 2:17 pm UTC
easy enough to test (so i wonder why after seeing lots of tests here, people don't set them up?)
short answer, no.
test below....
ops$tkyte@ORA9IR2> create table t as select decode(rownum,0,0,null) x, rownum y, decode(mod(rownum,2), 1, rownum, null ) z from big_table.big_table;
Table created.
ops$tkyte@ORA9IR2> @trace
Session altered.
ops$tkyte@ORA9IR2> select count(*) from t where x is not null;
COUNT(*)
----------
0
ops$tkyte@ORA9IR2> select count(*) from t where NOT x is null;
COUNT(*)
----------
0
ops$tkyte@ORA9IR2> select count(*) from t where y is not null;
COUNT(*)
----------
9000000
ops$tkyte@ORA9IR2> select count(*) from t where NOT y is null;
COUNT(*)
----------
9000000
ops$tkyte@ORA9IR2> select count(*) from t where z is not null;
COUNT(*)
----------
4500000
ops$tkyte@ORA9IR2> select count(*) from t where NOT z is null;
COUNT(*)
----------
4500000
ops$tkyte@ORA9IR2>
select count(*) from t where x is not null
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 1.14 1.14 18466 18477 0 1
select count(*) from t where NOT x is null
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 1.13 1.16 18466 18477 0 1
select count(*) from t where y is not null
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 13.99 13.69 18466 18477 0 1
select count(*) from t where NOT y is null
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 13.89 13.60 18466 18477 0 1
select count(*) from t where z is not null
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 7.65 7.51 18466 18477 0 1
select count(*) from t where NOT z is null
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 7.63 7.68 18466 18477 0 1
in response to Totu
A reader, February 03, 2005 - 10:02 am UTC
Those results are what I would expect.
Change your query to:
select x.*,y.*
from x, y
where x.y = y.y
order by 1,3
and you will see how the joins are being done with x.y = y.y, then it will be quite obvious with the added condition of x.x <> y.x that you get the results you did.
I think you are overlooking the fact that you are joining multiple rows in X to multiple rows in Y.
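The multiplicity that reader describes is easy to see by just running the join on Totu's data (SQLite as a stand-in): x(10,3) pairs with both y(8,3) and y(43,3), so it comes back twice.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table x (x integer, y integer)")
conn.execute("create table y (x integer, y integer)")
conn.executemany("insert into x values (?, ?)",
                 [(8, 3), (45, 2), (10, 3), (43, 1)])
conn.executemany("insert into y values (?, ?)",
                 [(8, 3), (10, 2), (43, 3)])

# The original join: every x row pairs with EVERY y row sharing the same
# y value, minus the pairs where x.x = y.x -- hence four rows, not one.
rows = conn.execute("""
    select x.* from x, y
    where x.x <> y.x and x.y = y.y
    order by x.x, x.y
""").fetchall()
print(rows)  # [(8, 3), (10, 3), (10, 3), (45, 2)]
```

Getting only (45, 2) would need an additional rule (for instance, a NOT EXISTS excluding x values that also appear in y.x), but as Tom notes the original question never pinned that rule down.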
Cursor vs Join
Anil, February 04, 2005 - 1:33 am UTC
Hi Tom
<<first JOINS ARE NOT EVIL, in a rdbms in
fact, JOINS ARE WHAT we do!
>>
I am always convinced that joins are not evil. Here I was thinking of the amount of data transferred to the Java client; I thought this would be reduced since the master table data is not repeating. It also reduces the coding in Java since they get the data as objects. But now I am convinced that they can do the extra coding to get the performance. So we are going for a plain join.
Rgds
Anil
February 04, 2005 - 11:34 am UTC
but unless you are pulling TONS of data with each cursor, the overhead of being a cursor and the roundtrips involved in being a cursor are going to overwhelm any reduction.
here the volumes of data are just "too small", we are down in small "bytes", nothing to optimize there.
Efficient way to get counts
Thiru, February 04, 2005 - 11:24 am UTC
Hi Tom,
In my earlier issue with the same Title, I had given a case for varied count conditons from a single table and a response was given by Anil. My question is:
a) Is each row read once for each case statement, or will all case statements be evaluated for each row to grab the counts? So if there are, say, 10 case statements, then hopefully Oracle goes through all of these cases for every row of the table in one FTS (or according to the conditions). Thanks for clarifying.
b)Is the query the only way to write with one single access to the table data?
Thanks again.
February 04, 2005 - 12:03 pm UTC
select sum(case when carr_code= 'PP' and vol_unit = 'LTRE' then 1 else 0 end)
LTRCOUNT,
sum(case when carr_code= 'PP' and vol_unit is null then 1 else 0 end)
NONLTRCOUNT,
sum(case vol_unit when 'LTRE' then 1 else 0 end) C2,
count(*) TOt from TEST_TABLE
that'll read each row in test_table once and sum the results of the case statements applied to that row.
you could use decode if you like.
Thanks so much..
A reader, February 04, 2005 - 12:17 pm UTC
demo_pkg.get_query
A reader, February 11, 2005 - 3:37 am UTC
I used the package and got the output I wanted, thanks for that. I tried to alter the column heading to just Dd-Mon, but my version of the altered package compiles with errors. Also, can I extend the where clause to filter some data here?
15-JUL 16-JUL 17-JUL 18-JUL
Ajay 500 0 100 0
Bhavani 700 0 0 0
Chakri 0 0 200 0
Also, can we get data like, for example, all Tuesdays of the two months?
thanks
February 11, 2005 - 8:05 pm UTC
don't know what to say here -- other than "yes, you can filter"
you can make this code do whatever you want.
but I cannot help you with a compilation error, without actually seeing anything
demo_pkg.get_query
A reader, February 12, 2005 - 12:56 am UTC
u r rt tom,
i should have give the code for you to see the full picture
the output i expected is something like
nameid cfg cabtype 01-Jan 02-Jan 03-Jan
---------------------------------------
abc 2/2 h1 20 19.56 22
xbc 1/2 h2 21 17.46 12
ybc 4/2 h1 22 14.66 22
zbc 2/1 h1 16 17.76 23
1 create or replace package crstb_tch
2 as
3 type rc is ref cursor;
4 procedure get_query( p_cursor in out rc, p_start date, p_end date );
5 end;
6 /
7 create or replace package body crstb_tch
8 as
9 procedure get_query( p_cursor in out rc, p_start date, p_end date )
10 is
11 l_query long := 'select Nameid, cfg, cabtype ';
12 begin
13 for i in 1 .. trunc(p_end)-trunc(p_start)+1
14 loop
15 l_query := l_query || ', min( decode( trunc(rd_dt), ' ||
16 'to_date( ''' || to_char(p_start+i-1,'dd-mon') ||
17 ''', ''dd-mon'' ), tch, 0 )) "' || to_char(p_start+i-1,'dd-mon') || '"';
18 end loop;
19 l_query := l_query || ' from jttbl ' ||
20 'where tch > 10 and bsc like '%Bangalore%' ||
21 'group by nameid, cfg, cabtype ' ||
22 'order by nameid' ;
23 open p_cursor for l_query;
24 end;
25* end;
26 /
Warning: Package created with compilation errors.
Also, as a further extension, how do I modify the code to ask for only Tuesdays (or Wednesdays) of a month (or two months)?
thanks in advance
February 12, 2005 - 12:37 pm UTC
"u r rt tom,"
w h a t t h e h e c k i s t h a t
5 end;
6 /
7 create or replace package body crstb_tch
8 as
s h o u l d b e a r e d f l a g t o y o u.
it put the spec and body together there, seems that / is not submitting the code, fix that.
you probably did a "get" which puts a file into the buffer and then ran it. Use "start filename" or @filename, not get....
a n d p l e a s e, u s e r e a l w o r d s. I a m p r e t t y s u r e t h i s h a r d t o r e a d f o r y o u.
demo_pkg_get_query
A Reader,, February 17, 2005 - 12:09 am UTC
Hi TOM,
what the heck is wrong with this
"u r rt tom,"
you are having a complex. period so you cannot read it.,
if some thing was a red flag or green or whatever color i would not be visiting this page.
your answer did not leave me any wiser in oracle or in english
February 17, 2005 - 7:51 am UTC
sorry, I could not read this followup at all. It is gibberish as well.
your keyboard must be in a state of failure, maybe that is why the "/" went wrong (with lots of other letters)?
sorry if you cannot take a joke and get the point that in professional communication you should actually use full words. Do u use ths spk in ur resume? (I certainly hope not)
I did point out what you did wrong -- and:
5 end;
6 /
7 create or replace package body crstb_tch
8 as
should have been what we call a "red flag" -- sqlplus was treating the spec and body as ONE piece of code, and I guessed it was most likely due to "using get" instead of the standard @file or "start file"
and if you are not any wiser in Oracle -- that is not my failing here. I pointed out your issue (you used get) -- without even SEEING your command. I told you how to do it right (use @file or "start file"). I did that in normal text, nothing relevant to answering the question was obscured using techniques you used to write to me in the first place.
And this isn't "about english", this would apply in ANY language. In your native language, do you take proper representations of words and make up new ones to communicate to your neighbors?
on a instant message session, perhaps (even then, 1/2 the times I have to ask "what does FDAGFA mean exactly?")....
on a phone SMS message to a friend perhaps.....
but in written professional communication, nope, unacceptable (not to mention hard to read isn't it)
Answer To the impolite "Reader"
A reader, February 17, 2005 - 4:40 pm UTC
I won't be technically what I am without this site. And I won't mind paying even if Tom were to charge me by page. It is that valuable. But it is all free.
Tom spends most of his week-ends answering our questions. Let's try to be decent and polite even if not thankful.
February 17, 2005 - 8:17 pm UTC
It isn't about being polite to me really here. I do this mostly because I like to. Be rude if you want, but expect me to make fun of "im speak" -- why? because it is something to be made fun of.
To me, it is much more fundamental. It is about communication. It is about taking enough time to talk to the person on the other end of the wire.
We are in a technical profession (last time I looked). Preciseness is key -- I have enough challenges trying to figure out what the question is sometimes, let alone translating "im speak" into real words. The ability to phrase a complete question so that someone who hasn't actually been consumed by the problem for the last day or so can understand it is hard enough -- but when you add in the "static" (i don't know how else to describe it, it is like static on the radio, the sound drops out and I miss something) of "im speak" to the mix makes it unbearable.
To that end, I've modified my review banner -- I give fair warning, you use "im speak" and I will make fun of you and call it out. period.
Call it a crusade if you will, I'm still looking for that "u" person -- no clue what gender they are, where they live, what they do -- but people keep asking for them over and over.
Related to MAX salary in each dept
Ravi Kumar, February 18, 2005 - 3:27 am UTC
Hi.. Tom
I am a big fan of you :)
I was reading the followup of one of question related to get the maximum salary in each department.
I can see that every query you suggested to get this includes partition by clause, What about this one..
select ename,deptno,sal
from emp where (deptno,sal) in(select deptno,max(sal)
from emp
group by deptno)
and will it be slower than using partition by?
Thanks & Regards
Ravi
February 18, 2005 - 8:09 am UTC
on large sets, you will basically be scanning the table twice.
analytics -- once.
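Tom's point -- the analytic form computes each row's departmental max in the same pass -- can be sketched like this (SQLite as a stand-in, which supports window functions in builds 3.25 and later; the emp data below is invented, scott/tiger-style):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (ename text, deptno integer, sal integer)")
conn.executemany("insert into emp values (?, ?, ?)", [
    ("KING", 10, 5000), ("CLARK", 10, 2450),
    ("SCOTT", 20, 3000), ("FORD", 20, 3000), ("SMITH", 20, 800),
    ("BLAKE", 30, 2850), ("ALLEN", 30, 1600),
])

# Analytic version: one scan carries each row's departmental max alongside it,
# instead of scanning once for the IN-subquery and again for the outer query.
rows = conn.execute("""
    select ename, deptno, sal
    from (select ename, deptno, sal,
                 max(sal) over (partition by deptno) max_sal
          from emp)
    where sal = max_sal
    order by deptno, ename
""").fetchall()
print(rows)
```

Note it still returns ties (both SCOTT and FORD in dept 20), just like the IN-subquery version.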
Thanks Tom
A reader, February 18, 2005 - 8:52 am UTC
I learnt a lot from this site. This is by far the best Oracle forum I have ever been to. I learn something new every day. We should really appreciate the fact that Tom is taking time to answer so many questions (for goodness' sake, he is a Vice President). I hope you continue your service to the Oracle world
Tom..You Rock
A reader, February 22, 2005 - 9:53 am UTC
Hi Tom,
I read the follow-up and was very disappointed by some people. But please just ignore them and continue helping the Oracle world. I have learned a lot from your great site.
I am using one of your magic SQL queries for parsing one column and splitting it into different parts. The query works fine. The problem I have is that I have to do it 1000 records at a time. If the query selects more than that, e.g. 10,000 records to process, the session stays active forever and I have to kill it and start again with the limit of 1000. Could you please tell me what can be wrong?
SQL> desc nitstoryn
Name Null? Type
----------------------------------------- -------- ----------------
SRL NOT NULL NUMBER(12)
STORYFILE_NME NOT NULL VARCHAR2(1000)
AUTHOR VARCHAR2(42)
STORY_ORIGIN VARCHAR2(10)
CATEGORY VARCHAR2(30)
HEADLINE VARCHAR2(200)
SUBHEADLINE VARCHAR2(200)
KEYWORDS VARCHAR2(1000)
HISTORY VARCHAR2(200)
DATELINE VARCHAR2(1000)
HIGHLIGHT VARCHAR2(4000)
CREATION_TIME VARCHAR2(100)
LAST_MODIFIED_TIME VARCHAR2(100)
TIMEZONE VARCHAR2(25)
FOOTNOTES CLOB
BODY CLOB
CREATION_TIME_DATE DATE
SQL> desc nitielinkn
Name Null? Type
----------------------------------------- -------- ---------------------
SRL NOT NULL NUMBER(12)
NITSTORY_SRL NUMBER(12)
LINK_TYP VARCHAR2(1)
LINK VARCHAR2(4000)
VALID_FLG VARCHAR2(1)
LAST_VALIDATED_DATE DATE
INSERT INTO nitielinkn(srl,nitstory_srl,link_typ,link)
SELECT nitsielink.nextval, srl story_srl, 'E',
SUBSTR(to_char(body),INSTR(lower(to_char(body)),'href',1,rr.r),
INSTR(body,'"',INSTR(body,'href',1,rr.r),2)-INSTR(body,'href',1,rr.r)+1) link
FROM
(SELECT srl, ','||body||',' body, (LENGTH(body)-LENGTH(replace(body,'href','')))/4 cnt
FROM nitstoryn WHERE INSTR(body,'href',1,1) > 0 ) nitstory,
(SELECT rownum r
FROM all_objects
WHERE rownum <= 100) rr
WHERE rr.r <= nitstory.cnt
and srl > 11000
and srl <= 12000 "This is the part that I have to use to limit it to 1000"
and INSTR(lower(to_char(body)),'href',1,1) > 0
/
Thanks,
- Arash
February 22, 2005 - 10:01 am UTC
trace it with 1000, 2000, 5000, 10000 and see what is happening.
I don't see any reason why it would "stop" -- get slower maybe, as it pages to temp, sure.
OK
Catherine, February 22, 2005 - 12:01 pm UTC
Hi Tom,
Any other better way to put this query?
SQL> select deptno,sal,count(*) from emp
2 group by deptno,sal
3 having sal = any(select max(sal) from emp group by deptno)
4 /
DEPTNO SAL COUNT(*)
---------- ---------- ----------
10 5000 1
20 3000 2
30 2850 1
Please do reply.
Bye!
February 22, 2005 - 12:41 pm UTC
there are many ways to write that query.
drop the "any" and it'll work -- there is a "better" way. done.
search around for "top-n"
Sql Query
Ratnam, March 02, 2005 - 9:52 am UTC
Hi Tom,
Please help me write a SQL query to list all records that were inserted between 8am today and 8am the previous day.
example
Between 8am 2-Mar-2005 and 8am 1-Mar-2005.
Thanks a lot in advance.
-Ratnam
March 02, 2005 - 10:12 am UTC
trunc(sysdate)+8/24 is 8am today.
trunc(sysdate-1)+8/24 is 8am yesterday
between those two are the records of interest.
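The date arithmetic above can be mirrored outside the database. As a rough illustration only (a hypothetical Python analogue of trunc(sysdate-1)+8/24 and trunc(sysdate)+8/24, not anything Oracle ships):

```python
from datetime import datetime, time, timedelta

def eight_am_window(now):
    """Return (start, end) for the window from 8am yesterday to 8am today.

    datetime.combine(now.date(), time(8, 0)) plays the role of
    trunc(sysdate)+8/24: truncate to midnight, then add 8 hours.
    """
    today_8am = datetime.combine(now.date(), time(8, 0))
    yesterday_8am = today_8am - timedelta(days=1)
    return yesterday_8am, today_8am

start, end = eight_am_window(datetime(2005, 3, 2, 10, 12))
print(start, end)  # 2005-03-01 08:00:00 2005-03-02 08:00:00
```

Records of interest are those whose insert timestamp falls between the two returned values, just as in the BETWEEN predicate.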
What commands are in @trace file ?
Parag Jayant Patankar, March 02, 2005 - 10:18 am UTC
Hi tom,
While replying to one of the questions you have used
ops$tkyte@ORA9IR2> @trace ( trace.sql ), Can you tell us what is the content of trace.sql ?
regards & thanks
pjp
March 02, 2005 - 10:57 am UTC
alter session set events '10046 trace name context forever, level 12';
Nice
Raju, March 02, 2005 - 10:32 am UTC
Hello Tom,
I use Oracle 9i Release 2. I would like to insert a single space
between each character of the string "Hello World"
SQL> select 'Hello World' from dual
2 /
'HELLOWORLD
-----------
Hello World
I would like to get the output as
H e l l o W o r l d
Is it possible in Oracle 8i or 9i?
March 02, 2005 - 11:07 am UTC
it would be hard using the builtins.
in 10g, easy
1* select regexp_replace( 'hello world', '(.)', '\1 ' ) from dual
ops$tkyte@ORA10G> /
REGEXP_REPLACE('HELLOW
----------------------
h e l l o w o r l d
in 8i/9i, either have the client do it or write a small plsql function to do it.
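The regexp_replace trick shown for 10g has the same shape in most regex engines: capture each character and emit it followed by a space. A small sketch of the same idea in Python (an analogue for illustration, not the PL/SQL function Tom suggests writing):

```python
import re

def space_out(s):
    # Capture every single character '(.)' and replace it with
    # itself plus a trailing space -- the same pattern as
    # regexp_replace(s, '(.)', '\1 ') in Oracle 10g.
    return re.sub(r'(.)', r'\1 ', s)

# Note: like the Oracle version, this leaves a trailing space and
# turns the original space into a wider gap; .strip() trims the end.
print(space_out('hello world').strip())
```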
Oracle consultant
Dawar, March 03, 2005 - 3:01 pm UTC
Tom,
I need to write a query in the following form:
select * from table where table.field IN f(:bind-variable)
or perhaps
select * from table where table.field IN (select field from f(:bind-variable))
where f(:bind-variable) is a function or stored procedure that returns a rowset, ref cursor, table or whatever.
Regards,
Dawar
March 03, 2005 - 5:37 pm UTC
search for str2tbl on this site
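str2tbl is a PL/SQL technique Tom documents elsewhere on the site for turning a delimited string into rows you can use with a bind variable in an IN-list. Purely as a rough illustration of what it does (a hypothetical Python analogue, not the actual function):

```python
def str2tbl(s, sep=','):
    # Split 'va,ca,pa' into ['va', 'ca', 'pa'], trimming whitespace
    # and dropping empty pieces -- the row set the real str2tbl
    # pipelined function would return to the query.
    return [piece.strip() for piece in s.split(sep) if piece.strip()]

print(str2tbl('va, ca, pa'))  # ['va', 'ca', 'pa']
```

The point of the real technique is that the whole list travels in one bind variable, so the query text stays constant and shareable.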
'h e l l o w o r l d'
ant, March 07, 2005 - 12:37 pm UTC
if you're on 9i, pull the string apart then use sys_connect_by_path to put it together again:
SQL> select trim(replace(sys_connect_by_path(c,'_ '),'_')) msg
2 from (
3 select iter.pos, msg, substr(d.msg,iter.pos,1) as c
4 from (select 'hello world' as msg from dual) d,
5 (select rownum pos from emp) iter
6 where iter.pos <= length(d.msg)
7 )
8 where level = length(msg)
9 start with pos=1
10 connect by prior pos=pos-1;
MSG
----------------------
h e l l o w o r l d
still might be better off writing some plsql, but this works in a pinch. amazing how much nicer regex is for doing things like this - 10 lines of sql code compared to 1 using regex.
SQL Query
RS, March 18, 2005 - 3:28 am UTC
Hi Tom,
I have a table like this :
s_name varchar2,
s_type varchar2,
s_num1 number,
s_num2 number
Values :
'A','X',100,200
'A','Y',,100
'B','X',50,
'B','Z',,75
Output i need:
X1 X2 Y1 Y2 Z1 Z2
A 100 200 0 100 0 0
B 50 0 0 0 0 75
Can you please help by suggesting a simpler way
to achieve this?
Thanks and regards,
RS
March 18, 2005 - 7:03 am UTC
<quote from the page you had to use to put this here>
If your followup requires a response that might include a query, you had better supply very very simple create tables and insert statements. I cannot create a table and populate it for each and every question. The SMALLEST create table possible (no tablespaces, no schema names, just like I do in my examples for you)
</quote>
but -- unless you KNOW beforehand the max number of X's, Y's and Z's (for you see, a sql query has a fixed number of columns), you'll need to read the chapter on dynamic SQL in Expert One on One Oracle where I demonstrate how to run a query to build a query with the right number of columns.
A reader, March 23, 2005 - 12:18 am UTC
Column Format
A reader, April 14, 2005 - 11:51 am UTC
Tom,
I know we can do this: COLUMN column_name FORMAT model
Is there a way to do that for all columns in the select statement?
Thanks.
April 14, 2005 - 11:56 am UTC
only for numbers
set numformat ....
but otherwise, you need to do it column by column pretty much.
A reader, April 14, 2005 - 12:00 pm UTC
Thanks. I was hoping that you had some tricks.
A reader, April 15, 2005 - 5:35 pm UTC
Hi Tom,
I have table like
create table t ( id number,
col1 varchar2(1),
col2 varchar2(1),
col3 varchar2(1)
);
insert into t values(1, 'a', 'b', null);
insert into t values(2, null, 'b', 'c');
insert into t values(3, 'a', null, 'c');
insert into t values(4, 'd', 'e', null);
insert into t values(5, 'd', 'f', null);
What I want to do is if any columns ( col1, 2, 3 ) have same value, then the records will be in one group, so records 1,2,3 will be in one group, records 4, 5 will be in other group.
How to do this with sql?
Thanks
April 15, 2005 - 6:21 pm UTC
select min(rowid) over ( partition by col1, col2, col3 ) grp,
id, col1, col2, col3
from t
/
grp is your "grouping" column.
A reader, April 15, 2005 - 9:02 pm UTC
Hi Tom,
Maybe I didn't describe it clearly, actually, I want the result looks like
ID C C C grp
---------- - - -
1 a b 1
2 b c 1
3 a c 1
4 d e 2
5 d f 2
Because records 1 and 3 have the same value of col1 ('a'), 1 and 3 are in the same group; records 1 and 2 have the same value of col2 ('b'), so 1 and 2 are in the same group; records 2 and 3 have the same value of col3 ('c'), so 2 and 3 are in the same group. Therefore 1, 2 and 3 should all be in the same group. The same applies to records 4 and 5.
grp number doesn't need to be continuous, gap between grp number is OK.
How can I do this?
Thanks
April 16, 2005 - 8:46 am UTC
and what if record 1 was
a b f
what "group" did it belong to then. I don't see this happening in general. What about
How to sort...
A reader, April 16, 2005 - 6:25 am UTC
Hi Tom,
I was pretty impressed by the way you answer SQL queries. I would appreciate it if you could also look into this.
I have to write a SQL query that retrieves all the columns of the table TEST1 in the given sort order.
TABLE : TEST1
The raw data is as follows
SQL> Select dispnum, airline, org, des, mtype from TEST1;
DISPNUM AIRLINE ORG DES MTYPE
------- ------- --- --- -----
113 CX HKG DXB CP
13 AH ALG JED EMS
4146 CX CAN DXB EMS
50 BI BWN DXB LC
8 AZ MXP AKL CP
74 BG DAC DXB LC
1 LH FRA DXB LC
BAH EK BAH KWI CP
SN DXB LON CP
SN SV RUH EWR CP
7193 DXB CMN EMS
Sorting is to be done on the columns (airline, dispnum, org, des, mtype) in ascending order. Ensure that rows where (dispnum) starts with a character come first in the list.
Your assistance will be highly appreciated.
Rgrds
April 16, 2005 - 8:58 am UTC
no create tables...
no insert into's....
no way for me to test....
but sounds like you might want to order by
order by airline,
case when upper(substr(dispnum,1,1)) between 'A' and 'Z'
then 1
else 2
end,
dispnum, org, des, mtype
A reader, April 16, 2005 - 8:56 am UTC
If record 1 was
a b f
it should belong to group 1; the grouping criterion is that if any column (col1, 2, 3) has the same value, then the records belong to the same group.
April 16, 2005 - 9:07 am UTC
and why not group 2 then? A makes it belong to group 1, F to group 2.
I don't see this happening in SQL efficiently (or even inefficiently maybe..)
A reader, April 16, 2005 - 9:22 am UTC
F doesn't put the first record into group 2, because col3 in group 2 is null.
April 16, 2005 - 9:36 am UTC
<quote>
What I want to do is if any columns ( col1, 2, 3 ) have same value, then the
records will be in one group, so records 1,2,3 will be in one group, records 4,
5 will be in other group.
</quote>
so, column 1 in record 2 was null, why does it make it into group 1 (first mention of null)
I don't see this happening in sql, sorry.....
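Tom's skepticism is well founded: what the reader is asking for is transitive closure -- rows form connected components, linked whenever a column value is shared -- which a single SQL statement handles poorly. As a hypothetical sketch only (assuming, as in the sample data, that a "match" means the same value in the same column position), a union-find pass in procedural code does it directly:

```python
# Sample rows from the question: id -> (col1, col2, col3)
rows = {1: ('a', 'b', None), 2: (None, 'b', 'c'), 3: ('a', None, 'c'),
        4: ('d', 'e', None), 5: ('d', 'f', None)}

parent = {rid: rid for rid in rows}

def find(x):
    # Locate the representative of x's group, with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Link any two rows that share a non-null value in the same column.
seen = {}  # (column_index, value) -> first row id that had it
for rid, cols in rows.items():
    for i, v in enumerate(cols):
        if v is None:
            continue
        if (i, v) in seen:
            union(rid, seen[(i, v)])
        else:
            seen[(i, v)] = rid

groups = {rid: find(rid) for rid in rows}
print(groups)  # rows 1,2,3 share one root; rows 4,5 share another
```

In the database this would typically be a PL/SQL routine or, in later releases, a recursive/CONNECT BY approach; plain aggregation cannot express the transitive chaining.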
A reader
A, April 17, 2005 - 3:02 am UTC
I have a table called a which has field like (rid,bname,city,rdate,share)
I have another table called b which has (bname,city,rdate,amount)
I want to find out the average share (of the robbers participating in robberies) during the most recent year with any robberies, and the year in which the total amount robbed was the highest.
April 17, 2005 - 8:41 am UTC
cool, too bad I see no fields named "robbers" or how A relates to B or anything.
no creates
no inserts
insufficient detail to say anything.
i want to ask a question.
md. oli ullah., April 17, 2005 - 9:27 am UTC
i have installed oracle database server 9i, but when i open sql it requires a user name and password. in the user name field i wrote SYSTEM/SCOTT and in the password field i wrote MANAGER/TIGER. neither of them works; both produce a tns protocol error message.
how can i get into sql? pls send a user name and password to my email.
with thanks
md.oli ullah.
April 17, 2005 - 9:54 am UTC
when you installed the product, you setup the accounts and passwords (it was part of the installation, you had to supply a sys and system password).
as the Oracle software owner, log into the OS and you should be able to:
connect / as sysdba
That is the superuser; you can use this account to set scott's password, unlock that account, or create a new account (which is the suggested thing to do). After you create that account, log out (never use sys or system to test or play with), and log back in as your newly created account.
A reader
A, April 17, 2005 - 12:31 pm UTC
Here you go
create table robbery(bankname varchar(100) not null,city varchar(50) not null,rdate date not null,amount integer not null,primary key (bankname,city,rdate));
This table stores information about robberies of banks that the gang has already performed,including how much was stolen.
create table accomplices(robberid integer ,bankname varchar(100) not null,city varchar(50) not null,rdate date not null,share integer not null,foreign key(bankname,city,rdate) references robbery(bankname,city,rdate) on update cascade,primary key(robberid,bankname,city,rdate));
This table stores information about which gang members participated in each bank robbery and what share of the money they got.
select * from robbery;
bankname city rdate amount
N Bank | xxx | 1999-01-08 | 32
Lk Bank | yyy | 1995-02-28 | 19
Lk Bank | xxx | 1995-03-30 | 21
select * from accomplices;
robberid bankname city rdate share
1 | LK Bank | xxx | 1995-03-30 | 42
1 | LK Bank | yyy | 1995-02-28 | 49
1 | N Bank | xxx | 1999-01-08 | 64
And the question remains same
write a sql query that shows the average robber's share (of the robbers participating in robberies) during the most recent year with any robberies, and the year in which the total amount robbed was the highest. In order to write this query I don't know if both of these tables have to be used.
Hope this time it's clear.
Cheers
April 17, 2005 - 3:26 pm UTC
"on update cascade"
it is clear this isn't Oracle ;)
so, I hesitate to give you sql, since you are not using this database and hence it might not work.
no create tables
no insert intos
no looking by me....
(the foreign keys don't even match up)
but it sounds like simple SQL homework.
Avg share group by year, order by year desc, take the first row...
select * from (select avg(share), whatever_function_YOUR_db_uses(rdate) year
from robbery
group by that_function_whatever_it_is
order by 2 desc )
where rownum = 1 <<=== you'll find that probably doesn't work for you :)
(in fact, these sound exactly like the quiz problems I give to anyone that has SQL on their resume..)
use the same technique on the other table
A reader, April 22, 2005 - 4:36 pm UTC
Hi Tom,
I have a table T
create table test
(
id number,
t number,
t1 number,
t2 number
);
This table has values like the following :
insert into test values (1,1,0,0);
insert into test values (2,0,1,0);
insert into test values (3,0,1,0);
insert into test values (4,0,0,1);
commit;
I want to get a result as follows:
t t1 t2
1 2 1
That is, I want to see the count of all 1s for each of the columns t, t1 and t2 as a single row.
Thanks.
April 22, 2005 - 4:42 pm UTC
select count(decode(t,1,1)), count(decode(t1,1,1)), count(decode(t2,1,1)) from t;
small, tiny correction
Menon, April 22, 2005 - 4:56 pm UTC
"select count(decode(t,1,1)), count(decode(t1,1,1)), count(decode(t2,1,1)) from
t; "
should be "from test" instead of "from t" :)
April 22, 2005 - 5:03 pm UTC
I read "I have a table T"
:)
Thank You Tom just what I wanted.
A reader, April 22, 2005 - 5:02 pm UTC
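The count(decode(...)) idiom above counts rows matching a condition per column, collapsing the table to a single summary row. A minimal Python analogue of the same conditional-count logic, using the question's sample data (an illustration only):

```python
# Sample rows from the question: (id, t, t1, t2)
rows = [(1, 1, 0, 0), (2, 0, 1, 0), (3, 0, 1, 0), (4, 0, 0, 1)]

# count(decode(col, 1, 1)) counts only the rows where col = 1,
# because decode yields NULL (here: nothing) otherwise.
t  = sum(1 for r in rows if r[1] == 1)
t1 = sum(1 for r in rows if r[2] == 1)
t2 = sum(1 for r in rows if r[3] == 1)
print(t, t1, t2)  # 1 2 1
```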
query tuning
sreenivasa rao, April 23, 2005 - 2:23 am UTC
Dear Tom,
i have a OLTP environment database with following setup
hash_join_enabled=true
optimizer_feature_enable=9.2.0
optimizer_index_caching=90
optimizer_index_cost_adj=25
optimizer_max_permutations=2000
query_rewrite_enabled=TRUE
query_rewrite_integrity=TRUSTED
and all the tables have nearly up-to-date statistics, along with histograms on all of their columns.
SELECT
DISTINCT A.COMP_APPL_ID AS "App No",
A.APPDATE AS "App Date",
B.CUSTOMERNAME AS "Customer Name",
D.DESCRIPTION AS "Product",
D.SUBTYPEDESCRIPTION AS "Sub-Product",
A.FILELOGINNO AS "Physical File No",
C.DESCRIPTION AS "Location",
E.BPNAME AS "Sourced By"
FROM
LOT_CUSTOMER_T B,
LOT_GLOBALLOAN_T A,
COMPANY_GENERIC C,
PRODUCT D,
BP E,
LOT_WORKFLOWSTAGE_DTL F,
WORKFLOWSTAGE ws,
COMPANY_EMPLOYEE_ROLE er,
COMP_EMP_AREA ea
WHERE
A.COMP_APPL_ID = B.COMP_APPL_ID(+) AND
A.SRC_BPID=E.BPID AND
C.GENERICID = A.CG_LOCATIONCODE AND
B.GUAR_COAP_FLAG(+)='P' AND
D.PRODUCTCODE = A.PRODUCTCODE AND
NVL(A.PRODUCTSUBTYPE,'NULL') = NVL(D.PRODSUBTYPECODE,'NULL') AND
A.COMPANYID=1000 AND
A.COMP_APPL_ID = F.comp_appl_id AND
ws.workflowstageid = f.workflowstageid AND
ws.rolecode = er.cg_rolecode AND
er.productcode = a.productcode AND
er.employeecode = ea.employeecode AND
ea.cg_locationcode = A.CG_LOCATIONCODE and F.STAGESTATUS='P' AND F.STAGE='STYMREPT' order by 1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 2.46 2.59 0 0 0 0
Execute 2 0.01 0.00 0 0 0 0
Fetch 1265 94.02 324.06 2 7393514 0 12649
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1269 96.49 326.65 2 7393514 0 12649
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (SCB)
Rows Row Source Operation
------- ---------------------------------------------------
20 SORT UNIQUE
2901425 NESTED LOOPS
3631139 HASH JOIN
4866 INDEX FAST FULL SCAN CM_EMP_AR_INDX3 (object id 24087)
12657 NESTED LOOPS OUTER
12628 HASH JOIN
12628 HASH JOIN
12628 HASH JOIN
465 TABLE ACCESS FULL WORKFLOWSTAGE
12628 HASH JOIN
12628 TABLE ACCESS BY INDEX ROWID LOT_WORKFLOWSTAGE_DTL
12628 INDEX RANGE SCAN SREE_LTWORKFLOWDTL_IND6 (object id 31139)
264057 HASH JOIN
42 TABLE ACCESS FULL PRODUCT
264057 TABLE ACCESS FULL LOT_GLOBALLOAN_T
4944 TABLE ACCESS FULL BP
5718 TABLE ACCESS FULL COMPANY_GENERIC
12594 TABLE ACCESS BY INDEX ROWID LOT_CUSTOMER_T
18455 INDEX RANGE SCAN LOT_CUSTOMER_T_INDX1 (object id 24473)
2901425 INDEX RANGE SCAN CM_EMP_RL_INDX7 (object id 29462)
Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
20 SORT (UNIQUE)
2901425 NESTED LOOPS
3631139 HASH JOIN
4866 INDEX GOAL: ANALYZED (FAST FULL SCAN) OF
'CM_EMP_AR_INDX3' (NON-UNIQUE)
12657 NESTED LOOPS (OUTER)
12628 HASH JOIN
12628 HASH JOIN
12628 HASH JOIN
465 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'WORKFLOWSTAGE'
12628 HASH JOIN
12628 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'LOT_WORKFLOWSTAGE_DTL'
12628 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'SREE_LTWORKFLOWDTL_IND6' (NON-UNIQUE)
264057 HASH JOIN
42 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PRODUCT'
264057 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'LOT_GLOBALLOAN_T'
4944 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'BP'
5718 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'COMPANY_GENERIC'
12594 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'LOT_CUSTOMER_T'
18455 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'LOT_CUSTOMER_T_INDX1' (NON-UNIQUE)
2901425 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'CM_EMP_RL_INDX7'
(NON-UNIQUE)
could you give me suggested indexes?
i don't mind even if the optimizer does not use those indexes, but i want to know exactly where the indexes need to be created.
thanks in advance.
April 23, 2005 - 9:14 am UTC
set autotrace traceonly explain
and run that query in sqlplus and lets see the autotrace "guess" at the cardinalities.
(and if you could set statistics_level = 'all' in your session and rerun the tkprof that'd be really great, we'd see:
TABLE ACCESS FULL OBJ#(222) (cr=3 r=0 w=0 time=43 us)
in the tkprof -- with the logical, physical io by step)
query tuning
sreenivasa rao, April 25, 2005 - 1:28 am UTC
Thanks for your quick response.
see the explain plan and tkprof (with statistics_level='all')
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3589 Card=33550 Byte
s=7381000)
1 0 SORT (UNIQUE) (Cost=2498 Card=33550 Bytes=7381000)
2 1 NESTED LOOPS (Cost=1407 Card=33550 Bytes=7381000)
3 2 HASH JOIN (Cost=1406 Card=362966 Bytes=74045064)
4 3 INDEX (FAST FULL SCAN) OF 'CM_EMP_AR_INDX3' (NON-UNI
QUE) (Cost=3 Card=4881 Bytes=53691)
5 3 NESTED LOOPS (OUTER) (Cost=1401 Card=1213 Bytes=2341
09)
6 5 HASH JOIN (Cost=1097 Card=1213 Bytes=201358)
7 6 HASH JOIN (Cost=1086 Card=1213 Bytes=167394)
8 7 HASH JOIN (Cost=1072 Card=1213 Bytes=133430)
9 8 TABLE ACCESS (FULL) OF 'WORKFLOWSTAGE' (Cost
=3 Card=465 Bytes=5580)
10 8 HASH JOIN (Cost=1068 Card=1213 Bytes=118874)
11 10 TABLE ACCESS (BY INDEX ROWID) OF 'LOT_WORK
FLOWSTAGE_DTL' (Cost=390 Card=10157 Bytes=213297)
12 11 INDEX (RANGE SCAN) OF 'SREE_LTWORKFLOWDT
L_IND6' (NON-UNIQUE) (Cost=39 Card=1)
13 10 HASH JOIN (Cost=665 Card=32163 Bytes=24765
51)
14 13 TABLE ACCESS (FULL) OF 'PRODUCT' (Cost=2
Card=42 Bytes=1554)
15 13 TABLE ACCESS (FULL) OF 'LOT_GLOBALLOAN_T
' (Cost=661 Card=263946 Bytes=10557840)
16 7 TABLE ACCESS (FULL) OF 'BP' (Cost=12 Card=4944
Bytes=138432)
17 6 TABLE ACCESS (FULL) OF 'COMPANY_GENERIC' (Cost=9
Card=5727 Bytes=160356)
18 5 TABLE ACCESS (BY INDEX ROWID) OF 'LOT_CUSTOMER_T'
(Cost=2 Card=1 Bytes=27)
19 18 INDEX (RANGE SCAN) OF 'LOT_CUSTOMER_T_INDX1' (NO
N-UNIQUE)
20 2 INDEX (RANGE SCAN) OF 'CM_EMP_RL_INDX7' (NON-UNIQUE)
SQL> alter session set statistics_level='all';
Session altered.
SQL> alter session set sql_trace=true;
Session altered.
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 2.58 2.77 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 900 65.66 107.06 5887 3977886 0 13482
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 902 68.24 109.84 5887 3977886 0 13482
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (SCB)
Rows Row Source Operation
------- ---------------------------------------------------
13482 SORT UNIQUE (cr=3977886 r=5887 w=0 time=107007846 us)
3097218 NESTED LOOPS (cr=3977886 r=5887 w=0 time=93139700 us)
3906063 HASH JOIN (cr=51085 r=5887 w=0 time=18144270 us)
4882 INDEX FAST FULL SCAN CM_EMP_AR_INDX3 (cr=24 r=18 w=0 time=23439 us)(object id 24087)
13507 NESTED LOOPS OUTER (cr=51061 r=5869 w=0 time=11004008 us)
13478 HASH JOIN (cr=9527 r=5678 w=0 time=10308218 us)
13478 HASH JOIN (cr=9451 r=5676 w=0 time=10104237 us)
13478 HASH JOIN (cr=9344 r=5676 w=0 time=10001398 us)
465 TABLE ACCESS FULL WORKFLOWSTAGE (cr=16 r=0 w=0 time=918 us)
13478 HASH JOIN (cr=9328 r=5676 w=0 time=9945039 us)
13478 TABLE ACCESS BY INDEX ROWID LOT_WORKFLOWSTAGE_DTL (cr=2390 r=0 w=0 time=120576 us)
13478 INDEX RANGE SCAN SREE_LTWORKFLOWDTL_IND6 (cr=145 r=0 w=0 time=23638 us)(object id 31139)
264973 HASH JOIN (cr=6938 r=5676 w=0 time=8599523 us)
42 TABLE ACCESS FULL PRODUCT (cr=7 r=0 w=0 time=245 us)
264973 TABLE ACCESS FULL LOT_GLOBALLOAN_T (cr=6931 r=5676 w=0 time=7296217 us)
4944 TABLE ACCESS FULL BP (cr=107 r=0 w=0 time=6384 us)
5727 TABLE ACCESS FULL COMPANY_GENERIC (cr=76 r=2 w=0 time=86230 us)
13442 TABLE ACCESS BY INDEX ROWID LOT_CUSTOMER_T (cr=41534 r=191 w=0 time=585595 us)
19495 INDEX RANGE SCAN LOT_CUSTOMER_T_INDX1 (cr=27019 r=0 w=0 time=322852 us)(object id 24473)
3097218 INDEX RANGE SCAN CM_EMP_RL_INDX7 (cr=3926801 r=0 w=0 time=52575212 us)(object id 29462)
Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
13482 SORT (UNIQUE)
3097218 NESTED LOOPS
3906063 HASH JOIN
4882 INDEX GOAL: ANALYZED (FAST FULL SCAN) OF
'CM_EMP_AR_INDX3' (NON-UNIQUE)
13507 NESTED LOOPS (OUTER)
13478 HASH JOIN
13478 HASH JOIN
13478 HASH JOIN
465 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'WORKFLOWSTAGE'
13478 HASH JOIN
13478 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'LOT_WORKFLOWSTAGE_DTL'
13478 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'SREE_LTWORKFLOWDTL_IND6' (NON-UNIQUE)
264973 HASH JOIN
42 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PRODUCT'
264973 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'LOT_GLOBALLOAN_T'
4944 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'BP'
5727 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'COMPANY_GENERIC'
13442 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'LOT_CUSTOMER_T'
19495 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'LOT_CUSTOMER_T_INDX1' (NON-UNIQUE)
3097218 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'CM_EMP_RL_INDX7'
(NON-UNIQUE)
********************************************************************************
alter session set sql_trace=false
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (SCB)
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 2.58 2.77 0 0 0 0
Execute 3 0.00 0.01 0 0 0 0
Fetch 900 65.66 107.06 5887 3977886 0 13482
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 905 68.24 109.85 5887 3977886 0 13482
Misses in library cache during parse: 2
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 11 0.00 0.00 0 0 0 0
Execute 11 0.00 0.00 0 0 0 0
Fetch 11 0.00 0.00 0 24 0 11
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 33 0.00 0.00 0 24 0 11
Misses in library cache during parse: 1
3 user SQL statements in session.
11 internal SQL statements in session.
14 SQL statements in session.
1 statement EXPLAINed in this session.
********************************************************************************
Trace file: rfes_ora_11054.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
3 user SQL statements in trace file.
11 internal SQL statements in trace file.
14 SQL statements in trace file.
4 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
SCB.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
1078 lines in trace file.
Best regards
sreenivas
April 25, 2005 - 7:11 am UTC
It was doing fine until here:
1 0 SORT (UNIQUE) (Cost=2498 Card=33550 Bytes=7381000)
2 1 NESTED LOOPS (Cost=1407 Card=33550 Bytes=7381000)
3 2 HASH JOIN (Cost=1406 Card=362966 Bytes=74045064)
4 3 INDEX (FAST FULL SCAN) OF 'CM_EMP_AR_INDX3' (NON-UNIQUE) (Cost=3 Card=4881 Bytes=53691)
5 3 NESTED LOOPS (OUTER) (Cost=1401 Card=1213 Bytes=234109)
Rows Row Source Operation
------- ---------------------------------------------------
13482 SORT UNIQUE (cr=3977886 r=5887 w=0 time=107007846 us)
3097218 NESTED LOOPS (cr=3977886 r=5887 w=0 time=93139700 us)
3906063 HASH JOIN (cr=51085 r=5887 w=0 time=18144270 us)
4882 INDEX FAST FULL SCAN CM_EMP_AR_INDX3 (cr=24 r=18 w=0 time=23439
us)(object id 24087)
13507 NESTED LOOPS OUTER (cr=51061 r=5869 w=0 time=11004008 us)
the time jumped -- massive nested loops join of 3+ million rows. it was expecting about 33,000.
What is cm_emp_ar_indx3 on, what are we joining at that step.
QUERY TUNING
sreenivasa rao, April 25, 2005 - 9:52 am UTC
The index CM_EMP_AR_INDX3 contains below columns
EMPLOYEECODE
PRODUCTCODE
CG_LOCATIONCODE
AND tablename is COMP_EMP_AREA "ea"
and joins are as follows
er.employeecode = ea.employeecode AND
ea.cg_locationcode = A.CG_LOCATIONCODE
Thanks in advance
regards
sreenivas
April 25, 2005 - 10:00 am UTC
and how about CM_EMP_RL_INDX7
(is cg_locationcode skewed? does it look like that is the part of the join being processed there with that fast full index scan, the one on cg_locationcode, all I can see are index and some table names -- hard for me to tell)
Query
Karthick, April 26, 2005 - 4:04 am UTC
Hi tom
i was trying to write a query that will return all the tables in a schema, the total number of columns in each table, the number of indexes on each table, the constraints if any, and the total number of records in each table. but i believe it is not possible with a normal sql statement.
the output is some thing like....
TABLE_NAME COL INDEX CONSTRAIN ROW
---------------------------------------------------
TABLE1 10 0 0 250
TABLE2 15 2 1 10000
i was planning to have this as a base query, with drill-down queries which will show the column detail,
index detail, constraint detail, etc.
can you help me with this.
thank you.
April 26, 2005 - 7:37 am UTC
select table_name, num_rows,
(select count(*) from user_indexes where table_name=x.table_name) idx,
(select count(*) from user_tab_columns where table_name=x.table_name) col,
(select count(*) from user_constraints where table_name=x.table_name) con
from user_tables x;
I would use num_rows as an approximation -- it is filled in after an analyze. counting the rows in a table would be prohibitively expensive.
Table Detail
P.Karthick, April 27, 2005 - 8:29 am UTC
Thank you for your answer.
can you please tell me where i could get information about all the tables that oracle uses to manage the database.
thank you.
P.Karthick.
Whats the logic behind....?
VIKAS SANGAR, April 27, 2005 - 12:03 pm UTC
Dear Mr. Kyte,
I would like to know what sorting & searching techniques (quick sort / bubble sort / binary search etc) Oracle applies in the background while sorting or searching databases?
Secondly, whenever we query a table/view etc (select... from tab1), we always see only the names of the columns along with the records present therein. But why don't we see the name of the table itself by default? What's the logic behind this? Is there any way the name of the table/view can also be displayed automatically when queried (other than using Reports)?
April 27, 2005 - 12:24 pm UTC
the details of the sort are not documented -- however, it is definitely not a bubble sort :)
it is a complex thing, it must be able to sort many gigabytes of data on a machine with less than gigabytes of memory.
In a relational database, a query is a table....
but consider:
select * from emp, dept, bonus, project ......
what "table" is that. don't forget unions, minus, intersect, scalar subqueries, etc...
Better way??
A reader, May 13, 2005 - 4:43 pm UTC
Hi Tom,
select id from a
where tt = (select max(tt) from a
where id1 in (select id from b
where res = 10)
and col1 = 'D');
Is there a better way to rewrite this query?
Followup to previous question.....
A reader, May 13, 2005 - 7:54 pm UTC
Hi Tom,
TT is a column of date datatype and it is not unique. Is it possible for you to demonstrate how to rewrite that query?
Thanks.
May 13, 2005 - 8:18 pm UTC
select id from a
where tt = (select max(tt) from a
where id1 in (select id from b
where res = 10)
and col1 = 'D');
actually, now that I note that is not a correlated subquery, that looks fine.
I see no issues with that query as written, given the question it answers -- it looks fine.
Nice
Ramesh, May 14, 2005 - 3:17 am UTC
Hello Tom,
Could you please provide the documentation link
for TKPROF??
A reader, May 19, 2005 - 2:41 pm UTC
Hi Tom,
I need to get results as follows:
I have a table t which has about 9 mil rows
create table t
(
lname varchar2(10),
mname varchar2(10),
fname varchar2(10),
empid number,
id2 varchar2(10)
);
I need to get the following results:
I need a query which will return rows which have same mname, fname, empid, id2 but different lname. What is the easiest way to do this?
Thanks.
May 19, 2005 - 2:49 pm UTC
select *
from ( select a.*,
count(distinct lname) over (partition by mname, fname, empid, id2 ) cnt
from t a )
where cnt > 1;
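The analytic count(distinct lname) over a partition keeps every row while attaching, to each, how many distinct last names its (mname, fname, empid, id2) group contains; the outer filter then keeps only the conflicting groups. A hypothetical Python sketch of that logic on made-up sample names (illustration only):

```python
from collections import defaultdict

# Made-up sample rows: (lname, mname, fname, empid, id2)
rows = [
    ('Smith', 'J', 'Ann', 1, 'X'),
    ('Smyth', 'J', 'Ann', 1, 'X'),   # same key, different lname
    ('Jones', 'K', 'Bob', 2, 'Y'),
]

# Partition: collect the distinct lnames per (mname, fname, empid, id2).
lnames = defaultdict(set)
for lname, mname, fname, empid, id2 in rows:
    lnames[(mname, fname, empid, id2)].add(lname)

# Filter: keep rows whose key has more than one distinct lname,
# mirroring "where cnt > 1" on the analytic count.
result = [r for r in rows if len(lnames[(r[1], r[2], r[3], r[4])]) > 1]
print(result)
```

On 9 million rows the SQL version does this in one pass with a window sort, which is why the analytic form beats a self-join here.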
SQL help for parent child
Parag Jayant Patankar, May 20, 2005 - 6:06 am UTC
Hi Tom,
I need your expert help writing a SQL statement in Oracle 8.1.7 and onwards.
create table c ( c1 number(3), c2 varchar2(10));
create table a ( a1 number(3), a2 number(3), a3 number, a4 number(1))
insert into c values (1, 'parag');
insert into c values (2, 'tom');
insert into c values (3, 'jonathan');
insert into c values (4, 'neville');
insert into c values (5, 'sam');
commit;
insert into a values (1, 1, 100, 1);
insert into a values (1, 2, 100, 1);
insert into a values (1, 3, 400, 1);
insert into a values (2, 1, 500, 3);
insert into a values (2, 2, 600, 1);
insert into a values (2, 3, 7000,1);
insert into a values (3, 1, 500, 3);
insert into a values (5, 1, 300, 3);
insert into a values (5, 1, 400, 3);
insert into a values (5, 1, 700, 3);
I have two tables c and a. C is the client master while A is the accounts master. A -> C is a many-to-one relationship. The two tables are linked by columns c1 and a1. Column a4 in table A is the account modification indicator; if the value of a4 is > 2 then the account is closed.
I want to report all clients who have no accounts or whose ALL accounts are closed.
I want output in following way
3 jonathan ALL ACCOUNTS CLOSED
4 neville NO ACCOUNTS
5 sam ALL ACCOUNTS CLOSED
Can you show me how to write this SQL? (In the actual system I have a huge number of records in the client and accounts masters.)
regards & thanks
pjp
May 20, 2005 - 8:05 am UTC
ops$tkyte@ORA817DEV> select a.*, decode( a.cnt1, 0, 'no accts', 'all closed' )
2 from (
3 select c.c1, c.c2, count(a.a1) cnt1, count(case when a.a4>2 then 1 end) cnt2
4 from c, a
5 where c.c1 = a.a1(+)
6 group by c.c1, c.c2
7 ) a
8 where cnt1=cnt2;
C1 C2 CNT1 CNT2 DECODE(A.C
---------- ---------- ---------- ---------- ----------
3 jonathan 1 1 all closed
4 neville 0 0 no accts
5 sam 3 3 all closed
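The trick in the query above is that a client qualifies when its total account count equals its closed-account count (a4 > 2), which covers the zero-accounts case for free since 0 = 0. A hypothetical Python walk-through of that same logic over the thread's sample data (illustration only, not a replacement for the single-pass SQL):

```python
clients = {1: 'parag', 2: 'tom', 3: 'jonathan', 4: 'neville', 5: 'sam'}
# (a1, a4) pairs from the inserts; a4 > 2 means the account is closed.
accounts = [(1, 1), (1, 1), (1, 1), (2, 3), (2, 1), (2, 1),
            (3, 3), (5, 3), (5, 3), (5, 3)]

report = []
for c1 in sorted(clients):
    cnt1 = sum(1 for a1, _ in accounts if a1 == c1)            # all accts
    cnt2 = sum(1 for a1, a4 in accounts if a1 == c1 and a4 > 2)  # closed
    if cnt1 == cnt2:  # no accounts at all, or every account closed
        report.append((c1, clients[c1],
                       'no accts' if cnt1 == 0 else 'all closed'))

print(report)
# [(3, 'jonathan', 'all closed'), (4, 'neville', 'no accts'),
#  (5, 'sam', 'all closed')]
```

The SQL version gets the same effect in one pass via the outer join plus count(a.a1) (which ignores the NULLs an unmatched outer-join row produces) and the conditional count of a4 > 2.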
A reader, May 20, 2005 - 12:33 pm UTC
Hi Tom,
I have a table t
(
id1 number,
id2 number
);
I want to get results which will return rows such that they have same id2 but different id1. What is the easiest way to do this?
Thanks.
May 20, 2005 - 6:37 pm UTC
select * from (
select id2, id1, count(distinct id1) over (partition by id2) cnt
from t
) where cnt > 1;
SQL Help
Parag Jayant Patankar, May 24, 2005 - 11:01 am UTC
Hi Tom
Thanks for your brilliant answer to my SQL question to you in this thread.
I need one more piece of expert help in Oracle 9.2.
Suppose I have following tables
create table c ( c1 number(3), c2 varchar2(10));
create table a ( a1 number(3), a2 number(3), a3 number, a4 number(1));
create table ac( ac1 number(3), ac2 number(3));
insert into c values (1, 'parag');
insert into c values (2, 'tom');
insert into c values (3, 'jonathan');
insert into c values (4, 'neville');
insert into c values (5, 'sam');
insert into c values (6, 'joe');
insert into c values (7, 'barbie');
insert into c values (8, 'benoit');
insert into c values (9, 'jack');
insert into a values (1, 1, 100, 1);
insert into a values (1, 2, 100, 1);
insert into a values (1, 3, 400, 1);
insert into a values (2, 1, 500, 3);
insert into a values (2, 2, 600, 1);
insert into a values (2, 3, 7000,1);
insert into a values (3, 1, 500, 3);
insert into a values (5, 1, 300, 3);
insert into a values (5, 1, 400, 3);
insert into a values (5, 1, 700, 3);
insert into ac values (6, 7);
insert into ac values (6, 8);
commit;
For my question in this thread you have provided following excellent answer
select a.*, decode( a.cnt1, 0, 'no accts', 'all closed' )
from (
select c.c1, c.c2, count(a.a1) cnt1, count(case when a.a4>2 then 1 end) cnt2
from c, a
where c.c1 = a.a1(+)
group by c.c1, c.c2
) a
where cnt1=cnt2
/
Which is giving following output
C1 C2 CNT1 CNT2 DECODE(A.C
---------- ---------- ---------- ---------- ----------
3 jonathan 1 1 all closed
4 neville 0 0 no accts
5 sam 3 3 all closed
6 joe 0 0 no accts
7 barbie 0 0 no accts
8 benoit 0 0 no accts
9 jack 0 0 no accts
Now I want to modify this query so that clients appearing in table "ac" are not reported. As clients 7 and 8 are linked to client 6, clients 6, 7 and 8 should not appear in the SQL output.
For e.g.
C1 C2 CNT1 CNT2 DECODE(A.C
---------- ---------- ---------- ---------- ----------
3 jonathan 1 1 all closed
4 neville 0 0 no accts
5 sam 3 3 all closed
9 jack 0 0 no accts
Most of the time I can arrive at the correct answer, but I do not know whether I am arriving at it by the right method or not (performance wise). For example, I can generate the required output using the SET/MINUS operator or a subquery, but I do not know whether that is the correct way. I always have the feeling that this kind of output can be generated using some simple principle.
Kindly guide me
1. Can you show me the SQL query for generating the required output? (In my actual system the ac table has a few thousand records.)
2. How do I arrive at the correct principle for writing such SQLs?
regards & thanks
pjp
May 24, 2005 - 1:10 pm UTC
and c1 not in (select ac1 from ac union all select ac2 from ac )
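Spliced into the earlier query, that predicate would sit inside the inline view, something like this (a sketch against the tables created above; note that NOT IN behaves differently if ac1 or ac2 could be NULL, which is not the case in this test data):

```sql
select a.*, decode( a.cnt1, 0, 'no accts', 'all closed' )
  from (
        select c.c1, c.c2,
               count(a.a1) cnt1,
               count(case when a.a4 > 2 then 1 end) cnt2
          from c, a
         where c.c1 = a.a1(+)
           and c.c1 not in (select ac1 from ac
                            union all
                            select ac2 from ac)
         group by c.c1, c.c2
       ) a
 where cnt1 = cnt2;
```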
Query
A reader, July 07, 2005 - 10:35 am UTC
Hi Tom,
I have two tables
CREATE TABLE t1
(
CLM_ID NUMBER,
HUB_LOAD_SEQUENCE_NO NUMBER);
CREATE TABLE t2
(
HUB_LOAD_SEQUENCE_NO NUMBER,
EDW_LOAD_SEQUENCE_NO NUMBER);
and index on t1 table
CREATE INDEX t1_idx01 ON t1
(HUB_LOAD_SEQUENCE_NO);
and I am writing the following query to get all the rows within the range identified in table t2. We may have millions of rows in t1, but t2 always has exactly one row. HUB_LOAD_SEQUENCE_NO is an integer that is incremented in t2 before data is loaded into t1, and is then populated in t1.
select *
from t1 a
where exists
(select 1 from t2 b
where a.HUB_LOAD_SEQUENCE_NO > b.EDW_LOAD_SEQUENCE_NO and a.HUB_LOAD_SEQUENCE_NO <= b.HUB_LOAD_SEQUENCE_NO)
Do you think the above query is okay from a performance point of view?
Thanks
July 07, 2005 - 1:04 pm UTC
why wouldn't you do the more straight forward:
select * from t1
where HUB_LOAD_SEQUENCE_NO > (select EDW_LOAD_SEQUENCE_NO from t2)
and HUB_LOAD_SEQUENCE_NO <=(select HUB_LOAD_SEQUENCE_NO from t2)
especially if you wanted indexes to be useful on T1?
A reader, July 07, 2005 - 2:30 pm UTC
Hi Tom,
Thanks a million!!!
Query Logic Help
denni50, July 07, 2005 - 3:07 pm UTC
Hi Tom
I have the query below. What I need to do is match one column to another column on a different record in the same table, and if there is a match, combine (aggregate) them.
For example, there is a special campaign where records will have an acknowledgement code (yearly aggregate sums) and a default_acknowledgement code (monthly aggregate sums). On this one report, management wants the sums/totals broken down by month (05FS), not yearly (05S). 'F' represents June 2005 (see below).
SQL> select acknowledgement,default_acknowledgement,campaigncode
2 from appeals
3 where campaigncode='D05FS';
ACKN DEFA CAMPAI
---- ---- ------
05S 05FS D05FS
05S 05FS D05FS
05S 05FS D05FS
I only need to work with the first 3 characters from both columns...however what I need to accomplish is:
Find where substr(default_acknowledgement,1,3)=substr(acknowledgement,1,3)then group those sums together and output.
If you look at the output below, the two '05F' rows are separate (because of the UNION). I need those two 05F rows combined, with a sum of 619763.
SQL> select substr(a.acknowledgement,1,3) as Project,sum(a.pieces_mailed) as Dropped,
2 sum(a.total#) as Response, sum(a.total$) as Income, avg(usernumber1) as Cost_Per_Unit
3 from appeals a
4 where a.acknowledgement in('05AA',
5 '05AB',
6 '05AC',
7 '05AD',
8 '05AF',
9 '05AG',
10 '05BA',
11 '05BB',
12 '05BC',
13 '05BD',
14 '05BF',
15 '05BG',
16 '05CA',
17 '05CB',
18 '05CC',
19 '05CD',
20 '05CF',
21 '05CG',
22 '05CH',
23 '05DA',
24 '05DB',
25 '05DC',
26 '05DD',
27 '05DF',
28 '05DG',
29 '05EA',
30 '05EC',
31 '05ED',
32 '05EF',
33 '05EG',
34 '05FA',
35 '05FB',
36 '05FC',
37 '05FD',
38 '05FF',
39 '05FG',
40 '05Y')
41 group by substr(a.acknowledgement,1,3)
42 UNION
43 select substr(a.default_acknowledgement,1,3) as Project,sum(a.pieces_mailed) as Dropped,
44 sum(a.total#) as Response,sum(a.total$) as Income,avg(usernumber1) as Cost_Per_Unit
45 from appeals a
46 where a.default_acknowledgement in('05FS')
47 group by substr(a.default_acknowledgement,1,3)
48 /
PRO DROPPED RESPONSE INCOME COST_PER_UNIT
--- ---------- ---------- ---------- -------------
05A 843388 33439 887844.01 .471597418
05B 914806 32619 1055638.96 .468912784
05C 991906 46599 1829657.49 .56123302
05D 814946 24964 735433.5 .464850449
05E 526517 9956 332143.4 .452991228
05F 22533 136 4619.9 .56
05F 597230 11241 351794.76 .524829932
05Y 163066 12457 539362.25 .881592357
8 rows selected.
thanks for any tips/help!
July 07, 2005 - 3:56 pm UTC
real quick is:
select project, sum(dropped), sum(response), ....
from (Q)
group by project;
where Q is your query with UNION changed to UNION ALL (you do know your union is a DISTINCT operation - you could "lose" rows with it!!!)
but I'm curious, if
substr(default_acknowledgement,1,3)=substr(acknowledgement,1,3)
on the SAME ROW, haven't you double counted them?
we could probably get this down to a single pass, but if the simple quick fix above is faster than fast enough, it is the path of least resistance.
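A tiny illustration of the UNION-as-DISTINCT point, runnable against DUAL: if two branches of the set operation happen to produce identical aggregate rows, UNION keeps only one of them, so a subsequent SUM over the result would be understated.

```sql
-- UNION removes the duplicate: only one row comes back
select 1 as project, 100 as dropped from dual
union
select 1, 100 from dual;

-- UNION ALL keeps both rows, so an outer SUM(dropped) is correct
select 1 as project, 100 as dropped from dual
union all
select 1, 100 from dual;
```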
thanks Tom
denni50, July 07, 2005 - 4:20 pm UTC
....I'll try your solution and get back with you.
No...the match can never be on the same row, since the
same record will have '05S' and '05FS'...the first
three characters of each is 05S,05F respectively.
The default_acknowledgement( first 3 characters) has to
match other records(rows) that has acknowledgement(first 3
characters) equaling '05F'
so that:
row 1 substr(def_ack,1,3) = row 10 substr(ack,1,3)
thanks!
UNION ALL
denni50, July 07, 2005 - 5:00 pm UTC
Tom
if I use a union all it pretty much does the same thing
except it places the second 05F at the bottom.
SQL> select substr(a.acknowledgement,1,3),sum(a.pieces_mailed),
2 sum(a.total#),sum(a.total$), avg(usernumber1)
3 from appeals a
4 where a.acknowledgement in('05AA',
5 '05AB',
6 '05AC',
7 '05AD',
8 '05AF',
9 '05AG',
10 '05BA',
11 '05BB',
12 '05BC',
13 '05BD',
14 '05BF',
15 '05BG',
16 '05CA',
17 '05CB',
18 '05CC',
19 '05CD',
20 '05CF',
21 '05CG',
22 '05CH',
23 '05DA',
24 '05DC',
25 '05DD',
26 '05DF',
27 '05DG',
28 '05EA',
29 '05EC',
30 '05ED',
31 '05EF',
32 '05EG',
33 '05FA',
34 '05FB',
35 '05FC',
36 '05FD',
37 '05FF',
38 '05FG',
39 '05Y')
40 group by substr(a.acknowledgement,1,3)
41 UNION ALL
42 select substr(a.default_acknowledgement,1,3) as
43 Project,sum(a.pieces_mailed) as Dropped,sum(a.total#) as Response,sum(a.total$) as Income,
44 avg(usernumber1) as Cost_Per_Unit
45 from appeals a
46 where a.default_acknowledgement in('05FS')
47 group by substr(a.default_acknowledgement,1,3);
SUB SUM(A.PIECES_MAILED) SUM(A.TOTAL#) SUM(A.TOTAL$) AVG(USERNUMBER1)
--- -------------------- ------------- ------------- ----------------
05A 843388 33439 887844.01 .471597418
05B 914806 32619 1055638.96 .468912784
05C 991906 46599 1829657.49 .56123302
05D 302423 8242 275749.03 .483467742
05E 526517 9956 332143.4 .452991228
05F 597230 11241 351794.76 .524829932
05Y 163068 12457 539362.25 .880668081
05F 22533 136 4619.9 .56
8 rows selected.
SQL>
not sure how to code the from (Q) query... I keep getting a lot of identifier errors, etc.
1 select substr(acknowledgement,1,3) as Project,sum(pieces_mailed) as Dropped,
2 sum(total#) as Response, sum(total$) as Income, avg(usernumber1) as Cost_Per_Unit
3 from(select substr(a.acknowledgement,1,3),sum(a.pieces_mailed),
4 sum(a.total#),sum(a.total$), avg(usernumber1)
5 from appeals a
6 where a.acknowledgement in('05AA',
7 '05AB',
8 '05AC',
9 '05AD',
10 '05AF',
11 '05AG',
12 '05BA',
13 '05BB',
14 '05BC',
15 '05BD',
16 '05BF',
17 '05BG',
18 '05CA',
19 '05CB',
20 '05CC',
21 '05CD',
22 '05CF',
23 '05CG',
24 '05CH',
25 '05DA',
26 '05DC',
27 '05DD',
28 '05DF',
29 '05DG',
30 '05EA',
31 '05EC',
32 '05ED',
33 '05EF',
34 '05EG',
35 '05FA',
36 '05FB',
37 '05FC',
38 '05FD',
39 '05FF',
40 '05FG',
41 '05Y')
42 group by substr(a.acknowledgement,1,3)
43 UNION ALL
44 select substr(a.default_acknowledgement,1,3) as
45 Project,sum(a.pieces_mailed) as Dropped,sum(a.total#) as Response,sum(a.total$) as Income,
46 avg(usernumber1) as Cost_Per_Unit
47 from appeals a
48 where a.default_acknowledgement in('05FS')
49 group by substr(a.default_acknowledgement,1,3))
50* group by substr(acknowledgement,1,3)
SQL> /
group by substr(acknowledgement,1,3)
*
ERROR at line 50:
ORA-00904: "ACKNOWLEDGEMENT": invalid identifier
thanks again!
July 07, 2005 - 5:26 pm UTC
select project, sum(response), sum(inc), ...
from ( YOUR_QUERY_AS_IT_EXISTS_GOES_HERE )
group by project;
THANKS TOM!!!
denni50, July 08, 2005 - 8:22 am UTC
it worked!!!...exactly what I needed...at first I didn't
understand how to apply the first select statement to the
from(select statement)....learned something new...yeayyy!
;~)
SQL> select Project,sum(Dropped),
2 sum(Response),sum(Income),avg(Cost_Per_Unit)
3 from(select substr(a.acknowledgement,1,3) as project,sum(a.pieces_mailed) as Dropped,
4 sum(a.total#) as Response,sum(a.total$) as Income, avg(a.usernumber1) as Cost_Per_Unit
5 from appeals a
6 where a.acknowledgement in('05AA',
7 '05AB',
8 '05AC',
9 '05AD',
10 '05AF',
11 '05AG',
12 '05BA',
13 '05BB',
14 '05BC',
15 '05BD',
16 '05BF',
17 '05BG',
18 '05CA',
19 '05CB',
20 '05CC',
21 '05CD',
22 '05CF',
23 '05CG',
24 '05CH',
25 '05DA',
26 '05DC',
27 '05DD',
28 '05DF',
29 '05DG',
30 '05EA',
31 '05EC',
32 '05ED',
33 '05EF',
34 '05EG',
35 '05FA',
36 '05FB',
37 '05FC',
38 '05FD',
39 '05FF',
40 '05FG',
41 '05Y')
42 group by substr(a.acknowledgement,1,3)
43 UNION ALL
44 select substr(a.default_acknowledgement,1,3) as
45 Project,sum(a.pieces_mailed) as Dropped,sum(a.total#) as Response,sum(a.total$) as Income,
46 avg(usernumber1) as Cost_Per_Unit
47 from appeals a
48 where a.default_acknowledgement in('05FS')
49 group by substr(a.default_acknowledgement,1,3))
50 group by project
51 /
PRO SUM(DROPPED) SUM(RESPONSE) SUM(INCOME) AVG(COST_PER_UNIT)
--- ------------ ------------- ----------- ------------------
05A 843388 33439 887844.01 .471597418
05B 914806 32619 1055638.96 .468912784
05C 991906 46599 1829657.49 .56123302
05D 302423 8242 275749.03 .483467742
05E 526517 9956 332143.4 .452991228
05F 619763 11377 356414.66 .542414966
05Y 163068 12457 539362.25 .880668081
OK
Kumar, July 12, 2005 - 1:28 pm UTC
Hi Tom,
For this code snippet, can I do a periodic commit?
I would like to commit every 500 rows.
for x in (select * from big_temp) loop
... do some validations with temp data
... and insert into base tables if validation
... succeeds,else raise different types of exceptions.
insert into t1 values(..);
insert into t2 values(...);
end loop;
Inside the loop I would like to do a commit for every 500 rows.
Is that possible?
Could you please help me with a code snippet?
July 13, 2005 - 10:29 am UTC
the answer is "well of course you CAN, but SHOULD you"
and the answer to that is "no"
what happens when, after you have committed 1000 records and have many more to go, the system fails?
where is your restart logic to be able to pick up at some meaningful point and complete your work?
(you want to BULK UP too -- forall processing if you are going to do this painful slow by slow processing)
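The "bulk up" suggestion might look something like this sketch, assuming Oracle 9iR2-or-later record binding and that the validations can be applied to a fetched batch (big_temp and t1 are the tables from the question; the validation step is a placeholder):

```sql
declare
    cursor c is select * from big_temp;
    type array_t is table of big_temp%rowtype index by binary_integer;
    l_rows array_t;
begin
    open c;
    loop
        -- fetch 500 rows at a time instead of row by row
        fetch c bulk collect into l_rows limit 500;
        exit when l_rows.count = 0;

        -- ... validate l_rows here, removing or flagging bad entries ...

        -- one round trip to insert the whole batch
        forall i in 1 .. l_rows.count
            insert into t1 values l_rows(i);

        -- commit here ONLY if you have restart logic that can
        -- resume safely after a failure mid-load
    end loop;
    close c;
end;
/
```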
???
Kumar, July 12, 2005 - 11:57 pm UTC
Hi Tom,
I expected your reply.
Please do reply.
July 13, 2005 - 11:00 am UTC
I sleep sometimes.
I even go to dinner with friends occasionally.
Another Query Logic
denni50, July 15, 2005 - 8:43 am UTC
Tom...need your help again.
It's almost along the same lines as the other query you helped me with earlier in this thread. The difference is that this query involves two separate tables and columns that I need to combine into one result set. I've been testing a myriad of SQL statements, scalar subqueries, etc., trying to get the correct result output with no luck; I keep getting the garbage below (sample of one variation):
(r.ack_code will always = '05Y')
SQL> select to_char(r.ack_date,'MMYYYY') as project, sum(r.receipt_count) as Dropped,
2 count(payamount) as Response,sum(p.payamount) as Income, avg(p.usernumber1) as Cost_Per_Unit
3 from payment p,receipting r
4 where p.acknowledgement=r.ack_code
5 and to_char(p.paydate,'MMYYYY')=to_char(r.ack_date,'MMYYYY')
6 group by to_char(r.ack_date,'MMYYYY');
PROJEC DROPPED RESPONSE INCOME COST_PER_UNIT
------ ---------- ---------- ---------- -------------
012005 12969500 1000 51081.84 51.08184
022005 37469619 3284 142051.48 43.2556273
032005 68955792 7032 309690.2 44.0401308
042005 159119680 12160 485689.24 39.9415493
052005 98539248 12576 527281.88 41.9276304
062005 88548030 13320 617276.36 46.3420691
072005 1805072 4848 186652.5 38.5009282
7 rows selected.
SQL>
below is the correct results when I run the queries on each table separately:
(excluded the cost_per_unit column for testing purposes only)
SQL> select to_char(r.ack_date,'MMYYYY') as project,sum(r.receipt_count) as Dropped
2 from receipting r
3 group by to_char(r.ack_date,'MMYYYY');
PROJEC DROPPED
------ ----------
012005 51878
022005 45639
032005 39224
042005 52342
052005 31342
062005 26591
072005 1117
7 rows selected.
SQL> select to_char(p.paydate,'MMYYYY') as project,count(p.payamount) as Response,
2 sum(payamount) as Income
3 from payment p
4 where p.acknowledgement='05Y'
5 group by to_char(p.paydate,'MMYYYY');
PROJEC RESPONSE INCOME
------ ---------- ----------
012005 250 12770.46
022005 821 35512.87
032005 1758 77422.55
042005 3040 121422.31
052005 3144 131820.47
062005 3330 154319.09
072005 1616 62217.5
7 rows selected.
this is what I need:
PROJEC DROPPED RESPONSE INCOME
------ ---------- ---------- ----------
012005 51878 250 12770.46
022005 45639 821 35512.87
032005 39224 1758 77422.55
042005 52342 3040 121422.31
052005 31342 3144 131820.47
062005 26591 3330 154319.09
072005 1117 1616 62217.5
...I can't seem to get the grouping to work correctly; if analytics can achieve what I need, that would be great... my never-ending thanks!
July 15, 2005 - 5:55 pm UTC
just
select ....
from (q1), (q2)
where q1.projec = q2.projec;
using an outer join or full outer join if need be.
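Applied to the two working queries above, that shape would be roughly the following (a sketch; a FULL OUTER JOIN syntax would be needed instead if a month could appear in only one of the two tables):

```sql
select q1.project, q1.dropped, q2.response, q2.income
  from ( select to_char(r.ack_date,'MMYYYY') project,
                sum(r.receipt_count) dropped
           from receipting r
          group by to_char(r.ack_date,'MMYYYY') ) q1,
       ( select to_char(p.paydate,'MMYYYY') project,
                count(p.payamount) response,
                sum(p.payamount) income
           from payment p
          where p.acknowledgement = '05Y'
          group by to_char(p.paydate,'MMYYYY') ) q2
 where q1.project = q2.project;
```

Each inline view aggregates its own table to the month level first, so joining them afterwards cannot multiply rows the way a direct join between the detail tables did.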
Different solution to previous request
denni50, July 15, 2005 - 4:26 pm UTC
Tom
Unable to figure out how to combine multi-table aggregates into one result set after exhaustive research, and having to get this done, I came up with a simple solution: create views of each table's aggregate results and then combine the two views as below:
SQL> select decode(r.project,'012005','Jan 2005',
2 '022005','Feb 2005',
3 '032005','Mar 2005',
4 '042005','Apr 2005',
5 '052005','May 2005',
6 '062005','Jun 2005',
7 '072005','Jul 2005',
8 '082005','Aug 2005',
9 '092005','Sep 2005',
10 '102005','Oct 2005',
11 '112005','Nov 2005',
12 '122005','Dec 2005') Month,
13 r.dropped,p.response,p.income,p.cost_per_unit
14 from tku_receipt r,tku_payment p
15 where p.project=r.project;
MONTH DROPPED RESPONSE INCOME COST_PER_UNIT
-------- ---------- ---------- ---------- -------------
Jan 2005 51878 250 12770.46 51.08184
Feb 2005 45639 821 35512.87 43.2556273
Mar 2005 39224 1758 77422.55 44.0401308
Apr 2005 52342 3040 121422.31 39.9415493
May 2005 31342 3144 131820.47 41.9276304
Jun 2005 26591 3330 154319.09 46.3420691
Jul 2005 2961 1717 65886 38.3727432
7 rows selected.
...however, I would still like to learn how to combine multi-table aggregates with joins into one result set, if that is at all possible.
thx...enjoy the w/e!
procedural code vs sql
a reader, July 16, 2005 - 5:46 am UTC
Tom,
I am trying to convert a procedural logic into sql statement. Please find the details below:
create table t1 (acc_no number (5), c2 date , c3 number);
insert into t1 values (100, sysdate -10, null);
insert into t1 values (101, sysdate -9, null);
insert into t1 values (200, null, null);
insert into t1 values (300, sysdate -1, null);
insert into t1 values (400, sysdate -1, null);
create table t2 (acc_no number (5), cycle number, type varchar2(1));
insert into t2 values (100, 3, 'A');
insert into t2 values (101, 1, 'B');
insert into t2 values (200, 0, 'D');
insert into t2 values (300, 0, 'C');
insert into t2 values (400, 5, 'E');
create table t3 (type varchar2(1), cost number);
insert into t3 values ('A', 10);
insert into t3 values ('B', 20);
insert into t3 values ('C', 30);
insert into t3 values ('D', 40);
insert into t3 values ('E', 50);
SQL> select * from t1;
ACC_NO C2 C3
---------- --------------- ----------
100 06-JUL-05
101 07-JUL-05
200
300 15-JUL-05
400 15-JUL-05
SQL> select * from t2;
ACC_NO CYCLE T
---------- ---------- -
100 3 A
101 1 B
300 0 C
200 0 D
400 5 E
SQL> select * from t3;
T COST
- ----------
A 10
B 20
C 30
D 40
E 50
t1 and t2 have a relationship based on the acc_no field. However, t1 may have accounts that are not present in t2.
t3 stores cost corresponding to each type.
purpose : update column c3 of table t1 based on the following logic:
if t2.type in ('A', 'B', 'C') then
if t2.cycle > 0 or t1.c2 is not null then
set t1.c3 = t3.cost (corresponding to matching type)
else
set t1.c3 =0
elsif t2.type ='D' then
if t1.c2 > (sysdate -1) then
if t2.cycle =1 then
set t1.c3 =t3.cost (corresponding to matching type)
elsif t2.cycle >1
set t1.c3 =100
endif
elsif t2.type ='E' then
if t1.c2 > (sysdate -1) then
set t1.c3 =t3.cost (corresponding to matching type)
end if
else
set t1.c3=0
end
Many thanks in advance for your help.
July 16, 2005 - 9:32 am UTC
assuming I can make the following very reasonable (for they must already be true) changes:
create table t3 (type varchar2(1) primary key, cost number);
create table t2 (acc_no number (5) primary key, cycle number, type varchar2(1) not null references t3 );
then I believe the following works (you'll need to test the logic, especially the UPPER CASE parts, because you don't say what happens if, for example, type='D' and t1.c2 is NOT > sysdate-1; what should t1.c3 be set to, NULL or itself?):
ops$tkyte@ORA9IR2> update (
2 select t2.type t2_type,
3 t2.cycle t2_cycle,
4 t1.c2 t1_c2,
5 t1.c3 t1_c3,
6 t3.cost t3_cost
7 from t1, t2, t3
8 where t1.acc_no = t2.acc_no(+)
9 and t2.type = t3.type(+)
10 )
11 set t1_c3 = case when t2_type in ( 'A', 'B', 'C' )
12 then case when t2_cycle > 0 or t1_c2 is not null
13 then t3_cost
14 else 0
15 end
16 when t2_type = 'D'
17 then case when t1_c2 > (sysdate-1)
18 then case when t2_cycle = 1
19 then t3_cost
20 when t2_cycle > 1
21 then 100
22 ELSE T1_C3
23 end
24 ELSE T1_C3
25 end
26 when t2_type = 'E'
27 then case when t1_c2 > (sysdate-1)
28 then t3_cost
29 ELSE T1_C3
30 end
31 else 0
32 end
33 /
5 rows updated.
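A quick sanity check after the update might join everything back together so each branch of the CASE can be eyeballed row by row (a sketch against the tables above):

```sql
-- show, per account, the inputs to the CASE alongside the resulting c3
select t1.acc_no, t2.type, t2.cycle, t1.c2, t1.c3, t3.cost
  from t1, t2, t3
 where t1.acc_no = t2.acc_no(+)
   and t2.type = t3.type(+)
 order by t1.acc_no;
```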
thanks Tom...
denni50, July 16, 2005 - 9:02 am UTC
naturally you always make things seem so simple (I guess because they are)... never thought to put both queries as subqueries in the 'FROM' clause (inline views).
I'm just now beginning to learn the power of these queries in 9i, and that you can put them almost anywhere.
Just an FYI: during my research I ended up on the (initials only) DB website, where he had some pretty good examples of using scalar subqueries, of which I applied several of the techniques presented (still with no luck), and then read (further down the page) that scalar subqueries don't work with GROUP BY and HAVING clauses (along with several other clause restrictions). I'll find the link and send it to you.
DB link
denni50, July 16, 2005 - 9:20 am UTC
Tom
here's the link...
http://builder.com.com/5100-6388_14-1051694-2.html
Maybe I'm misinterpreting what he's explaining. Reading it over again, I believe it means that the scalar subquery result set cannot be used in a GROUP BY; correct me if I'm wrong.
thanks
July 16, 2005 - 9:59 am UTC
... Scalar subqueries are a powerful enhancement to Oracle9i SQL. ....
Very first sentence starts off wrong. bummer.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ops$tkyte@ORA815> select (select count(*) from dual) from dual;
(SELECTCOUNT(*)FROMDUAL)
------------------------
1
and the examples he uses, they *should* be analytics, doesn't really show where scalar subqueries rule, why they are good.
and if you run his sql, you'll see why I always cut and paste from SQL*Plus :) At least that way, I'm pretty sure the examples are syntactically correct. Nothing is more frustrating than having to debug an example before using it (of a, b, c and d, only A runs, and it is not even a scalar subquery example ;)
His suggested use in a single-row insert is sort of strange for a DW. I don't know about you, but I would certainly not do what is suggested in
http://builder.com.com/5110-6388-1051712.html
Do you really want to full scan the table 4 times to get 4 aggregates? I don't.
I don't think you misinterpreted anything, it just doesn't give any really good examples. Even if you hack the sql to make it work, the examples are not representative of why you would use a scalar subquery.
Here is an excerpt from Effective Oracle by Design on Scalar subqueries
<quote>
Scalar Subqueries
The scalar subquery is a neat capability in SQL that can provide the easiest way to get excellent performance from a query. Basically, since Oracle8i Release 1, you have been able to use a subquery that returns at most a single row and column anywhere you could use a character string literal before. You can code this:
Select 'Something'
From dual
Where 'a' = 'b'
So, you can also code this:
Select (select column from ... where ...)
From dual
Where (select column from ... where ...) = (select ... from ...)
I mainly use this capability for the following tasks:
Removing the need for outer joins
Aggregating information from multiple tables in a single query
Selectively selecting from a different table/row in a single query
We'll take a look at each of these uses in turn.
Remove an Outer Join
When you remove an outer join, not only is the resulting query usually easier to read, but many times, the performance can be improved as well. The general idea is you have a query of this form:
Select ..., outer_joined_to_table.column
From some_table, outer_joined_to_table
Where ... = outer_joined_to_table.column(+)
You can code that as follows:
Select ..., (select column from outer_joined_to_table where ...)
From some_table;
In many cases, there is a one-to-one relationship from the driving table to the table being outer-joined to, or an aggregate function is applied to the outer-joined-to column. For example, consider this query:
select a.username, count(*)
from all_users a, all_objects b
where a.username = b.owner (+)
group by a.username;
Its results are equivalent to running this query:
select a.username, (select count(*)
from all_objects b
where b.owner = a.username) cnt
from all_users a
But somehow, the second query is more efficient. TKPROF shows us the efficiency, but this time, it lets us down. It isn't useful for seeing why this is more efficient.
<b>NOTE: The somehow is related to an entirely different plan - see Alberto's message below.
</b>
select a.username, count(*)
from all_users a, all_objects b
where a.username = b.owner (+)
group by a.username
call count cpu elapsed disk query current rows
------- ------ ----- ------- ------- ------- ------- -------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 1.90 2.22 0 144615 0 44
------- ------ ----- ------- ------- ------- ------- -------
total 6 1.90 2.22 0 144615 0 44
Rows Row Source Operation
------- ---------------------------------------------------
44 SORT GROUP BY
17924 MERGE JOIN OUTER
44 SORT JOIN
44 NESTED LOOPS
44 NESTED LOOPS
44 TABLE ACCESS FULL USER$
44 TABLE ACCESS CLUSTER TS$
44 INDEX UNIQUE SCAN I_TS# (object id 7)
44 TABLE ACCESS CLUSTER TS$
44 INDEX UNIQUE SCAN I_TS# (object id 7)
17916 SORT JOIN
30581 VIEW
30581 FILTER
31708 TABLE ACCESS BY INDEX ROWID OBJ$
31787 NESTED LOOPS
78 TABLE ACCESS FULL USER$
31708 INDEX RANGE SCAN I_OBJ2 (object id 37)
1035 TABLE ACCESS BY INDEX ROWID IND$
1402 INDEX UNIQUE SCAN I_IND1 (object id 39)
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
1 FIXED TABLE FULL X$KZSPR
Now, let's compare this to the second version:
select a.username, (select count(*)
from all_objects b
where b.owner = a.username) cnt
from all_users a
call count cpu elapsed disk query current rows
------- ------ ----- ------- ------- ------- ------- -------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 1.63 1.98 0 135594 0 44
------- ------ ----- ------- ------- ------- ------- -------
total 6 1.63 1.98 0 135594 0 44
Rows Row Source Operation
------- ---------------------------------------------------
44 NESTED LOOPS
44 NESTED LOOPS
44 TABLE ACCESS FULL OBJ#(22)
44 TABLE ACCESS CLUSTER OBJ#(16)
44 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
44 TABLE ACCESS CLUSTER OBJ#(16)
44 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
We see it did less logical I/O, but all references to the ALL_OBJECTS part of the query are missing from the plan. In fact, it is not possible to see the plan for these scalar subqueries as of Oracle9i Release 2. This is unfortunate, and we can only hope that an upcoming version will show scalar subqueries.
What if you need more than one column from the related table? Suppose we needed not only the COUNT(*), but also the AVG(OBJECT_ID). We have four choices:
Go back to the outer join.
Use two scalar subqueries.
Use a trick with a single scalar subquery.
Use an object type.
Since the first option is pretty obvious, we won't demonstrate that. We will take a look at the other choices, and demonstrate why the third and fourth options may be worthwhile.
Use Two Scalar Subqueries
First, we'll look at using two scalar subqueries:
select a.username, (select count(*)
from all_objects b
where b.owner = a.username) cnt,
(select avg(object_id )
from all_objects b
where b.owner = a.username) avg
from all_users a
call count cpu elapsed disk query current rows
------- ------ ----- ------- ------- ------- ------- -------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 3.18 3.25 0 271036 0 44
------- ------ ----- ------- ------- ------- ------- -------
total 6 3.18 3.25 0 271036 0 44
That effectively doubled the work (look at the QUERY column and compare its values to the previous results). We can get back to where we were, however, just by using a small trick.
Use a Single Scalar Subquery
Instead of running two scalar subqueries, we will run one that will encode all of the data of interest in a single string. We can use SUBSTR then to pick off the fields we need and convert them to the appropriate types again.
select username,
to_number( substr( data, 1, 10 ) ) cnt,
to_number( substr( data, 11 ) ) avg
from (
select a.username, (select to_char( count(*), 'fm0000000009' ) ||
avg(object_id)
from all_objects b
where b.owner = a.username) data
from all_users a
)
call count cpu elapsed disk query current rows
------- ------ ----- ------- ------- ------- ------- -------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 1.66 1.73 0 135594 0 44
------- ------ ----- ------- ------- ------- ------- -------
total 6 1.68 1.75 0 135594 0 44
So, in the inline view, we formatted the COUNT(*) in a ten-character wide, fixed-width field. The format modifier (FM) in the TO_CHAR format suppressed the leading space that a number would have, since we know the count will never be negative (so we do not need a sign). We then just concatenate on the AVG() we want. That does not need to be fixed width, since it is the last field. I prefer to use fixed-width fields in all cases because it makes the SUBSTR activity at the next level much easier to perform. The outer query then just must SUBSTR off the fields and use TO_NUMBER or TO_DATE as appropriate to convert the strings back to their native type. As you can see, in this case, it paid off to do this extra work.
One note of caution on this technique though: Beware of NULLs. On fields that allow NULLs, you will need to use NVL. For example, if COUNT(*) could have returned a NULL (in this case, it cannot), we would have coded this way:
nvl( to_char(count(*),'fm0000000009'), rpad( ' ', 10 ) )
That would have returned ten blanks, instead of concatenating in a NULL, which would have shifted the string over, destroying our results.
Use an Object Type
Lastly, we can use an object type to return a "scalar" value that is really a complex object type. We need to start by creating a scalar type to be returned by our subquery:
ops$tkyte@ORA920> create or replace type myScalarType as object
2 ( cnt number, average number )
3 /
Type created.
That maps to the two numbers we would like to return: the count and the average. Now, we can get the result using this query:
select username, a.data.cnt, a.data.average
from (
select username, (select myScalarType( count(*), avg(object_id) )
from all_objects b
where b.owner = a.username ) data
from all_users a
) A
call count cpu elapsed disk query current rows
------- ------ ----- ------- ------- ------- ------- -------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 1.56 1.63 0 135594 0 44
------- ------ ----- ------- ------- ------- ------- -------
total 6 1.58 1.65 0 135594 0 44
Here, we get the same results without needing to encode the data using TO_CHAR and decode the data using SUBSTR and TO_NUMBER. Additionally, the presence of NULLs would not further complicate the query.
Using the object type is convenient to reduce the query complexity, but it does involve the extra step of creating that type, which some people are hesitant to do. So, while this technique is easier to use, I find most people will use the encode/decode technique rather than the object type approach. The performance characteristics are very similar with either technique.
Aggregate from Multiple Tables
Suppose you are trying to generate a report that shows by username, the username, user ID, created date, number of tables they own, and the number of constraints they own for all users created within the last 50 days. This would be easy if ALL_OBJECTS had both TABLES and CONSTRAINTS, but it doesn't. You need to count rows in two different tables. If you just joined, you would end up with a Cartesian join, so that if a user owned six tables and had three constraints, you would get 18 rows.
I'll demonstrate two queries to retrieve this information: one with and one without scalar subqueries. Without scalar queries, there are many ways to achieve this. One technique is to use a Cartesian join. We could also use multiple levels of inline views and join ALL_USERS to ALL_CONSTRAINTS, aggregate that, and then join that to ALL_TABLES (or reverse the two) as well. We could join ALL_USERS to inline views that aggregate ALL_CONSTRAINTS and ALL_TABLES to the same level of detail. We'll compare the implementation of those last two methods to the scalar subquery here. The second inline view solution would look like this:
ops$tkyte@ORA920> select a.username, a.user_id, a.created,
2 nvl(b.cons_cnt,0) cons, nvl(c.tables_cnt,0) tables
3 from all_users a,
4 (select owner, count(*) cons_cnt
5 from all_constraints
6 group by owner) b,
7 (select owner, count(*) tables_cnt
8 from all_tables
9 group by owner) c
10 where a.username = b.owner(+)
11 and a.username = c.owner(+)
12 and a.created > sysdate-50
13 /
USERNAME USER_ID CREATED CONS TABLES
------------------------------ ---------- --------- ---------- ----------
A 511 04-JUL-03 3 1
A1 396 20-JUN-03 0 1
B 512 04-JUL-03 3 1
C 470 21-JUN-03 0 0
D 471 21-JUN-03 0 1
OPS$TKYTE 513 05-JUL-03 17 6
6 rows selected.
Elapsed: 00:00:01.94
We had to use outer joins from ALL_USERS to the two inline views - otherwise we would "lose" rows for users that did not have any tables or had tables but no constraints. The performance of this query - about 2 seconds on my system - is not the best. Using scalar subqueries instead, we see a query that looks very similar - yet the performance characteristics are very different:
ops$tkyte@ORA920> select username, user_id, created,
2 (select count(*)
3 from all_constraints
4 where owner = username) cons,
5 (select count(*)
6 from all_tables
7 where owner = username) tables
8 from all_users
9 where all_users.created > sysdate-50
10 /
USERNAME USER_ID CREATED CONS TABLES
------------------------------ ---------- --------- ---------- ----------
A 511 04-JUL-03 3 1
A1 396 20-JUN-03 0 1
B 512 04-JUL-03 3 1
C 470 21-JUN-03 0 0
D 471 21-JUN-03 0 1
OPS$TKYTE 513 05-JUL-03 17 6
6 rows selected.
Elapsed: 00:00:00.06
It is true that we can "tune" that first query. When using the inline views, Oracle produces the aggregations for every user in the database and then outer joins those results to ALL_USERS. Most of our users are not in this report - only the recently created ones - so we are computing aggregates for lots of data we are not going to use. We can manually push the predicate down into these inline views:
ops$tkyte@ORA920> select a.username, a.user_id, a.created,
2 nvl(b.cons_cnt,0) cons, nvl(c.tables_cnt,0) tables
3 from all_users a,
4 (select all_constraints.owner, count(*) cons_cnt
5 from all_constraints, all_users
6 where all_users.created > sysdate-50
7 and all_users.username = all_constraints.owner
8 group by owner) b,
9 (select all_tables.owner, count(*) tables_cnt
10 from all_tables, all_users
11 where all_users.created > sysdate-50
12 and all_users.username = all_tables.owner
13 group by owner) c
14 where a.username = b.owner(+)
15 and a.username = c.owner(+)
16 and a.created > sysdate-50
17 /
USERNAME USER_ID CREATED CONS TABLES
------------------------------ ---------- --------- ---------- ----------
A 511 04-JUL-03 3 1
A1 396 20-JUN-03 0 1
B 512 04-JUL-03 3 1
C 470 21-JUN-03 0 0
D 471 21-JUN-03 0 1
OPS$TKYTE 513 05-JUL-03 17 6
6 rows selected.
Elapsed: 00:00:00.10
Here, it is not just the performance boost you may achieve that makes this approach attractive, but also its simplicity.
Select from Different Tables
Using scalar subqueries for selecting from different tables is one of the neater tricks by far. This is useful in two areas:
Joining rows in a table/view to some set of other tables, using data in the query itself to pick the table to join to
Looking up data in an INSERT statement or getting SQLLDR to do code conversions without needing to call PL/SQL
We'll demonstrate each in turn.
Join Rows to a Set of Tables
One of my core scripts is a script I call DBLS (for database ls, or database dir for Windows users). The query is as follows:
ops$tkyte@ORA920> select object_type, object_name,
2 decode( status, 'INVALID', '*', '' ) status,
3 decode( object_type,
4 'TABLE', (select tablespace_name
5 from user_tables
6 where table_name = object_name),
7 'TABLE PARTITION', (select tablespace_name
8 from user_tab_partitions
9 where partition_name =
subobject_name),
10 'INDEX', (select tablespace_name
11 from user_indexes
12 where index_name = object_name),
13 'INDEX PARTITION', (select tablespace_name
14 from user_ind_partitions
15 where partition_name =
subobject_name),
16 'LOB', (select tablespace_name
17 from user_segments
18 where segment_name = object_name),
19 null ) tablespace_name
20 from user_objects a
21 order by object_type, object_name
22 /
This generates a report for the current schema of all of the objects, including their type and status. For objects that are segments (that is, they consume space), it reports the tablespace in which they reside. Now, you might wonder why I didn't just code the following:
select b.object_name, b.object_type,
decode( b.status, 'INVALID', '*', '' ),
a.tablespace_name
from user_segments a, user_objects b
where a.segment_name(+) = b.object_name
and a.segment_type(+) = b.object_type;
It is more terse and seems like a better choice. AUTOTRACE will help to explain why it doesn't work as well.
ops$tkyte@ORA920> select object_type, object_name,
2 decode( status, 'INVALID', '*', '' ) status,
3 decode( object_type,
21 order by object_type, object_name
22 /
86 rows selected.
Statistics
----------------------------------------------------------
820 consistent gets
ops$tkyte@ORA920> select b.object_name, b.object_type,
2 decode( b.status, 'INVALID', '*', '' ),
3 a.tablespace_name
4 from user_segments a, user_objects b
5 where a.segment_name(+) = b.object_name
6 and a.segment_type(+) = b.object_type;
86 rows selected.
Statistics
----------------------------------------------------------
12426 consistent gets
And the larger your schema, the worse it gets. I discovered that if I did selective lookups against the less complex views using the DECODE row by row, the query would consistently perform in milliseconds, even on a large schema. USER_SEGMENTS, on the other hand, is a very general-purpose view - sort of the kitchen sink of views - and outer joining to it could be a killer in certain schemas, to the point where the script was useless since it would take so long to run.
This example shows a technique for joining each row in a result set with a different table. This is also useful when you have a design that uses a single field as a foreign key to N different tables. (Perhaps this is not a good idea, since you cannot use database-integrity constraints, but people do it.) In those cases, a construct such as this is key to pulling the data back together in a fairly efficient manner (in a single query).
Perform Lookups
Lastly, this scalar subquery technique is useful in conjunction with SQLLDR to perform lookups of data. Suppose you are given an input file where the fields are all specified using lookup codes, but you need the data decoded in your database tables. Rather than loading the raw data into staging tables, performing joins, and then inserting into the real tables, you can use a scalar subquery in the control file directly to load the data. For example, a control file could look like this:
LOAD DATA
INFILE *
INTO TABLE t
REPLACE
FIELDS TERMINATED BY '|'
(
username "(select username
from all_users where user_id = :username)"
)
BEGINDATA
0
5
11
19
21
30
That would automatically convert the USER_ID in the data stream into the USERNAME by doing that lookup for you.
Note: In Oracle8i, there is a product issue, whereby if you use this technique, you must also use rows=1 on the SQLLDR command line. Otherwise, you will find the subquery is executed only once and will insert the same value over and over. There are patches that can be applied to various 8174 releases to correct this.
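The decode-while-loading idea can be sketched outside of SQLLDR as well. Here is a minimal Python model of it, where a plain dictionary stands in for the ALL_USERS lookup (the id-to-name values below are hypothetical, not a real data dictionary dump):

```python
# Hypothetical stand-in for the ALL_USERS lookup table; the real control
# file resolves USER_ID -> USERNAME with a scalar subquery instead.
id_to_name = {0: "SYS", 5: "SYSTEM", 11: "OUTLN"}

def load(lines):
    """Decode lookup codes at load time, like the scalar subquery in the
    SQLLDR control file, instead of staging raw codes and joining later."""
    rows = []
    for line in lines:
        user_id = int(line.strip())
        rows.append(id_to_name.get(user_id))  # unknown codes load as None
    return rows

print(load(["0", "5", "11", "19"]))  # ['SYS', 'SYSTEM', 'OUTLN', None]
```

As in the control file, codes with no match simply load as NULL (None here) rather than failing the load.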
</quote>
Immense thanks Tom......
denni50, July 16, 2005 - 12:09 pm UTC
there are no words to express the immense gratitude for
your taking the time to explain and illustrate the differences in performance utilizing ss vs outerjoins.
I've copied and printed your entire illustration and explanation and will add to my collection of your scripts and other solutions I use and have learned from.
I'm sure there are many others out there that will also
benefit from this.
As for testing what DB had on his website, I was in a real crunch to get the data so I could generate a report for the 'big honcho' in Atlanta, and was scrambling from website to website trying to find info on multi-table aggregates and scalar subqueries to combine the tables.
I have your book EOBD and sifted through it and don't recall
any chapter on SS...I'll have to look at it again.
however...another kazillion thanks!....definitely have
learned a great deal from this challenge.
;~)
July 16, 2005 - 1:07 pm UTC
The chapter on Effective SQL :)
about the "somehow"
Alberto Dell'Era, July 18, 2005 - 7:23 am UTC
> NOTE: Scalar subquery caching is the answer to the "somehow"
But are you sure that scalar subquery *caching* is what is making the difference?
all_users.username is obviously unique, so I would expect the subquery to be "probed" with unique values; thus the cached values will never be reused, hence caching is useless here.
It can be confirmed by looking at the plans.
If you explain the query in 9.2.0.6, and then explain the scalar subquery:
select a.username, (select count(*)
from all_objects b
where b.owner = a.username) cnt
from all_users a;
--------------------------------------------------------
| Id | Operation | Name | R
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
|* 2 | FILTER | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS BY INDEX ROWID| USER$ |
|* 5 | INDEX UNIQUE SCAN | I_USER1 |
| 6 | TABLE ACCESS BY INDEX ROWID| OBJ$ |
|* 7 | INDEX RANGE SCAN | I_OBJ2 |
|* 8 | TABLE ACCESS BY INDEX ROWID | IND$ |
|* 9 | INDEX UNIQUE SCAN | I_IND1 |
|* 10 | TABLE ACCESS BY INDEX ROWID | OBJAUTH$ |
| 11 | NESTED LOOPS | |
| 12 | FIXED TABLE FULL | X$KZSRO |
|* 13 | INDEX RANGE SCAN | I_OBJAUTH2 |
|* 14 | FIXED TABLE FULL | X$KZSPR |
|* 15 | FIXED TABLE FULL | X$KZSPR |
|* 16 | FIXED TABLE FULL | X$KZSPR |
|* 17 | FIXED TABLE FULL | X$KZSPR |
|* 18 | FIXED TABLE FULL | X$KZSPR |
|* 19 | FIXED TABLE FULL | X$KZSPR |
|* 20 | FIXED TABLE FULL | X$KZSPR |
|* 21 | FIXED TABLE FULL | X$KZSPR |
|* 22 | FIXED TABLE FULL | X$KZSPR |
|* 23 | FIXED TABLE FULL | X$KZSPR |
|* 24 | FIXED TABLE FULL | X$KZSPR |
|* 25 | FIXED TABLE FULL | X$KZSPR |
|* 26 | FIXED TABLE FULL | X$KZSPR |
|* 27 | FIXED TABLE FULL | X$KZSPR |
|* 28 | FIXED TABLE FULL | X$KZSPR |
|* 29 | FIXED TABLE FULL | X$KZSPR |
|* 30 | FIXED TABLE FULL | X$KZSPR |
| 31 | NESTED LOOPS | |
| 32 | NESTED LOOPS | |
|* 33 | TABLE ACCESS FULL | USER$ |
| 34 | TABLE ACCESS CLUSTER | TS$ |
|* 35 | INDEX UNIQUE SCAN | I_TS# |
| 36 | TABLE ACCESS CLUSTER | TS$ |
|* 37 | INDEX UNIQUE SCAN | I_TS# |
--------------------------------------------------------
(snip)
5 - access("U"."NAME"=:B1) <-- interesting ...
(snip)
select count(*)
from all_objects b
where b.owner = 'DELLERA'
-------------------------------------------------------
| Id | Operation | Name |
-------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
|* 2 | FILTER | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS BY INDEX ROWID| USER$ |
|* 5 | INDEX UNIQUE SCAN | I_USER1 |
| 6 | TABLE ACCESS BY INDEX ROWID| OBJ$ |
|* 7 | INDEX RANGE SCAN | I_OBJ2 |
|* 8 | TABLE ACCESS BY INDEX ROWID | IND$ |
|* 9 | INDEX UNIQUE SCAN | I_IND1 |
|* 10 | TABLE ACCESS BY INDEX ROWID | OBJAUTH$ |
| 11 | NESTED LOOPS | |
| 12 | FIXED TABLE FULL | X$KZSRO |
|* 13 | INDEX RANGE SCAN | I_OBJAUTH2 |
|* 14 | FIXED TABLE FULL | X$KZSPR |
|* 15 | FIXED TABLE FULL | X$KZSPR |
|* 16 | FIXED TABLE FULL | X$KZSPR |
|* 17 | FIXED TABLE FULL | X$KZSPR |
|* 18 | FIXED TABLE FULL | X$KZSPR |
|* 19 | FIXED TABLE FULL | X$KZSPR |
|* 20 | FIXED TABLE FULL | X$KZSPR |
|* 21 | FIXED TABLE FULL | X$KZSPR |
|* 22 | FIXED TABLE FULL | X$KZSPR |
|* 23 | FIXED TABLE FULL | X$KZSPR |
|* 24 | FIXED TABLE FULL | X$KZSPR |
|* 25 | FIXED TABLE FULL | X$KZSPR |
|* 26 | FIXED TABLE FULL | X$KZSPR |
|* 27 | FIXED TABLE FULL | X$KZSPR |
|* 28 | FIXED TABLE FULL | X$KZSPR |
|* 29 | FIXED TABLE FULL | X$KZSPR |
|* 30 | FIXED TABLE FULL | X$KZSPR |
-------------------------------------------------------
(snip)
5 - access("U"."NAME"='DELLERA') <-- interesting ...
(snip)
If you diff the two plans, you can see that lines 1-30 are the plan for the subquery, and so the plan for the main query is:
| 31 | NESTED LOOPS | |
| 32 | NESTED LOOPS | |
|* 33 | TABLE ACCESS FULL | USER$ |
| 34 | TABLE ACCESS CLUSTER | TS$ |
|* 35 | INDEX UNIQUE SCAN | I_TS# |
| 36 | TABLE ACCESS CLUSTER | TS$ |
|* 37 | INDEX UNIQUE SCAN | I_TS# |
So it is full-scanning user$ and looking up some info on unique columns; that means that the subquery will be "probed" (re-executed with another bind value for :B1) with unique values.
July 18, 2005 - 8:08 am UTC
indeed. the scalar subquery caching is applicable for the remaining examples but not that particular one. I jumped to a conclusion.
Scalar Subqueries vs Views Performance
denni50, July 18, 2005 - 11:53 am UTC
Tom
decided to compare the performance of the solution you provided using the 2 scalar subqueries in the 'FROM' clause against the 2 views of each table's aggregate results that I came up with...they are virtually identical!
so essentially ssq are nothing more than 'on the fly' views with no ddl (as shown in the execution plan "VIEW (Cost=113 Card=1 Bytes=31)" from the ssq).
I would have thought a view(being a stored query) would give a better performance rate....hmmm, interesting.
SQL> set autotrace traceonly explain
SQL> select r.project,r.dropped,p.response,p.income
2 from (select to_char(r.ack_date,'MMYYYY') as project,sum(r.receipt_count) as Dropped
3 from receipting r group by to_char(r.ack_date,'MMYYYY')) r,
4 (select to_char(p.paydate,'MMYYYY') as project,count(p.payamount) as Response,
5 sum(payamount) as Income from payment p
6 where p.paydate between to_date('01-JAN-2005','DD-MON-YYYY') and to_date('31-DEC-2005',
'DD-MON-YYYY')
7 and p.appealcode like '%TY%'
8 group by to_char(p.paydate,'MMYYYY'))p
9 where r.project=p.project;
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=121 Card=11 Bytes=53
9)
1 0 HASH JOIN (Cost=121 Card=11 Bytes=539)
2 1 VIEW (Cost=113 Card=1 Bytes=31)
3 2 SORT (GROUP BY) (Cost=113 Card=1 Bytes=21)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'PAYMENT' (Cost=112
Card=107 Bytes=2247)
5 4 INDEX (RANGE SCAN) OF 'PDATE_INDEX' (NON-UNIQUE) (
Cost=8 Card=2131)
6 1 VIEW (Cost=7 Card=1062 Bytes=19116)
7 6 SORT (GROUP BY) (Cost=7 Card=1062 Bytes=23364)
8 7 TABLE ACCESS (FULL) OF 'RECEIPTING' (Cost=2 Card=106
2 Bytes=23364)
******Trace from 2Views***************
SQL> select r.project,r.dropped,p.response,p.income
2 from gui.tku_receipt r,gui.tku_payment p
3 where p.project=r.project
4 /
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=121 Card=11 Bytes=53
9)
1 0 HASH JOIN (Cost=121 Card=11 Bytes=539)
2 1 VIEW OF 'TKU_PAYMENT' (Cost=113 Card=1 Bytes=31)
3 2 SORT (GROUP BY) (Cost=113 Card=1 Bytes=21)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'PAYMENT' (Cost=112
Card=107 Bytes=2247)
5 4 INDEX (RANGE SCAN) OF 'PDATE_INDEX' (NON-UNIQUE) (
Cost=8 Card=2131)
6 1 VIEW OF 'TKU_RECEIPT' (Cost=7 Card=1062 Bytes=19116)
7 6 SORT (GROUP BY) (Cost=7 Card=1062 Bytes=23364)
8 7 TABLE ACCESS (FULL) OF 'RECEIPTING' (Cost=2 Card=106
2 Bytes=23364)
SQL>
July 18, 2005 - 12:19 pm UTC
a view is just a stored query, though - I would not expect a view to have any inherent performance boost over just "a query" of any sort.
scalar subqueries might
a) go faster
b) go slower
c) go the same speed
as an equivalent query without them - pretty much like everything
however Tom....
denni50, July 18, 2005 - 2:01 pm UTC
in looking at the tkprof output...my interpretation is that the 2 ssq were converted (more or less) to views behind the scenes (I am assuming to obtain the obj# and obj_id):
select text
from
view$ where rowid=:1
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#,o.flags from obj$ o where o.obj#=:1
I understand in this simplistic case the elapsed/cpu is insignificant...however if I had to combine 5, 6, 7+ multi-table aggregates, it appears that views would be more productive and beneficial, especially if the query was going to be run every week or month.
I also noticed disk=12 with ssq, while disk=0 with views...what does that represent?
as always comments/feedback appreciated.
*********************************************************************************
TKPROF: Release 8.1.7.0.0 - Production on Mon Jul 18 12:02:42 2005
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Trace file: xxx_ora_892.trc
Sort options: default
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
alter session set sql_trace=true
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.93 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.93 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 42
********************************************************************************
select r.project,r.dropped,p.response,p.income
from (select to_char(r.ack_date,'MMYYYY') as project,sum(r.receipt_count) as Dropped
from receipting r group by to_char(r.ack_date,'MMYYYY')) r,
(select to_char(p.paydate,'MMYYYY') as project,count(p.payamount) as Response,
sum(payamount) as Income from payment p
where p.paydate between to_date('01-JAN-2005','DD-MON-YYYY') and to_date('31-DEC-2005','DD-MON-YYYY')
and p.appealcode like '%TY%'
group by to_char(p.paydate,'MMYYYY'))p
where r.project=p.project
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 38.32 0 1 0 0
Execute 1 0.00 0.86 0 0 0 0
Fetch 2 18437.50 18819.92 12 30373 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 18437.50 18859.10 12 30374 0 7
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 42
Rows Row Source Operation
------- ---------------------------------------------------
7 HASH JOIN
7 VIEW
7 SORT GROUP BY
24952 TABLE ACCESS BY INDEX ROWID PAYMENT
656843 INDEX RANGE SCAN PDATE_INDEX (object id 40375)
7 VIEW
7 SORT GROUP BY
46 TABLE ACCESS FULL RECEIPTING
********************************************************************************
select text
from
view$ where rowid=:1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 156.25 4.32 0 0 0 0
Execute 2 0.00 3.36 0 0 0 0
Fetch 2 0.00 0.66 0 4 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 156.25 8.34 0 4 0 2
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY USER ROWID VIEW$
********************************************************************************
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,
o.dataobj#,o.flags
from
obj$ o where o.obj#=:1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 4.51 0 0 0 0
Execute 1 0.00 2.67 0 0 0 0
Fetch 1 0.00 0.42 0 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 7.60 0 3 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID OBJ#(18)
1 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
********************************************************************************
select r.project,r.dropped,p.response,p.income
from gui.tku_receipt r,gui.tku_payment p
where p.project=r.project
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 156.25 56.07 0 4 0 0
Execute 1 0.00 0.98 0 0 0 0
Fetch 2 18281.25 18379.11 0 30373 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 18437.50 18436.16 0 30377 0 7
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 42
Rows Row Source Operation
------- ---------------------------------------------------
7 HASH JOIN
7 VIEW
7 SORT GROUP BY
24952 TABLE ACCESS BY INDEX ROWID OBJ#(29492)
656843 INDEX RANGE SCAN OBJ#(40375) (object id 40375)
7 VIEW
7 SORT GROUP BY
46 TABLE ACCESS FULL OBJ#(52143)
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 156.25 94.39 0 5 0 0
Execute 3 0.00 2.77 0 0 0 0
Fetch 4 36718.75 37199.03 12 60746 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 36875.00 37296.19 12 60751 0 14
Misses in library cache during parse: 2
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3 156.25 8.83 0 0 0 0
Execute 3 0.00 6.03 0 0 0 0
Fetch 3 0.00 1.08 0 7 0 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 156.25 15.94 0 7 0 3
Misses in library cache during parse: 2
3 user SQL statements in session.
3 internal SQL statements in session.
6 SQL statements in session.
********************************************************************************
Trace file: xxx_ora_892.trc
Trace file compatibility: 8.00.04
Sort options: default
1 session in tracefile.
3 user SQL statements in trace file.
3 internal SQL statements in trace file.
6 SQL statements in trace file.
5 unique SQL statements in trace file.
23858 lines in trace file.
July 18, 2005 - 2:12 pm UTC
it is not turned into a view. a query like:
select ..., (scalar subquery1), (scalar subquery2)
from ...
is executed like this:
for x in ( select .... from .... ) -- without scalar subqueries
loop
if (scalar subquery1) is not in the cache,
then
execute scalar subquery1 using the values from X
put into cache
end if;
if (scalar subquery2) is not in the cache,
then
execute scalar subquery2 using the values from X
put into cache
end if;
output row
end loop
scalar subquery1 and 2 are optimized as "standalone" queries.
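The pseudocode above can be sketched as runnable Python - a model only, with the memoization deliberately simplified (an unbounded dict here, whereas the real cache is fixed-size and hash-based, as discussed below):

```python
def execute_with_scalar_cache(rows, subquery):
    """Evaluate `subquery` once per distinct input value, like Oracle's
    scalar subquery cache (ignoring its fixed size and hash collisions)."""
    cache = {}
    out = []
    for x in rows:                    # for x in ( select ... ) loop
        if x not in cache:            # if not in the cache
            cache[x] = subquery(x)    #   execute scalar subquery using X
        out.append((x, cache[x]))     # output row
    return out

calls = []
def count_objects(owner):
    calls.append(owner)               # track how often we really execute
    return len(owner)                 # stand-in for the count(*) result

result = execute_with_scalar_cache(["A", "B", "A", "A", "B"], count_objects)
print(calls)   # the subquery ran only twice: ['A', 'B']
print(result)
```

Five rows are fetched, but the "subquery" is executed only once per distinct driving value - the essence of why the ALL_USERS example above was fast only when the driving values repeated.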
Tom am definitely looking forward...
denni50, July 18, 2005 - 2:45 pm UTC
to reading more about scalar subqueries in your new book
and did go back and re-read chapter 8 of EOBD(p.504-514).
yep!!..there's a whole segment on multi table aggregates..
didn't see it..was even looking in index for aggregates,
grouping...etc.
...could have saved myself alot of wasted time.
Alberto Dell'Era, July 18, 2005 - 3:49 pm UTC
To be more precise, adding the info Jonathan Lewis gave us in the newsgroup [sorry I can't quote - if I add the URL and press "preview", the submit button disappears ..], wouldn't it be (additions in uppercase):
for x in ( select .... from .... ) -- without scalar subqueries
loop
if (scalar subquery) is not in the cache,
then
execute scalar subquery using the values from X
IF (NO HASH COLLISION OCCURS) THEN
put into cache
END IF;
end if;
output row
end loop
That's very important to remember if you rely on subquery caching and you are going to fetch a number of rows comparable to or greater than the cache size - 1024 in 10g, 256 earlier. It's not an LRU cache, so it's going to be full after the first few hundred rows have been fetched.
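That non-LRU behavior can be modeled in a few lines of Python. This is a toy cache, not Oracle's: the bucket count of 8 and Python's built-in hash are illustrative, but the key property matches the description - a value is cached only if its bucket is still empty, so an early value "squats" in its bucket and a later colliding value is re-executed every time:

```python
N_BUCKETS = 8   # real sizes per the discussion: 256 (pre-10g), 1024 (10g)

def run(values, subquery):
    """Count subquery executions under a fixed-size, non-LRU hash cache."""
    buckets = {}              # bucket -> (cached key, cached result)
    executions = 0
    for v in values:
        b = hash(v) % N_BUCKETS
        if b in buckets and buckets[b][0] == v:
            continue                       # cache hit
        executions += 1                    # miss: execute the subquery
        if b not in buckets:
            buckets[b] = (v, subquery(v))  # only an empty bucket is filled
    return executions

# two values that collide in this toy cache (both map to bucket 1):
a, b = 1, 1 + N_BUCKETS
print(run([a] + [b] * 10, lambda v: v))   # a squats first: b re-runs, 11 total
print(run([b] * 10 + [a], lambda v: v))   # b cached first: only 2 executions
```

Arrival order, not recency, decides who stays cached - exactly why the 179-then-1 sequences below behave so differently from 1-then-179.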
July 18, 2005 - 4:58 pm UTC
It is a little more "subtle" than that.
run this script passing in say "4" or "10" (to control the initial size of the table) and see what you see.
the package just maintains a count of how many times called and the "callees" in a string.
Table T starts with N rows (you control N, N should be >= 4). For example, it might start with:
R DATA
---- -----
1 2
2 1
3 2
4 1
The loop after that finds two values that cause a collision. So, we'll have two that definitely cause the hash collisions.
Then we insert 100 more rows of these values in order (using R to sequence them).
When the values are totally out of sequence (every other one is different), we get the maximum number of calls.
But when we update most of the rows to 1 (1 is always the second value in), and the last one to whatever collides with it - we see not so many calls, even though in theory the rows with 1 should trigger a call every time. They get "maxed out".
with N=4 (in 9i)
cnt = 3 str = ,179,1,1
so 179 and 1 collide and 1 is called twice; that was expected. Add 100 more rows alternating 1, 179, 1, 179....
cnt = 52 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1
That was unexpected, sort of - 179 and 1 together are in there 52 times. I would have thought "53". Now do the updates and:
cnt = 4 str = ,179,1,1,1
hmmm.
N=10
cnt = 6 str = ,179,1,1,1,1,1
cnt = 55 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
cnt = 7 str = ,179,1,1,1,1,1,1
N=100
cnt = 51 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1
cnt = 100 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
cnt = 52 str =
,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1
So, I'll just say: if you want the best hit ratio, order the data in the inline view by the parameters you pass to the function; otherwise, you'll get a variable number of hits and somewhat unpredictable behavior.
If anyone sees some fundamental "whoops, you have a problem in the code", I'd love to have it pointed out (really)
set echo on
create or replace package demo_pkg
as
function f( x in number ) return number;
procedure reset;
procedure show;
function get_cnt return number;
end;
/
create or replace package body demo_pkg
as
g_cnt number;
g_str long;
function get_cnt return number
is
begin
return g_cnt;
end;
procedure reset
is
begin
g_cnt := 0;
g_str := null;
end;
procedure show
is
begin
dbms_output.put_line( substr
( 'cnt = ' || demo_pkg.g_cnt || ' str = ' || demo_pkg.g_str,
1, 255 ) );
end;
function f( x in number ) return number
is
begin
g_cnt := g_cnt+1;
g_str := g_str || ',' || x;
return x;
end;
end;
/
drop table t;
create table t
as
select rownum r, decode(mod(rownum,2),0,1,2) data
from dual
connect by level <= &1;
begin
for i in 2 .. 1024
loop
update t set data = i where mod(r,2) = 1;
commit;
demo_pkg.reset;
for x in ( select r, data, (select demo_pkg.f(data) from dual) ff
from (select * from t order by r) )
loop
null;
end loop;
exit when demo_pkg.get_cnt <> 2;
end loop;
demo_pkg.show;
end;
/
insert into t
select l, (select data from t where r = 1+mod(l,2))
from (
select level+&1 l
from dual connect by level <= 100
);
commit;
exec demo_pkg.reset
set autotrace traceonly statistics
select r, data, (select demo_pkg.f(data) from dual) ff
from (select * from t order by r)
/
set autotrace off
exec demo_pkg.show
update t set data = 1 where r > &1+2;
update t set data = (select data from t where r=1) where r=(select max(r) from t);
commit;
exec demo_pkg.reset
set autotrace traceonly statistics
select r, data, (select demo_pkg.f(data) from dual) ff
from (select * from t order by r)
/
set autotrace off
exec demo_pkg.show
Alberto Dell'Era, July 18, 2005 - 6:33 pm UTC
>cnt = 52 str =
>,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
>That was unexpected sort of. 179 and 1 are in there 52 times each. I would
>have thought "53".
This fragment:
insert into t
select l, (select data from t where r = 1+mod(l,2))
from (
select level+&1 l
from dual connect by level <= 100
);
commit;
Generates this sequence:
R DATA
---------- ----------
1 179
2 1
3 179
4 1
5 1 <-- same as before
6 179
(snip)
If you change 1+mod(l,2) to 2+mod(l,2) everywhere (c/1+mod(l,2)/2+mod(l,2)/g):
insert into t
select l, (select data from t where r = 2+mod(l,2))
from (
select level+&1 l
from dual connect by level <= 100
);
commit;
Thus generating:
R DATA
---------- ----------
1 179
2 1
3 179
4 1
5 179
6 1
(snip)
You'll find:
cnt = 53 str = ,179,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
(same for N=10 and N=100 - one more function call).
I've gone this far in reproducing your test case - since it's 12:30AM now, see you tomorrow ...
(why do you always post addictive things at around midnight Italy time ? ;)
July 18, 2005 - 6:42 pm UTC
very good -- one mystery down (sort of - how did it cache that extra one?), one to go :)
Alberto Dell'Era, July 18, 2005 - 6:55 pm UTC
I think everything can be explained if we admit a two-level cache:
level 1: remembers the last value only; on a miss, it passes the value to the level 2 cache
level 2: a hash cache as described by Jonathan Lewis.
So the 1-1-1-1-1-1-1-1... sequences always get a hit from the level 1 cache (N "1"s - N-1 hits), regardless of whether they are in level 2 (they aren't, since they collide with 179 [btw a prime number]).
If you order the values in the inline view as you suggest, you greatly increase the chance of hitting level 1.
But now, definitely, I must hit the bed - 1AM ...
July 18, 2005 - 9:17 pm UTC
:)
Makes sense, we can develop a full test by finding a couple of colliding pairs and seeing what happens later
but if we order the set by the input parameters to the scalar subquery, we'll always get cache hits from the 2nd iteration on (that is probably why I sort of thought it might actually replace the cache value - I was hitting that first-level "last time we did this" cache and it made it look like it was replacing the value)
full test case
Alberto Dell'Era, July 19, 2005 - 6:02 am UTC
We know that 1 and 179 collide (as can be shown by running your test case above).
Procedure "test" just prints the sequence, and then performs your sql query using your demo_pkg:
create or replace procedure test
is
begin
-- show sequence
for x in (select r, data from t order by r)
loop
dbms_output.put_line (to_char(x.r,'99') || ']' || to_char (x.data,'999') );
end loop;
-- perform test
demo_pkg.reset;
for x in ( select r, data, (select demo_pkg.f(data) from dual) ff
from (select * from t order by r) )
loop
null;
end loop;
demo_pkg.show;
end test ;
/
--- cache 179, then an infinite sequence of 1
drop table t;
create table t
as
select rownum r, decode(rownum,1,179,1) data
from dual
connect by level <= 20;
exec test;
1] 179 <- miss
2] 1 <- miss
3] 1
4] 1
5] 1
6] 1
7] 1
8] 1
9] 1
10] 1
11] 1
12] 1
13] 1
14] 1
15] 1
16] 1
17] 1
18] 1
19] 1
20] 1
cnt = 2 str = ,179,1
-- break the 1-1-1-1 sequence:
drop table t;
create table t
as
select rownum r, decode(rownum,1,179,5,100,1) data
from dual
connect by level <= 20;
exec test;
1] 179 <- miss
2] 1 <- miss
3] 1
4] 1
5] 100 <- miss
6] 1 <- miss <=== note this
7] 1
8] 1
9] 1
10] 1
11] 1
12] 1
13] 1
14] 1
15] 1
16] 1
17] 1
18] 1
19] 1
20] 1
cnt = 4 str = ,179,1,100,1
100 evicts 1 from the level-2 cache, hence the additional function call immediately after.
-- cache 1 in level-2 cache
drop table t;
create table t
as
select rownum r, decode(rownum,1,1,2,179,5,100,1) data
from dual
connect by level <= 20;
exec test;
1] 1 <- miss
2] 179 <- miss
3] 1
4] 1
5] 100 <- miss
6] 1 <== HIT!
7] 1
8] 1
9] 1
10] 1
11] 1
12] 1
13] 1
14] 1
15] 1
16] 1
17] 1
18] 1
19] 1
20] 1
cnt = 3 str = ,1,179,100
Since now 1 is in the level-2 cache, no miss.
-- level-2 is not an LRU cache (as Jonathan Lewis said):
drop table t;
create table t
as
select rownum r, decode(rownum,1,179, decode (mod(rownum,2),0,1,100) ) data
from dual
connect by level <= 20;
exec test;
1] 179 <- miss
2] 1 <- miss
3] 100 <- miss
4] 1 <- miss
5] 100
6] 1 <- miss
7] 100
8] 1 <- miss
9] 100
10] 1 <- miss
11] 100
12] 1 <- miss
13] 100
14] 1 <- miss
15] 100
16] 1 <- miss
17] 100
18] 1 <- miss
19] 100
20] 1 <- miss
cnt = 12 str = ,179,1,100,1,1,1,1,1,1,1,1,1
179 prevents 1 from entering the level-2 cache.
July 19, 2005 - 7:43 am UTC
Excellent
Keep this up and I won't have to anymore :)
That enforces the "if you want to call the function F, or more generally the scalar subquery S, as little as possible, you can sort the data"
select ...., (scalar subquery referencing columns a, b, c)
from T1, T2, T2, ..
where ...
will call the scalar subquery an unknown number of times depending on how a,b,c arrive (what order they arrive in)
whereas
select x.*, (scalar subquery references columns a,b,c)
from (select ....
from t1,t2,t2,....
where
order by a,b,c )
will invoke the scalar subquery on the order of the distinct count of a,b,c
A tradeoff, if the scalar subquery is "expensive" and the sort considered "cheap", definitely something to do.
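The effect of that sort can be modelled outside the database. A toy model (Python, purely illustrative, not Oracle internals) that counts "executions" when only the one-entry "same input as the last row?" cache applies shows why ordered input drops the call count to the number of distinct values:

```python
import random

def calls_with_last_value_cache(inputs):
    """Count how many times the 'subquery' would run if the only
    cache were the one-entry 'same as the previous row?' check."""
    calls, last = 0, object()   # sentinel: matches nothing
    for v in inputs:
        if v != last:
            calls += 1
            last = v
    return calls

random.seed(1)
data = [random.choice('abc') for _ in range(1000)]
unsorted_calls = calls_with_last_value_cache(data)
sorted_calls = calls_with_last_value_cache(sorted(data))

# sorted input: exactly one execution per distinct value
assert sorted_calls == len(set(data))
assert unsorted_calls >= sorted_calls
```

With random arrival order the count approaches one call per row; after the ORDER BY it collapses to the distinct count, which is exactly the sort-versus-subquery-executions tradeoff described above.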
Alberto Dell'Era, July 19, 2005 - 8:02 am UTC
And if ordering is very expensive, but the "distinct count of a,b,c" is "less" than the level-2 cache size ("less", and not "less than or equal", because we must account for hash collisions), you can avoid the ordering, as the results will be cached as well (though some may not be, because of hash collisions, so it's less predictable).
DBLS Update
Andrew Rye, July 19, 2005 - 12:18 pm UTC
Hi Tom -
Just a little fix. . .I had the following problem running the DBLS query:
ORA-01427: single-row subquery returns more than one row
Had to add in the base table/index name in the partition queries. Took the liberty of concatenating in the subobject_name, where applicable, to the object name field in the result. Here's the updated version:
select object_type,
object_name ||
decode( subobject_name, null, '',
' / ' || subobject_name
) object_name,
decode( status, 'INVALID', '*',
''
) status,
decode( object_type,
'TABLE', ( select tablespace_name
from user_tables
where table_name = object_name
),
'TABLE PARTITION', ( select tablespace_name
from user_tab_partitions
where table_name = object_name
and partition_name = subobject_name
),
'INDEX', ( select tablespace_name
from user_indexes
where index_name = object_name
),
'INDEX PARTITION', ( select tablespace_name
from user_ind_partitions
where index_name = object_name
and partition_name = subobject_name
),
'LOB', ( select tablespace_name
from user_segments
where segment_name = object_name
),
null
) tablespace_name
from user_objects a
order by
object_type,
object_name
/
Thanks for this discussion on yet another handy tool for the development belt.
July 19, 2005 - 5:25 pm UTC
(i knew that -- that it didn't work with partitions ;)
subquery caching cache size on 9i and 10g
Alberto Dell'Era, July 19, 2005 - 3:50 pm UTC
I've tried to measure the size of the cache used by subquery caching, and I've found that Jonathan Lewis was right - it's 256 on 9i and 1024 on 10g (but for numbers only, see below).
create or replace function f(p number)
return number
is
begin
dbms_application_info.set_client_info( userenv('client_info')+1 );
return p;
end;
/
-- create a table with a sequence of unique values
-- this disables the level-1 cache
drop table t;
create table t
as
with value_source as (
select rownum r, dbms_random.random data
from dual
connect by level <= 100000
),
dup_detector as (
select value_source.*,
row_number() over (partition by data order by r) dup_number
from value_source
)
-- dump non-duplicated values only
select r, data as data
from dup_detector
where dup_number = 1;
-- append the same sequence at the end of t
insert /*+ append */ into t (r,data)
select 100000 + r, data
from t;
commit;
-- test
exec dbms_application_info.set_client_info(0);
begin
for x in ( select r, data, (select f(data) from dual) ff
from (select * from t order by r) )
loop
null;
end loop;
end;
/
-- cached values = #values in table - #calls to f
select count(*) - userenv('client_info') as cache_size from t;
9.2.0.6:
CACHE_SIZE
----------
256
10.1.0.3:
CACHE_SIZE
----------
1024
I've also tried variants: I've used "rownum" instead of "dbms_random.random" - same results.
BUT - when I tried using VARCHAR2 instead of NUMBER for the DATA column, I got anything between 16 and 512 for CACHE_SIZE, so the results are not to be generalized. It's probably safest not to rely too much on the cache size when designing queries (or to investigate the varchar2 case further ;).
Alberto
A reader, July 19, 2005 - 3:58 pm UTC
"To be more precise, adding the info Jonathan Lewis gave us in the newsgroup
[sorry I can't quote - if I add the URL and press "preview", the submit button
disappear ..]"
try http://tinyurl.com
does it still happen ?
termoutput off
Parag Jayant Patankar, July 20, 2005 - 4:14 am UTC
Hi Tom,
I do not want to display output on screen. For this reason in Oracle 9iR2 I have written following script in unix which is executable from command line
sqlplus -s / <<!
set termout off
select sysdate from dual
/
!
But still it is showing display on screen. Can you tell me why it is not working ?
regards & thanks
pjp
July 20, 2005 - 6:27 am UTC
either
a) run a script, not simulated interactive commands.
cat >> x.sql <<EOF
set termout off
select sysdate from dual;
exit
EOF
sqlplus -s / @x.sql
b) redirect the above to /dev/null
set termout works for scripts, not interactive sessions.
Is this possible?
Arangaperumal, July 20, 2005 - 8:58 am UTC
Hi TOM,
Below is one of the questions asked of me in an interview. Can you
please help me with the answer.
Table is having two columns. Table data:
col1 col2
3 8
1 6
4 7
2 5
Output should be each column should be sorted individually
col1 col2
1 5
2 6
3 7
4 8
And also the query should be generic, i.e., it should work fine even if
the table has more than two columns.
(i got this from our yahoo groups)
July 20, 2005 - 12:30 pm UTC
I don't see how it could be 'generic' for N columns.
could I do it? sure, but the query would have to change for each additional column
my question to them would involve tilting my head to the side and asking "why" :)
ops$tkyte@ORA10GR1> select * from t;
C1 C2 C3
---------- ---------- ----------
1 8 11
2 7 10
3 6 12
4 5 13
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select c1, c2, c3
2 from (select c1, row_number() over (order by c1) rn from t)
3 natural join
4 (select c2, row_number() over (order by c2) rn from t)
5 natural join
6 (select c3, row_number() over (order by c3) rn from t)
7 /
C1 C2 C3
---------- ---------- ----------
1 5 10
2 6 11
3 7 12
4 8 13
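For what it's worth, the "generic for N columns" part is only hard in SQL because a query cannot iterate over its own columns. Outside SQL, the idea behind the query above (rank each column independently, then rejoin by position) generalizes trivially; a Python sketch using the same sample data:

```python
# rows of the table above; works unchanged for any number of columns
rows = [(1, 8, 11), (2, 7, 10), (3, 6, 12), (4, 5, 13)]

cols = list(zip(*rows))                  # split into columns
sorted_cols = [sorted(c) for c in cols]  # sort each column independently
result = [tuple(r) for r in zip(*sorted_cols)]  # re-assemble by position

assert result == [(1, 5, 10), (2, 6, 11), (3, 7, 12), (4, 8, 13)]
```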
sql query
AD, July 20, 2005 - 4:23 pm UTC
Hi Tom,
Could you please help with the following sql query.
I have a table with acc_no, year_month, cycle which stores data for every month. For every account, starting from the current month, I have to go backwards and retrieve the latest year_month for which cycle <= 5. This needs to be retrieved only if the cycle >= 5 for the current month.
create table tab1(acc number(10), year_month number(6), cycle number(2));
acc, year_month constitutes the key.
insert into tab1 values (100, 200507, 7);
insert into tab1 values (100, 200506, 6);
insert into tab1 values (100, 200505, 5);
insert into tab1 values (100, 200504, 4);
insert into tab1 values (100, 200503, 3);
insert into tab1 values (100, 200502, 6);
insert into tab1 values (100, 200501, 6);
insert into tab1 values (100, 200412, 7);
insert into tab1 values (200, 200507, 7);
insert into tab1 values (200, 200506, 6);
insert into tab1 values (200, 200505, 3);
insert into tab1 values (200, 200504, 4);
insert into tab1 values (200, 200503, 3);
insert into tab1 values (200, 200502, 2);
insert into tab1 values (200, 200501, 1);
insert into tab1 values (200, 200412, 0);
select * from tab1 order by acc, year_month desc
acc year_month cycle
--- -------------- -------
100 200507 7 <==== current month (cycle >=5, so travel backwards and find the
100 200506 6 earliest record for which
100 200505 5 <======= expected result
100 200504 4
100 200503 3
100 200502 6
100 200501 6
100 200412 7
200 200507 7 <===== current month (cycle >=5, so need to travel backwards)
200 200506 6
200 200505 3 <======= expected result
200 200504 4
200 200503 3
200 200502 2
200 200501 1
200 200412 0
Thanks in advance,
one possible solution
miquel, July 20, 2005 - 6:47 pm UTC
select acc, year_month, cycle from
( SELECT acc, year_month , cycle, max(year_month) over( partition by acc) max_ym
FROM tab1
WHERE acc in (select x.acc from tab1 x
WHERE x.year_month = to_char(sysdate,'yyyymm') AND x.CYCLE >= 5)
and year_month < to_char(sysdate,'yyyymm') and cycle <= 5)
where year_month = max_ym
/
ACC YEAR_MONTH CYCLE
----- ---------- ----------
100 200505 5
200 200505 3
Scalar Subquery Caching
Jonathan Lewis, July 21, 2005 - 7:51 am UTC
This is the way I've summarised the caching mechanism for scalar subqueries and filter subqueries in Chapter 9: Transformations of my new (not yet published) book:
if this is the first row selected from the driving table
execute the subquery with this driving value
retain the driving (input) and return (output)values as current values
set the current values status to not yet stored.
else
if the driving value matches the input value from the current values
return the output value from the current values
else
if the status of the current values is not yet stored
attempt to store the current values in an in-memory hash-table
if a hash collision occurs
do not store the current values
end if
end if
probe the hash table using the new driving value
if the new driving value is in the hash table
retrieve the stored return value from the in-memory hash table
retain these values as the current values
set the current values status to previously stored
else
execute the subquery with the new driving value
retain the driving and return (output) values as current values
set the current values status to not yet stored.
end if
return the output value from the current values
end if
end if
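Jonathan's pseudocode translates almost line for line into a runnable simulation. In the sketch below (Python, illustrative only; the bucket function is a stand-in for Oracle's undocumented internal hash, chosen so that 1 and 179 collide as observed earlier in this thread), the subquery is just the identity function, and the returned count reproduces Alberto's cnt values:

```python
def subquery_calls(values, bucket=lambda v: v % 178):
    """Count subquery executions under the caching scheme described
    in the pseudocode above.  bucket() is an assumed, illustrative
    hash: v % 178 makes 1 and 179 collide, 100 does not."""
    calls = 0
    stored = {}            # in-memory hash table: bucket -> (input, output)
    current = None         # current values: (input, output, already_stored?)
    for v in values:
        if current is None:                       # first row: execute
            calls += 1
            current = (v, v, False)               # subquery = identity here
            continue
        inp, out, was_stored = current
        if v == inp:                              # matches current values
            continue
        if not was_stored:                        # try to store current values
            b = bucket(inp)
            if b not in stored:                   # store unless collision
                stored[b] = (inp, out)
        b = bucket(v)
        if b in stored and stored[b][0] == v:     # hash-table hit
            current = (v, stored[b][1], True)
        else:                                     # miss: execute subquery
            calls += 1
            current = (v, v, False)
    return calls

assert subquery_calls([179] + [1] * 19) == 2              # cnt = 2 above
assert subquery_calls([179, 1, 1, 1, 100] + [1] * 15) == 4  # cnt = 4
assert subquery_calls([1, 179, 1, 1, 100] + [1] * 15) == 3  # cnt = 3
```

Feeding it Alberto's fourth sequence (179 followed by alternating 1/100) yields 12 calls, matching his cnt = 12 output as well.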
July 21, 2005 - 8:05 am UTC
Jonathan,
Thanks! And looking forward to the book.
would be interesting
A reader, July 21, 2005 - 9:08 am UTC
to hear one of the developers of Scalar Subquery Caching:
is Jonathan's guess correct ?
July 21, 2005 - 4:10 pm UTC
I doubt you'll see an answer. The internal algorithms, which are not only subject to change, are not published like that.
Alex, July 21, 2005 - 9:19 am UTC
Can we get some more info about this book? What's the subject?
Alberto Dell'Era, July 21, 2005 - 9:57 am UTC
> This is the way I've summarised the caching mechanism
> for scalar subqueries and filter subqueries
"and filter subqueries" ... very interesting :)
Another Query in Need of Help
denni50, July 26, 2005 - 11:27 am UTC
Tom
I'm working on another one of those engaging queries.
I've achieved correct results for other reports using scalar subqueries..however I'm faced with a new dilemma on this one.
I need to combine the result sets of two distinct queries that don't need to relate to one another and don't link to one another.
I'm generating an income-based report. Mgt wants to see income by month for 2005 regardless of source (mailing campaigns from prior years), and they want the income results to be calculated against 2005 mailing campaigns only.
ex: if $200,000 of this year's income comes from appeals back in 2004, 2003... they want that income to be calculated against appeals mailed in 2005 only.
below are the correct results when running the queries separately:
1 select to_char(p.paydate,'MMYYYY'),count(p.payamount) as response,sum(p.payamount) as income
2 from payment p
3 where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
4 and p.payamount > 0 and p.acctno='HOUSE'
5* group by to_char(p.paydate,'MMYYYY')
SQL> /
TO_CHA RESPONSE INCOME
------ ---------- ----------
012005 39562 1958762.76
022005 32133 1156852.83
032005 40108 1620443.9
042005 34582 1306551.47
052005 27300 954046.48
062005 19614 768746.78
072005 21653 969498.04
SQL> select substr(a.appealcode,4,3) as project, sum(a.pieces_mailed) as Dropped
2 from appeals a
3 where accountno='HOUSE'
4 and campaigncode LIKE 'D05%'
5 group by substr(a.appealcode,4,3);
PRO DROPPED
--- ----------
05A 888065
05B 916756
05C 997435
05D 824555
05E 821171
05F 600912
05G 1207396
7 rows selected.
I need to group the payment results by paydate and the appeals results by appealcode,
then they need to be combined as:
PRO DROPPED RESPONSE INCOME
------ ---------- --------- --------
05A 888065 39562 1958762.76
05B 916756 32133 1156852.83
05C 997435 40108 1620443.9
05D 824555 34582 1306551.47
05E 821171 27300 954046.48
05F 600912 19614 768746.78
05G 1207396 21653 969498.04
I've been testing all different ways, below is one sample query that generates errors:
had to split the count and sum functions to avoid the 'too many values' error
and now I get the:
ERROR at line 2:
ORA-01427: single-row subquery returns more than one row
I need to group by paydate but I don't want it in the select statement.
*******************************************************************************
Select substr(a.appealcode,4,3) as project, sum(a.pieces_mailed) as dropped,
(select count(p.payamount)
from payment p
where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
and p.payamount > 0 and p.acctno='HOUSE'
group by to_char(p.paydate,'MMYYYY')) response,
(select sum(pa.payamount)
from payment pa
where pa.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
and pa.payamount > 0
and pa.acctno='HOUSE'
group by to_char(pa.paydate,'MMYYYY')) income
from appeals a
where a.accountno='HOUSE'
and a.campaigncode LIKE 'D05%'
group by substr(a.appealcode,4,3);
hope this doesn't sound too far-fetched or confusing... if I join both tables through the appealcode, the payment results
exclude the income that has come in from appealcodes representing prior years.
thanks for any help/tips!
July 26, 2005 - 3:10 pm UTC
you lost me when you said you needed to group payments by date and then showed a merged result that has no date.
I don't see how those two sets are related.
I know Tom...
denni50, July 26, 2005 - 3:53 pm UTC
that's the problem is that the two sets are not related
but I need to relate them somehow.
In regards to the payment grouping this is what I meant:
SQL> select count(p.payamount),sum(payamount)
2 from payment p
3 where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
4 and p.payamount > 0 and p.acctno='HOUSE'
5 group by to_char(p.paydate,'MMYYYY');
COUNT(P.PAYAMOUNT) SUM(PAYAMOUNT)
------------------ --------------
39562 1958762.76
32133 1156852.83
40108 1620443.9
34582 1306551.47
27300 954046.48
19614 768746.78
21653 969498.04
7 rows selected.
....I need the result set to be grouped by paydate without
including paydate in the select clause, now I need to
find a way to combine this result with the result set
from the other query.
The only column that links the two tables is "appealcode".
I can't use that column to join because on one side of the
join I only want appealcodes that represent 2005...on the
other side of the join I need all appealcodes(regardless
of what year they were created)we received as income in 2005.
Our system of coding for appealcodes always references the
year and alpha code for month..so that appealcode:
DCH05A represents January 2005
DCH05B " February 2005
DCH04D " April 2004
a donor sends a donation with the DCH04D receipt chad
on 7/20/2005...the income is reported in 2005, however
the appealcode is NOT a 2005 appealcode... therefore I
do not want it selected in the grouping from the appeals table.
I've come up with a solution to update a non-used column
in the appeals table with the 'year' that an appealcode
is created. The payment table already has a year column
that gets updated with "2005" every time a payment gets
posted(regardless of what year the appealcode denotes)..
this way I can join the two "year" columns and that should
work.
Please feel free to offer any other suggestions if you
think there's a better way to accomplish this.
thanks
July 26, 2005 - 4:17 pm UTC
sorry -- I don't think I'll be able to work this one through my head here
why cannot you just decode the payment date to an appeal code to join? seems that should work?
Already tried something to that effect....
denni50, July 26, 2005 - 4:39 pm UTC
joining substr(appealcode,4,2)= to_char(paydate,'RR')
which is '05'='05'....query took off with the shuttle
Discovery and never returned.
thanks again!
July 26, 2005 - 5:34 pm UTC
(remember inline views, if you have two inline views -- join them.... they looked SMALL didn't they?)
Your mention of Decode
denni50, July 27, 2005 - 8:41 am UTC
...don't really understand your comment about something
looking SMALL...however you mentioned something about
using decode, it didn't register with me yesterday
afternoon until after I got home(and gave my zapped out
brain a little rest) and thought that may possibly work.
question on that possibility:
can you reference the decode value in a join..ex.
select decode(to_char(paydate,'MMYYYY'),'012005','05A','022005','05B',...etc)
....then join
where substr(appealcode,4,3)=decode(to_char(paydate,'MMYYYY'),'012005','05A',..etc
that would make the value coming from the appeals table
'05A'=the decode paydate value coming from payment.
...I'm getting ready to test that possibility.
thanks for that tip!
July 27, 2005 - 10:21 am UTC
you had two inline views with like 7 rows each. they looked small - or was that just an example
Tom....Perfect!
denni50, July 27, 2005 - 9:21 am UTC
SQL> select ap.project,ap.dropped,py.response,py.income
2 from(select decode(to_char(p.paydate,'MMYYYY'),'012005','05A',
3 '022005','05B',
4 '032005','05C',
5 '042005','05D',
6 '052005','05E',
7 '062005','05F',
8 '072005','05G') as project,
9 count(p.payamount) as Response,
10 sum(payamount) as Income from payment p
11 where p.paydate>=to_date('01-JAN-2005','DD-MON-YYYY')
12 and p.payamount > 0 and p.acctno='HOUSE'
13 group by to_char(p.paydate,'MMYYYY'))py,
14 (select substr(a.appealcode,4,3) as project, sum(a.pieces_mailed) as Dropped
15 from appeals a
16 where a.accountno='HOUSE'
17 and a.campaigncode LIKE 'D05%'
18 and a.campaigncode NOT LIKE 'D05H%'
19 group by substr(a.appealcode,4,3))ap
20 where py.project=ap.project;
PRO DROPPED RESPONSE INCOME
--- ---------- ---------- ----------
05A 888065 39562 1958762.76
05B 916756 32133 1156852.83
05C 997435 40108 1620443.9
05D 824555 34582 1306551.47
05E 821171 27300 954046.48
05F 600912 19614 768746.78
05G 1207396 21653 969498.04
7 rows selected.
Format numbers in cursor/query
Laxman Kondal, August 03, 2005 - 4:31 pm UTC
Hi Tom
I used Forms, and there I could format numbers; now I am trying to find a way to format numbers when fetched by a cursor/query.
Most of numbers are 6-8 digits and 0-8 decimals and its really difficult to read easily.
Decimals are controllable but can I put commas like what's in Forms - like 45,678,912.08 and 45,678,912.00
Thanks and regards
August 03, 2005 - 6:10 pm UTC
select to_char(x,'999,999,999.00') x from t;
please help me out
narayan rao sallakonda, August 10, 2005 - 2:00 pm UTC
how can we write a statement that gives me the number of columns in a particular table.
August 11, 2005 - 8:39 am UTC
select count(*) from user_tab_columns where table_name = :x
help on the query
A reader, August 18, 2005 - 10:34 pm UTC
Tom,
i have the tab:
user_id user_desc
U5CJD JOHN DOO (JD78687)
G3CSW SAM WANG (SW5678888)
M2XJG JULIE GAYLE (JG90)
I want a query to return:
U5CJD : JOHN DOO
G3CSW : SAM WANG
M2XJG : JULIE GAYLE
I'm having trouble making it happen.
Thanks a lot
August 18, 2005 - 11:27 pm UTC
so do I since I don't have any create tables or inserts to populate them to test with!
select user_id || ' : ' || substr( user_desc, 1, instr( user_desc, '(' )-1 )
from t;
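The same parsing, sketched in Python to make the substr/instr arithmetic explicit (note the SQL above keeps the trailing space before the '('; add a TRIM/rstrip if that matters):

```python
def strip_paren_suffix(user_desc):
    """Everything before the '(' -- the Python analogue of
    substr(user_desc, 1, instr(user_desc, '(') - 1), then trimmed."""
    return user_desc[:user_desc.index('(')].rstrip()

rows = [('U5CJD', 'JOHN DOO (JD78687)'),
        ('G3CSW', 'SAM WANG (SW5678888)'),
        ('M2XJG', 'JULIE GAYLE (JG90)')]
out = [f"{uid} : {strip_paren_suffix(desc)}" for uid, desc in rows]

assert out == ['U5CJD : JOHN DOO', 'G3CSW : SAM WANG', 'M2XJG : JULIE GAYLE']
```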
Please help
A reader, August 19, 2005 - 11:06 am UTC
Tom,
I am trying to work this query out. I would like to get the
following results for a school. Many students come in and
out of school. How can I get when the first left and when
they came in. For example for student: 214115
SELECT * FROM SCHOOL_DAYS
WHERE STUDENT_ID = '214115'
FIRST_OUT SCHOOL_IN STUDENT_ID
3/22/2005 2:35:23 PM 3/29/2005 8:49:28 AM 214115
select * from school_days
where student_id = '201048'
first_out school_in student_id
3/29/2005 5:05:04 PM 3/30/2005 3:08:39 PM 201048
--
--SQL Statement which produced this data:
-- SELECT * FROM SCHOOL_DAYS
-- WHERE STUDENT_ID = '214115'
--
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('07/13/2004 00:55:04', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('07/19/2004 09:05:26', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('07/19/2004 14:35:35', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('07/19/2004 14:53:33', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('07/22/2004 21:01:19', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('07/25/2004 02:24:46', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('01/13/2005 00:58:14', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('01/13/2005 14:25:17', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('03/19/2005 02:15:27', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/22/2005 14:35:23', 'MM/DD/YYYY HH24:MI:SS'), '214115');
Insert into SCHOOL_DAYS
(SCHOOL_IN, FIRST_OUT, STUDENT_ID)
Values
(TO_DATE('03/29/2005 08:49:28', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('03/29/2005 09:01:18', 'MM/DD/YYYY HH24:MI:SS'), '214115');
COMMIT;
August 20, 2005 - 4:21 pm UTC
You'll really have to explain the data a bit better? I don't fully understand what I'm looking at or how to determine "first in" and "first out"
More explanation needed
A reader, August 22, 2005 - 9:51 am UTC
Tom,
Thanks again for comment on my query. I don't know if
this can be done to be honest. But here it goes.
We are trying to answer the question of HOW long
a particular student was out of school. In other words, how long was he/she out between leaving and coming back in.
for example...
SCHOL IN FIRST_OUT
3/22/2005 8:27:03 PM 3/29/2005 5:05:04 PM
3/30/2005 3:08:39 PM 4/16/2005 1:40:44 PM
SCHOOL IN first_OUT
3/22/2005 8:27:03 PM 3/29/2005 5:05:04 PM
AND THEN CAME BACK IN
3/30/2005 3:08:39 PM
WE are looking for:
This student left the school on 3/29 and came back in on 3/30; therefore, the student was out of the school for 1 day. In other words, we would like to begin counting when the student left the school until he came back in. It seems to me it's like a criss-cross type of query.
If the student left 2/10 and came back in 2/15, the student was out for 5 days. What is throwing me off is that the data
may look like this..
school in first out
2/05 2/15
back in first_out
2/20 3/1
and I need to have it like this:
first out back in
2/15 2/20 (the student was out for 5 days in Feb., for example)
Again, the query should be FIRST OUT... when he/she went out and when he/she came back in. Sort of like criss-cross in and out.
August 23, 2005 - 3:57 am UTC
select school_in, first_out,
LEAD(school_in)
over (partition by student_id order by school_in) next_school_in
from table;
run that and see if it isn't fairly easy to see how to get what you need: (next_school_in - first_out) = number of days "out"; sum these up.
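What LEAD is doing here can be sketched in a few lines (Python, illustrative only; two consecutive rows for student 214115 from the inserts above):

```python
from datetime import datetime

rows = [  # (school_in, first_out)
    (datetime(2005, 3, 19, 2, 15, 27), datetime(2005, 3, 22, 14, 35, 23)),
    (datetime(2005, 3, 29, 8, 49, 28), datetime(2005, 3, 29, 9, 1, 18)),
]
rows.sort()  # order by school_in

# LEAD(school_in): pair each row's first_out with the NEXT row's school_in
out_periods = [(out, nxt_in)
               for (_, out), (nxt_in, _) in zip(rows, rows[1:])]
days_out = [(nxt_in - out).days for out, nxt_in in out_periods]

assert days_out == [6]   # out 3/22 14:35, back 3/29 08:49 -> 6 whole days
```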
Thanks!!! for guiding me
A reader, August 23, 2005 - 2:43 pm UTC
Tom,
Thanks for putting me in the right direction. The only thing
I have to figure out is when the student leaves in one month
and comes back in another. For example,
if the student left in Feb and came back in March. Right now I can't get that "pair". Any ideas???
select out_date, back_day_in, round(avg ( next_activity_date - out_date)) days_between
from ( select student_id,
lead(in_date) over (partition by student_id order by in_date) back_day_in,
out_date,
lead(out_date) over (partition by student_id order by in_date) next_activity_date
from t
where student_id = '214820'
and ((out_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS'))
or (in_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS')))
)
where back_day_in is not null
group by student_id,out_date,back_day_in
order by 1,2
August 24, 2005 - 8:42 am UTC
select out_date, back_day_in,
       round(avg( next_activity_date - out_date )) days_between
  from ( select student_id, in_date, out_date,
                lead(in_date) over (partition by student_id order by in_date)
                  back_day_in,
                lead(out_date) over (partition by student_id order by in_date)
                  next_activity_date
           from t
          where student_id = '214820'
       )
 where back_day_in is not null
   and ((out_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
                      and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS'))
     or (in_date between to_date('01-MAR-05 00:00:00', 'DD-MON-RR HH24:MI:SS')
                     and to_date('31-MAR-05 23:59:59', 'DD-MON-RR HH24:MI:SS')))
 group by out_date, back_day_in
 order by 1, 2
friend, August 24, 2005 - 3:34 am UTC
hi tom, could u pls send me oracle 9i documentation link
August 24, 2005 - 10:55 am UTC
"U" isn't available, they didn't come to Iceland with me. Is it OK if I send it to you?
http://otn.oracle.com/ -> documentation
The documentation for friend
A reader, August 24, 2005 - 9:03 am UTC
sql query
AD, September 11, 2005 - 5:30 pm UTC
Hi Tom,
Could you please help with the following sql query.
I have a table with acc_no, year_month, cycle which stores data for every month. From the current month I'd like to travel backwards and retrieve the first record where the cycle changes from <6 to 6.
create table tab1(acc number(10), year_month number(6), cycle number(2));
acc, year_month constitutes the key.
insert into tab1 values (100, 200507, 7);
insert into tab1 values (100, 200506, 6);
insert into tab1 values (100, 200505, 5);
insert into tab1 values (100, 200504, 4);
insert into tab1 values (100, 200503, 3);
insert into tab1 values (100, 200502, 6);
insert into tab1 values (100, 200501, 6);
insert into tab1 values (100, 200412, 7);
insert into tab1 values (200, 200507, 7);
insert into tab1 values (200, 200506, 6);
insert into tab1 values (200, 200505, 3);
insert into tab1 values (200, 200504, 4);
insert into tab1 values (200, 200503, 3);
insert into tab1 values (200, 200502, 2);
insert into tab1 values (200, 200501, 1);
insert into tab1 values (200, 200412, 0);
select * from tab1 order by acc, year_month desc
acc year_month cycle
--- -------------- -------
100 200507 7
100 200506 6 <== expected result
100 200505 5
100 200504 4
100 200503 3
100 200502 6
100 200501 6
100 200412 7
200 200507 7
200 200506 6 <== expected result
200 200505 3
200 200504 4
200 200503 3
200 200502 2
200 200501 1
200 200412 0
Thanks for your time,
Regards
September 11, 2005 - 6:59 pm UTC
ops$tkyte@ORA10G> select *
2 from (
3 select tab1.*,
4 lag(cycle) over (partition by acc order by year_month) last_cycle
5 from tab1
6 )
7 where cycle = 6 and last_cycle < 6
8 order by acc, year_month desc;
ACC YEAR_MONTH CYCLE LAST_CYCLE
---------- ---------- ---------- ----------
100 200506 6 5
200 200506 6 3
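The LAG trick above (compare each row with the previous row in its partition) is easy to model procedurally; a Python sketch over the same sample data:

```python
from itertools import groupby

rows = [  # (acc, year_month, cycle), already ordered by acc, year_month
    (100, 200412, 7), (100, 200501, 6), (100, 200502, 6),
    (100, 200503, 3), (100, 200504, 4), (100, 200505, 5),
    (100, 200506, 6), (100, 200507, 7),
    (200, 200412, 0), (200, 200501, 1), (200, 200502, 2),
    (200, 200503, 3), (200, 200504, 4), (200, 200505, 3),
    (200, 200506, 6), (200, 200507, 7),
]

result = []
for acc, grp in groupby(rows, key=lambda r: r[0]):    # partition by acc
    grp = sorted(grp, key=lambda r: r[1])             # order by year_month
    for prev, cur in zip(grp, grp[1:]):               # cur sees LAG(cycle)
        if cur[2] == 6 and prev[2] < 6:               # <6 -> 6 transition
            result.append((acc, cur[1], cur[2]))

assert result == [(100, 200506, 6), (200, 200506, 6)]
```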
one more query...
sd, September 14, 2005 - 5:40 am UTC
I have two tables, A and B, each with only one column, N.
A B
--- ----
1 1
2 2
3 4
5 6
I want to display only the rows with values 3, 4, 5 and 6. I've tried it with a full outer join and then MINUS; is there any other (better) way??
TIA
September 14, 2005 - 8:37 am UTC
I don't get the logic behind this at all. why 3,4,5 and 6.
and I see a single table with two columns A and B??
which query will be fast?
Alay, September 15, 2005 - 9:24 am UTC
Hi Tom,
If I have two tables, tableA with 1 crore rows and tableB with 10 rows, and I perform a join operation as follows:
(1) select * from tableA, tableB;
(2) select * from tableB, tableA;
Which query will execute faster, and why?
I have another question as well: what is the fastest way to know the number of rows in a table?
September 15, 2005 - 9:50 am UTC
neither, both are a cartesian product and will result in ( 10 * 1 crore) rows.
They are the same query. Using the CBO - there isn't really a difference
the way to know the number of rows in a table -- select count(*) from table;
How to replace null with some value ?
San, September 19, 2005 - 3:15 am UTC
Hello Tom,
I'm in the process of developing a "VALIDATION" routine that would cross check an existing schema with the base schema. The base schema is the schema which would be created at the time of installation of the application. At any time the client can execute this "VALIDATION" Routine to check the current status.
For simplicity, the existing schema is "IA_USER" and the validating schema is "VERIFY_USER".
I have written the following code, and there are 2 questions I'd request you to address.
1> I intend to display "VALIDATE INDEXES/ CONSTRAINT.." in case the query doesn't return any rows.
2> Is there a better alternative to the code ?
-- SET TERM OFF
SET FEEDBACK OFF
SET VERIFY OFF
REPHEADER PAGE CENTER 'PERFORMING TABLE/COLUMN VALIDATIONS'
TTITLE CENTER ==================== SKIP 1
col USER format a15
col TABLE format a15
col COLUMN format a40
col "DATA TYPE" format a40
set line 132
set pages 100
break on today skip 1 on user skip 0 on table skip 1 on column
spool c:\1.txt
select to_char(sysdate,'fmMONTH DD, YYYY') TODAY from dual;
repheader off
ttitle off
select a.owner "USER" , a.table_name "TABLE",
a.column_name ||' -> FAILED VALIDATION' "COLUMN",
a.data_type ||' -> FAILED VALIDATION' "DATA TYPE"
from dba_tab_columns a
where not exists (select 'x' from dba_tab_columns b
where a.table_name = b.table_name
and b.owner = 'VERIFY_USER'
and (b.column_name = a.column_name and b.data_type = a.data_type))
and exists (select 'x' from dba_tables c
where a.table_name = c.table_name
and c.owner = 'VERIFY_USER')
and a.owner = 'IA_USER'
and a.table_name not like '%BIN$%'
UNION
select a.owner "USER" , a.table_name "TABLE",
a.column_name ||' -> FAILED VALIDATION' "COLUMN",
a.data_type ||' -> FAILED VALIDATION' "DATA TYPE"
from dba_tab_columns a
where not exists (select 'x' from dba_tab_columns b
where a.table_name = b.table_name
and b.owner = 'IA_USER'
and (b.column_name = a.column_name and b.data_type = a.data_type))
and exists (select 'x' from dba_tables c
where a.table_name = c.table_name
and c.owner = 'IA_USER')
and a.owner = 'VERIFY_USER'
and a.table_name not like '%BIN$%'
/
spool off
ttitle off
repheader off
clear breaks
clear columns
set verify on
set feedback on
-- set term on
forgot to mention
San, September 19, 2005 - 3:16 am UTC
Tom, I forgot to mention. We are on 10g release 1.
Regards
San
replace no rows with some string
San, September 20, 2005 - 2:05 am UTC
Tom, thanks for the link. There, too, the question remains: how do I display some string if the query doesn't return any rows? One way I know is
select nvl(max(tablespace_name),'INVALID')
from dba_tablespaces
where tablespace_name = 'SOME_TABLESPACE';
select nvl(tablespace_name,'INVALID') doesn't return anything if the query doesn't return any rows.
I can replace my code with nvl(max...) or nvl(min..) but in case of multiple columns/ indexes failing validation, the query would return just 1 value (either max or min).
Thanks
San
September 20, 2005 - 10:05 am UTC
sure it does -- an aggregate without any group by always returns
o at least one row
o at most one row
ops$tkyte@ORA10GR2> set feedback 1
ops$tkyte@ORA10GR2> select max(dummy) from dual where 1=0;
M
-
1 row selected.
ops$tkyte@ORA10GR2> select nvl(max(dummy),'Y') from dual where 1=0;
N
-
Y
1 row selected.
so that is returning a single row.
FIFO in SQL
Parag Jayant Patankar, September 23, 2005 - 9:19 am UTC
Hi Tom,
I am having following table
create table t
(
indicator varchar2(1),
dt date,
amt number
);
insert into t values ('p', sysdate-100, 20);
insert into t values ('p', sysdate-90, 30);
insert into t values ('p', sysdate-85, 70);
insert into t values('s', sysdate-85, 60);
insert into t values('p', sysdate-83, 100);
insert into t values('s', sysdate-84, 40);
insert into t values('s', '01-aug-05', 80);
commit;
So my table having following records
18:32:51 SQL> select * from t;
I DT AMT
- --------- ----------
p 14-JUN-05 20
p 24-JUN-05 30
p 29-JUN-05 70
s 29-JUN-05 60
p 01-JUL-05 100
s 30-JUN-05 40
s 01-AUG-05 80
7 rows selected.
Indicator is purchase or sale. I want to deduct the sale amt from the purchase amt by the FIFO method and get the following output (or something similar)
Indicator Dt Amt Dt Sale Amt Bal of Pur Bal of Sale
p 14-Jun-05 20 29-Jul-05 60 0 40
p 24-Jun-05 30 29-Jul-05 40 0 10
p 29-Jun-05 70 29-Jul-05 10 60 0
30-Jun-05 40 20 0
01-Aug-05 80 0 60
p 01-Jul-05 100 160
Is it possible to generate this kind of output in SQL?
regards & thanks
pjp
SQL Query
Parag Jayant Patankar, September 26, 2005 - 10:55 am UTC
Hi Tom,
Is it possible to do the FIFO calculation in SQL for the question I asked above in this thread?
regards & thanks
pjp
September 27, 2005 - 9:24 am UTC
i didn't understand the output desired and there were no details about it supplied.
SQL Query Help
A reader, October 17, 2005 - 10:40 am UTC
Hi Tom,
I have a tree that looks like this:
A(1)
___________|__________
| |
B1(2) B2(3)
________|_______ ________|________
| | | |
C1(4) (5)C2 C3(6) C4(7)
| |
D1(8) D2(9)
__________|__________
| | |
E1(10) E2(11) E3(12)
|
F1(13)
And a table below. The key is the number in perenthesis, and the source_key is the key where it comes from:
NODE KEY SOURCE_KEY
---- ---------- ----------
A 1
B1 2 1
B2 3 1
C1 4 2
C2 5 2
C3 6 3
C4 7 3
D1 8 6
D2 9 7
E1 10 8
E2 11 8
E3 12 8
F1 13 12
I'd like a query that returns the following or something similar. My objective is: from any given node, give me all the nodes it comes from and all the nodes that come from it. I've had a hard time and can't seem to figure it out. Could you please help?
Example starting from node E2:
NODE KEY FROM_NODE FROM_NODE_KEY
---- --- ---------- -------------
E2 11 D1 8
D1 8 C3 6
C3 6 B2 3
B2 3 A1 1
A1 1
F1 13 E2 11
Example starting from node D1:
D1 8 C3 6
C3 6 B2 3
B2 3 A1 1
A1 8 E1 6
E2 11 D1 8
F1 13 E2 11
I'm using Oracle8i Release 8.1.7.4.1 - Production.
Thanks.
reader
A reader, October 18, 2005 - 7:16 pm UTC
Hi Tom
please see this query:
select department_id,count(employee_id) employee_count,last_name
from employees
group by department_id,last_name
order by department_id,employee_count;
getting output as:
DEPARTMENT_ID EMPLOYEE_COUNT LAST_NAME
------------- -------------- ----------
10 1 Whalen
20 1 Fay
Hartstein
30 1 Khoo
Tobias
Colmenares
Raphaely
Himuro
Baida
but how can i get the output to be:
DEPARTMENT_ID EMPLOYEE_COUNT LAST_NAME
------------- -------------- ----------
10 1 Whalen
20 2 Fay
Hartstein
30 6 Khoo
Tobias
Colmenares
Raphaely
Himuro
Baida
thanks
sachin
October 19, 2005 - 6:45 am UTC
select distinct
department_id,
count(employee_id) over (partition by department_id) employee_count,
last_name
from employees
order by 1, 2;
SQL
giri, October 19, 2005 - 1:40 pm UTC
Tom,
I am writing a pl/sql block..
where I am inserting data from one table into another table.
Say I have a table X which contains more than 1 million rows and non-distinct values for empno, i.e. empno can be duplicated. I am inserting the values into table Y, which has a unique constraint on empno. Now, in the cursor's select query I am not using DISTINCT, as the business logic doesn't need it.
But when inserting into Y I need to insert only distinct values because of the constraint. So whenever there is a second, duplicate record I want to skip it, go on to the next record, and insert without any interruption.
Can you please guide me how to do this?
October 19, 2005 - 3:44 pm UTC
insufficient data.
you say in the source table the empno isn't distinct, has duplicates. Ok, say you have
EMPNO ENAME
-------- --------
1 frank
1 mary
1 bob
which one goes into the other table and why?
SQL Query Help
A reader, October 19, 2005 - 3:49 pm UTC
Hi Tom,
I'd really appreciate it if you could look at the problem I posted on 10/17/05 about the tree nodes, under the title: SQL Query Help.
Thanks in advance.
October 19, 2005 - 4:40 pm UTC
if I don't see an obvious very fast answer and there are no create tables and no insert intos (eg: you expect me to take lots of time building your example)
I simply skip it.
And even if they are there no promises.
(yours looks like a union all of two connect bys -- one that connects down, one that connects up)
Here ya go (albeit 9iR2)....
Philip, October 19, 2005 - 4:40 pm UTC
Hi "A reader from VA",
I love these little teasers - so I thought would try to take a shot at it.
Please note you had a couple of errors - one in the insert for node F1 (it didn't follow the diagram - so I corrected it for this example...).
Here goes:
phil@ORA9iR2 -> @c:\asktom.sql
phil@ORA9iR2 -> DROP TABLE hier
2 /
Table dropped.
phil@ORA9iR2 ->
phil@ORA9iR2 -> CREATE TABLE hier (node VARCHAR2(2)
2 , KEY INT
3 , source_key INT
4 )
5 /
Table created.
phil@ORA9iR2 ->
phil@ORA9iR2 -> INSERT INTO hier VALUES ('A', 1, NULL);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('B1', 2, 1);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('B2', 3, 1);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('C1', 4, 2);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('C2', 5, 2);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('C3', 6, 3);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('C4', 7, 3);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('D1', 8, 6);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('D2', 9, 7);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('E1', 10, 8);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('E2', 11, 8);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('E3', 12, 8);
1 row created.
phil@ORA9iR2 -> INSERT INTO hier VALUES ('F1', 13, 11);
1 row created.
phil@ORA9iR2 -> COMMIT;
Commit complete.
phil@ORA9iR2 ->
phil@ORA9iR2 -> CREATE OR REPLACE PROCEDURE to_from_node( p_start_node IN hier.node%TYPE
2 , p_ref_cursor OUT sys_refcursor
3 )
4 AS
5 BEGIN
6 IF NOT p_ref_cursor%isopen THEN
7 OPEN p_ref_cursor FOR SELECT node
8 , KEY
9 , (SELECT node FROM hier WHERE KEY = main.source_key) AS from_node
10 , source_key AS from_node_key
11 FROM hier main
12 CONNECT BY KEY = PRIOR source_key
13 START WITH node = p_start_node
14 UNION ALL
15 SELECT *
16 FROM (SELECT node
17 , KEY
18 , PRIOR node AS from_node
19 , source_key AS from_node_key
20 FROM hier main
21 CONNECT BY PRIOR KEY = source_key
22 START WITH node = p_start_node
23 )
24 WHERE node <> p_start_node
25 ;
26 END IF ;
27 END;
28 /
Procedure created.
phil@ORA9iR2 ->
phil@ORA9iR2 -> VARIABLE b_start_node VARCHAR2(2) ;
phil@ORA9iR2 -> VARIABLE b_ref_cursor refcursor ;
phil@ORA9iR2 ->
phil@ORA9iR2 -> BEGIN
2 :b_start_node := 'E2' ;
3 to_from_node(p_start_node => :b_start_node, p_ref_cursor => :b_ref_cursor) ;
4 END;
5 /
PL/SQL procedure successfully completed.
phil@ORA9iR2 ->
phil@ORA9iR2 -> PRINT :b_ref_cursor ;
NO KEY FR FROM_NODE_KEY
-- ---------- -- -------------
E2 11 D1 8
D1 8 C3 6
C3 6 B2 3
B2 3 A 1
A 1
F1 13 E2 11
6 rows selected.
phil@ORA9iR2 ->
phil@ORA9iR2 -> BEGIN
2 :b_start_node := 'D1' ;
3 to_from_node(p_start_node => :b_start_node, p_ref_cursor => :b_ref_cursor) ;
4 END;
5 /
PL/SQL procedure successfully completed.
phil@ORA9iR2 ->
phil@ORA9iR2 -> PRINT :b_ref_cursor ;
NO KEY FR FROM_NODE_KEY
-- ---------- -- -------------
D1 8 C3 6
C3 6 B2 3
B2 3 A 1
A 1
E1 10 D1 8
E2 11 D1 8
F1 13 E2 11
E3 12 D1 8
8 rows selected.
phil@ORA9iR2 -> spool off
------------------------
Fair warning: this was done on Oracle 9i Release 2, and I don't even have any 8i databases installed anymore, so I didn't test it in 8i. The SYS_REFCURSOR stuff won't work; you'll have to create a REF CURSOR type or something like that for 8i (please upgrade if you can - 8i is unsupported).
One last thing: WHO-DEY, WHO-DEY, WHO-DEY think gonna beat dem Bengals...
reader
A reader, October 19, 2005 - 5:05 pm UTC
Hi Tom
is it possible to write this in a single query, i.e. with no nested query? for example, can we use joins in this:
Show the department number and the lowest salary of the department with the highest average
salary.
thanks
sachin
October 19, 2005 - 7:42 pm UTC
define "nested query" first, and be very very precise in your definition.
SQL
giri, October 20, 2005 - 6:05 am UTC
Tom,
sorry for the insufficient data.
Say i have a table X which is loaded from the file with values
No Dept S_Date(Sig Date)
-------
10 A 10/05/2005(DD/MM/YYYY)
10 A 10/08/2005
10 A 10/10/2005
10 A 10/11/2005
10 B 10/11/2005
From this table I am inserting into Y.
The cursor is: select no, dept from X
and into Y we are inserting only
10 A
If I do DISTINCT it will give both 10 A and 10 B.
Here it is something like "first come, first kept".
So can you please guide me on the logic?
October 20, 2005 - 8:29 am UTC
no idea what you are asking....
but to put data from table1 into table2, you would just use insert into table2 select * from table1, there would be no code.
and remember, rows in a table really don't have any "order" to them.
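[Ed.: if the intent is "keep one row per empno, whichever comes first", one hedged sketch is the following. The table and column names are taken from the earlier post and may not match the real schema, and ORDER BY rowid is arbitrary; substitute a real ordering column if "first" means something specific.]

```sql
-- a sketch, not the poster's actual tables: keep one row per empno,
-- skipping the duplicates that would violate Y's unique constraint
insert into y ( empno, ename )
select empno, ename
  from ( select x.empno, x.ename,
                row_number() over
                  ( partition by x.empno order by x.rowid ) rn
           from x )
 where rn = 1;
```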
SQL Query Help
A reader, October 20, 2005 - 11:31 am UTC
Tom - I'm sorry, I didn't know I had to give you create tables and insert intos. Now I do, and I'll make sure to do that next time.
Hello Philip from Cincinnati, OH USA - Thank you very much. I still don't know how to make the REF CURSOR work yet, but your query works beautifully.
I really appreciate all the help that I've gotten from this web site.
October 20, 2005 - 4:46 pm UTC
(it says that on the page you read to put this text on the forum!!)
<quote>
If your followup requires a response that might include a query, you had better supply very very simple create tables and insert statements. I cannot create a table and populate it for each and every question. The SMALLEST create table possible (no tablespaces, no schema names, just like I do in my examples for you)
</quote>
SQL Query
A Reader, October 20, 2005 - 11:56 am UTC
Hi Tom,
I have to write a query that will store the columns from one table as separate rows in another table. Please see below.
CREATE TABLE tab1
( col1 NUMBER(2),
col2 NUMBER(2),
col3 NUMBER(2),
col4 NUMBER(2),
col5 NUMBER(2),
col6 NUMBER(2),
col7 NUMBER(2),
col8 NUMBER(2),
col9 NUMBER(2),
col10 NUMBER(2));
INSERT INTO tab1
( col1,
col2,
col3,
col4,
col5,
col6,
col7,
col8,
col9,
col10
)
VALUES
( 1,
2,
3,
4,
5,
6,
7,
8,
9,
10
);
SELECT * FROM tab1;
COL1 COL2 COL3 COL4 COL5 COL6 COL7 COL8 COL9 COL10
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- -
1 2 3 4 5 6 7 8 9 10
CREATE TABLE tab2
( col NUMBER(2) );
All the columns in table tab1 should be inserted as separate rows in table tab2. So 'select col from tab2' should display the following:
COL
----------
1
2
3
4
5
6
7
8
9
10
Your help would be greatly appreciated.
Thanks
Ganesh
October 20, 2005 - 4:47 pm UTC
a multi-table insert is useful for this.
create table t ( id int, day1 int, day2 int, day3 int, day4 int );
insert into t
select rownum, dbms_random.random, dbms_random.random,dbms_random.random,dbms_random.random
from all_objects where rownum <= 100;
create table t2 ( id int, day varchar2(4), val int );
insert ALL
into t2 ( id, day, val ) values ( id, 'DAY1', day1 )
into t2 ( id, day, val ) values ( id, 'DAY2', day2 )
into t2 ( id, day, val ) values ( id, 'DAY3', day3 )
into t2 ( id, day, val ) values ( id, 'DAY4', day4 )
select * from t;
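[Ed.: the same pattern applied to the tab1/tab2 example from the question would look something like this sketch - each source column gets its own INTO clause:]

```sql
-- unpivot tab1's ten columns into ten rows of tab2
insert all
  into tab2 ( col ) values ( col1 )
  into tab2 ( col ) values ( col2 )
  into tab2 ( col ) values ( col3 )
  into tab2 ( col ) values ( col4 )
  into tab2 ( col ) values ( col5 )
  into tab2 ( col ) values ( col6 )
  into tab2 ( col ) values ( col7 )
  into tab2 ( col ) values ( col8 )
  into tab2 ( col ) values ( col9 )
  into tab2 ( col ) values ( col10 )
select * from tab1;
```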
SQL Query
A Reader, October 21, 2005 - 2:58 am UTC
Thanks very much for your feedback.
reader
A reader, October 21, 2005 - 6:10 pm UTC
i wrote the query like this:
select employee_id,last_name,salary,department_id,
avg(salary) over (partition by department_id) a_sal
from employees
group by employee_id,department_id,last_name,salary
order by employee_id;
in the answer it was like this:
SELECT e.employee_id, e.last_name,
e.department_id, AVG(s.salary)
FROM employees e, employees s
WHERE e.department_id = s.department_id
GROUP BY e.employee_id, e.last_name, e.department_id;
i have not understood what is happening in the answer query. i know it is a self join, but could you please explain how it works internally, i.e. how it shows the same answer as mine?
b) could you please also explain, in general, how to tackle a query when it is given: how do I start building the query from scratch? are there particular steps to follow or a diagram to make to simplify the query, or not?
October 22, 2005 - 10:33 am UTC
in answer to what?
reader
A reader, October 24, 2005 - 11:07 am UTC
Hi Tom
could you please answer my second question
thanks.
October 24, 2005 - 11:51 am UTC
too many "A reader"s here - not sure which second question you are referring to.
if you mean right above, that is the thing of books - my implementation of that answer is currently called "Effective Oracle by Design".
reader
A reader, October 25, 2005 - 6:14 pm UTC
Thank you for your answer immediately above
i have one more question..
how do I find ename from emp where ename contains 2 l's, whether the l's are at the beginning like llyat, in the middle like allen, at the end like dill, or anywhere else?
i have written a query -
SELECT ename
FROM emp
WHERE upper(ename) LIKE '%L%L%'
or upper(ename) LIKE 'LL%'
or upper(ename) LIKE '%LL';
but it is not finding all of them.
Is there a better way?
October 26, 2005 - 11:35 am UTC
well, %L%L% will find all of them by itself. (it'll also find alily - do you want it to?)
show me one that it does not find
Ravi, October 26, 2005 - 2:10 am UTC
This is one way of doing it. It will list every ename having 2 or more 'L's in the ename:
select ename
from emp
where instr(upper(ENAME),'L',1,2)>0
reader
A reader, October 26, 2005 - 10:51 am UTC
thank you very much Ravi.
sachin
Complex query
Vikas Khanna, October 27, 2005 - 8:15 am UTC
I would appreciate it if you could please help me write this complex query.
I have two tables:
T1
Id
Text
With Values :
1,various artists
2,various artists american idol season 2 finalists
3,various artists various artists - miscellaneous - exercise/meditation
T2
Id
Aggr_date
Request_count
1 01-SEP-2005 20
1 02-SEP-2005 30
2 01-SEP-2005 10
2 02-sep-2005 5
3 01-SEP-2005 4
3 02-SEP-2005 8
Select A.id,A.text,sum(request_count)
from
T1 A,T2 B
Where A.id = B.id
and aggr_date between to_date('01-SEP-2005','DD-MON-YYYY') and to_date('02-SEP-2005','DD-MON-YYYY')
group by A.id,A.text
having sum(request_count) > 20
Will result into qualifying:
A.id A.text Sum(Request_Count)
1 various artists 50
This will be called the popular query result. Now we have to find the matching Ids from table T1 which are NOT part of the popular query and which satisfy the following:
1. The text of the popular query is contained in the text of the remaining match-term ids, e.g. "various artists" is part of "various artists american idol season 2 finalists" and also of "various artists various artists - miscellaneous - exercise/meditation", when we check with the Instr function.
2. Since both satisfy that, we need to pick the one whose length(text) is the MAX.
The result set should be:
Id 1 corresponds to Id 3
So the 3rd id should be the one chosen. However, if the length of the text is the same, then SUM(request_count) should be chosen as the criterion. As
Id Sum(request_count)
2 15
3 12
Then id no. 2 qualifies.
The result set should be:
Id 1 corresponds to Id 2
Thanks
October 27, 2005 - 12:32 pm UTC
no create table...
no insert intos....
no looking by me.......
Listed down the DDL,DML's
Vikas, October 28, 2005 - 3:59 am UTC
Hi Tom,
Please find the DDL and the subsequent DML's:
Create table t1 (id int, text varchar(50));
Table created.
Create table t2 (id int, aggr_date DATE, Request_Count int);
Table created.
Insert into t1 Values (1,'various artists');
Insert into t1 Values (2,'various artists american idol season 2 finalists');
Insert into t1 Values (3, 'various artists - miscellaneous - exercise');
insert into t2 values (1,'01-SEP-2005',20);
insert into t2 values (1,'02-SEP-2005',30);
insert into t2 values (2,'01-SEP-2005',10);
insert into t2 values (2,'02-SEP-2005',5);
insert into t2 values (3,'03-SEP-2005',4);
insert into t2 values (3,'03-SEP-2005',8);
Thanks in anticipation.
October 28, 2005 - 12:56 pm UTC
ok, what if the "most popular" stuff results in hundreds of rows - do you have the time to wait for all of the LIKE searches to complete and to find the maximum length?
we can do this, it won't be "fast"
Please HELP!
vikas Khanna, November 01, 2005 - 11:48 pm UTC
Hi Tom,
The Popular query will return only a handful of rows, so please help me in writing this query. It's getting too complex for me.
Thanks in anticipation!
Help required
Kumar, November 09, 2005 - 1:28 am UTC
Dear Tom,
I have tables like,
create table open_item(form_id number,
open_item_amount float)
create table form_data (form_id number,
period date)
insert into open_item values (1, 100);
insert into open_item values (2, 100);
insert into open_item values (3, 100);
insert into open_item values (4, 100);
insert into open_item values (5, 100);
insert into form_data values (1, '01-Jan-1995');
insert into form_data values (2, '01-Dec-1994');
I want an output like,
Current Previous Variance
100 100 0
....
I have written a query like,
1 select decode(y.period, to_date('1/1/1995', 'mm/dd/yyyy'), x.open_item_amount) as "current" ,
2 decode(y.period, to_date('12/1/1994','mm/dd/yyyy'), x.open_item_amount) as "previous",
3 decode(y.period, to_date('1/1/1995', 'mm/dd/yyyy'), x.open_item_amount)-
4 decode(y.period, to_date('12/1/1994','mm/dd/yyyy'), x.open_item_amount) as "Variance"
5 from open_item x, form_data y
6 where x.form_id = y.form_id
7* and y.period between to_date('12/1/1994','mm/dd/yyyy') and to_date('1/1/1995', 'mm/dd/yyyy')
SQL> /
current previous Variance
---------- ---------- ----------
100
100
But, I am getting the above output. How to solve this problem? Can you please help me?
Thanks in advance.
Note: form_id is different in both the periods.
November 09, 2005 - 9:34 am UTC
it is sort of hard to reverse engineer a big query that gives the wrong answer and then deduce what you might have been thinking and figure out what you really wanted.....
perhaps this gets you started?
ops$tkyte@ORA10GR2> select f.*, o.*,
2 lag(o.open_item_amount) over (order by f.form_id) last_open
3 from form_data f, open_item o
4 where f.form_id = o.form_id
5 /
FORM_ID PERIOD FORM_ID OPEN_ITEM_AMOUNT LAST_OPEN
---------- ------------------ ---------- ---------------- ----------
1 01-JAN-95 1 100
2 01-DEC-94 2 100 100
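[Ed.: one way to carry that through to the requested current/previous/variance row, as a sketch. It hard-codes the same two periods the original query hard-coded, and assumes exactly one open_item row per period.]

```sql
-- self-join the current and previous periods side by side
select cur.open_item_amount                          as "current",
       prv.open_item_amount                          as "previous",
       cur.open_item_amount - prv.open_item_amount   as "variance"
  from open_item cur, form_data fc,
       open_item prv, form_data fp
 where cur.form_id = fc.form_id
   and prv.form_id = fp.form_id
   and fc.period = to_date('01/01/1995','mm/dd/yyyy')
   and fp.period = to_date('12/01/1994','mm/dd/yyyy');
```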
Vikas Khanna, November 14, 2005 - 3:06 pm UTC
Hi Tom,
The Popular query will return only a handful of rows, so please help me in
writing this query. It's getting too complex for me.
Would appreciate for your help!
Thanks!
November 14, 2005 - 4:12 pm UTC
ops$tkyte@ORA10GR2> with popular
2 as
3 (Select A.id,A.text,sum(request_count) cnt
4 from T1 A,T2 B
5 Where A.id = B.id
6 and aggr_date between to_date('01-SEP-2005','DD-MON-YYYY') and to_date('02-SEP-2005','DD-MON-YYYY')
7 group by A.id,A.text
8 having sum(request_count) > 20 )
9 select *
10 from (
11 select t1.id, t1.text, sum(request_count) cnt2
12 from t1, t2, popular
13 where t1.text not in ( select text from popular where text is not null )
14 and t1.text like '%'||popular.text||'%'
15 and t1.id = t2.id
16 group by t1.id, t1.text
17 order by length(t1.text) DESC, sum(request_count) DESC
18 )
19 where rownum = 1;
ID TEXT CNT2
---------- -------------------------------------------------- ----------
2 various artists american idol season 2 finalists 15
isn't what you said would be the outcome, but does follow the "rules" you laid out...
Good Response
Vikas, November 15, 2005 - 8:32 pm UTC
Thanks a ton!
getting a pre-defined format from a string
Thiru, November 18, 2005 - 4:33 pm UTC
Hi Tom,
Not sure whether my question fits into this thread but anyway hoping to get an answer from you.
I need to clip two dates from a string.
The string will always be 'between date1 and date2'
eg:
string could be :
between '01 JAN 2005' AND '10-SEP-2005'
or
between '01 JAN 2005' AND '10-SEP-2005'
(with gaps )
How do I get these two dates into two variables in my stored proc. These strings as mentioned above is part of a large string and are passed in as parameters to a proc.
November 19, 2005 - 9:47 am UTC
substr and instr.
look for first ', second ' using instr
now substr.
look for third ' and fourth ' using instr
now substr.
if you have to find the between itself (assuming no other betweens) - use instr to find between, substr off everything in front of it. Then, using instr, find the fourth ' and substr off everything after that.
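[Ed.: spelled out, the first recipe might look like this sketch. It assumes the two dates are always the first and second single-quoted literals in the string.]

```sql
declare
    l_str   varchar2(100) := 'between ''01 JAN 2005'' AND ''10-SEP-2005''';
    l_date1 varchar2(30);
    l_date2 varchar2(30);
begin
    -- first quoted literal: the text between the 1st and 2nd single quotes
    l_date1 := substr( l_str,
                       instr(l_str,'''',1,1)+1,
                       instr(l_str,'''',1,2)-instr(l_str,'''',1,1)-1 );
    -- second quoted literal: the text between the 3rd and 4th single quotes
    l_date2 := substr( l_str,
                       instr(l_str,'''',1,3)+1,
                       instr(l_str,'''',1,4)-instr(l_str,'''',1,3)-1 );
    dbms_output.put_line( l_date1 || ' / ' || l_date2 );
end;
/
```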
using quotes
A reader, November 21, 2005 - 12:49 pm UTC
Hi Tom
may I please know what the difference is between sql statements run as such, without quotes, and with quotes
like select * from emp;
and 'select * from emp'
i could not find an answer to this
thanks
sachin
November 21, 2005 - 2:23 pm UTC
select * from emp
is a sql statement.
'select * from emp'
is a character string, not much different from 'hello world'
not sure what you mean?
A reader, November 21, 2005 - 2:34 pm UTC
like, if we put that (sql statement) in ' ', does it become dynamic sql, or what is the use of putting a sql statement like this:
'select * from emp';
November 21, 2005 - 3:15 pm UTC
in plsql, sure, you would be
open ref_cursor for 'select * from emp';
that is now "dynamic sql", sql not visible to programming environment.
but - it is just a string, and I assume you are only talking about plsql at this point.
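[Ed.: as a small sketch of what "sql in a string" looks like in plsql - the statement is built and opened at runtime, with a bind variable:]

```sql
declare
    l_cursor sys_refcursor;
    l_ename  emp.ename%type;
begin
    -- the query is just a string here, compiled when OPEN runs
    open l_cursor for 'select ename from emp where deptno = :d' using 10;
    loop
        fetch l_cursor into l_ename;
        exit when l_cursor%notfound;
        dbms_output.put_line( l_ename );
    end loop;
    close l_cursor;
end;
/
```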
A reader, November 21, 2005 - 3:23 pm UTC
exactaly i am talking about pl/sql only
1) may i know under which circumstances we can put that in quotes to make it a string, and
2) what is the use of that - like, will it make the code run faster, or what?
i am very much confused about it, please explain
November 21, 2005 - 5:17 pm UTC
you should always use static sql whenever possible.
you should resort to dynamic sql (executing sql stored in a string, your "quotes" comment) only when you absolutely cannot do it any other way.
A reader, November 21, 2005 - 5:27 pm UTC
thanks for all your answers, but my question is WHAT do we get by doing that (putting the sql in quotes)... WHY do we do THAT??????
Thanks and Regards
Sachin
November 21, 2005 - 5:34 pm UTC
you get dynamic sql?
you can assign any string to a plsql variable and execute it.
meaning you can make up the sql at runtime.
but - don't do that unless you HAVE to.
A reader, November 21, 2005 - 5:38 pm UTC
THANKS got it.........
sujana
sujana, December 13, 2005 - 12:53 pm UTC
Hi Tom,
I have three questions...like
1. a pl/sql block to generate prime numbers
2. a pl/sql block to check whether the given string is a palindrome or not
3. a pl/sql block to allocate rank for a student database using cursors. The fields are sno (primary key), sname, regno, maths, physics, chemistry, total, rank.
I hope i will get your answers soon
thanks,
sujana
December 13, 2005 - 1:26 pm UTC
this sounds more like "I have three assignments for you to write code for me" doesn't it?
or even homework...
Tell your teacher that #3 is a bad approach since the builtin SQL function rank() exists. Also, putting grades "in record" like that in the attributes math, physics, etc. is a really bad design.
#1 and #2 are pretty well known algorithms, google around - then implement them in code.
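[Ed.: for the record, the builtin Tom mentions looks like this - a sketch against a hypothetical student table with the columns listed in the question:]

```sql
-- rank() does the ranking declaratively; no cursor loop needed
select sno, sname, total,
       rank() over ( order by total desc ) as rnk
  from student;
```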
SQL case
Tony, December 14, 2005 - 1:49 am UTC
Tom,
I am facing a difficult situation here.
I have a program which will accept the expression from the metadata tables and run in proc code.
The code is written in such a way that it will accept only 60000 characters.
My expressions are going beyond the limit
Now I have to reduce the length of all the expression
the exp like
case
when Annual Income >0 and <=5000 then '0 5000'
when Annual Income >5000 and <=8000 then '5001 8,000'
when Annual Income >8000 and <=12000 then '8,001 12,000'
when Annual Income >12000 and <=15000 then '12,001 15,000'
when Annual Income >15000 and <=18000 then '15,001 18,000'
when Annual Income >18000 and <=26000 then '18,001 26,000'
when Annual Income >26000 and <=32000 then '26,001 32,000'
when Annual Income >32000 and <=42000 then '32,001 42,000'
when Annual Income >42000 and <=48000 then '42,001 48,000'
end
this is an example..
can you please guide me how to make it smaller without changing the output?
thanks in advance
December 14, 2005 - 8:13 am UTC
alias annual_income to be "ai" (shorter)
case
when ai<=5000 then '0-5000'
when ai<=8000 then '5001-8000'
...
you don't need the > part.
if the ranges were "equal", we could just divide of course.
Table method
Bob B, December 14, 2005 - 10:44 am UTC
Or you could create a table and a view
CREATE TABLE INCOME_THRESHHOLDS(
max NUMBER
);
ALTER TABLE INCOME_THRESHHOLDS ADD(
PRIMARY KEY( max )
);
INSERT INTO INCOME_THRESHHOLDS VALUES( 5000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 8000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 12000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 15000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 18000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 26000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 32000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 42000 );
INSERT INTO INCOME_THRESHHOLDS VALUES( 48000 );
CREATE OR REPLACE VIEW INCOME_RANGES AS
SELECT
A.MIN,
A.MAX,
TO_CHAR( A.MIN + 1, 'fm999,999' ) || ' - ' || TO_CHAR( A.MAX, 'fm999,999' ) DESCRIPTION
FROM (
SELECT
NVL( LAG( it.MAX ) OVER ( ORDER BY it.MAX ), -1 ) MIN,
it.MAX
FROM INCOME_THRESHHOLDS it
) A
---------------------------------------------------
And then:
(
SELECT ir.DESCRIPTION
FROM INCOME_RANGES ir
WHERE ir.MIN < Annual_Income AND Annual_Income <= ir.MAX
) INCOME_DESC_TEXT
Or you could join it to your query, assuming that annual income is not null and each Annual Income is in an income range:
SELECT your_query.*, ir.DESCRIPTION
FROM (
YOUR QUERY
) your_query, income_ranges ir
WHERE ir.MIN < your_query.Annual_Income
AND your_query.Annual_Income <= ir.MAX
Sql Question
Yoav, January 08, 2006 - 2:29 am UTC
Hi Tom,
I have a varchar2(20) field (in a 9iR2 environment) that should contain only capital letters, numbers, or a combination of numbers and letters.
1. How can I create such a constraint?
2. How can I write a single select statement that shows
the data that doesn't meet this requirement?
Some data:
create table t
(a varchar2(20));
insert into t values('ABC123');
insert into t values('A*B');
insert into t values('_CD');
insert into t values('E%F');
insert into t values('123');
insert into t values('abc');
COMMIT;
SELECT * FROM T;
A
--------
ABC123
A*B ==>
_CD ==>
E%F ==>
123
abc ==>
Thank you.
January 08, 2006 - 11:43 am UTC
one approach... regular expressions in 10g could make this more terse perhaps
use same function in SQL but say "where function IS NOT NULL" to find those that will be in violation (or add the check constraint with EXCEPTIONS INTO to have the bad rows rowids logged into a table)
ops$tkyte@ORA10GR2> create table t
2 ( x varchar2(20)
3 check(replace(translate(x,'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ',rpad('0',36,'0')),'0','') is null)
4 )
5 /
Table created.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> insert into t values('ABC123');
1 row created.
ops$tkyte@ORA10GR2> insert into t values('A*B');
insert into t values('A*B')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated
ops$tkyte@ORA10GR2> insert into t values('_CD');
insert into t values('_CD')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated
ops$tkyte@ORA10GR2> insert into t values('E%F');
insert into t values('E%F')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated
ops$tkyte@ORA10GR2> insert into t values('123');
1 row created.
ops$tkyte@ORA10GR2> insert into t values('abc');
insert into t values('abc')
*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.SYS_C006375) violated
ops$tkyte@ORA10GR2> COMMIT;
Commit complete.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> SELECT * FROM T;
X
--------------------
ABC123
123
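[Ed.: the 10g regular-expression variant Tom alludes to might look like this sketch - terser, same effect for non-null values:]

```sql
create table t
( x varchar2(20)
  -- allow only one or more of A-Z / 0-9, nothing else
  check ( regexp_like( x, '^[A-Z0-9]+$' ) )
);
```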
Michel Cadot, January 08, 2006 - 12:37 pm UTC
Hi,
This one is a less complex constraint:
SQL> create table t
2 (a varchar2(20)
3 check (translate(a,'_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ','_') is null));
Table created.
SQL> insert into t values('ABC123');
1 row created.
SQL> insert into t values('A*B');
insert into t values('A*B')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated
SQL> insert into t values('_CD');
insert into t values('_CD')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated
SQL> insert into t values('E%F');
insert into t values('E%F')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated
SQL> insert into t values('123');
1 row created.
SQL> insert into t values('abc');
insert into t values('abc')
*
ERROR at line 1:
ORA-02290: check constraint (MICHEL.SYS_C002119) violated
SQL> COMMIT;
Commit complete.
SQL>
SQL> SELECT * FROM T;
A
--------------------
ABC123
123
2 rows selected.
Regards
Michel
Problem with - EXCEPTIONS INTO caluse
A reader, January 09, 2006 - 6:51 am UTC
Hi Tom,
I tried to use the EXCEPTIONS INTO clause as you suggested.
The table T was altered, but the EXCEPTIONS table does not exist.
Is there a file I should run to create this table?
Thank You.
CONTROL-CNTL> create table t
2 ( barcode varchar2(20)
3 check(replace(translate(barcode,'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ',rpad('0',36,'0')),'0',
'') is null)
4 )
5 /
Table created.
CONTROL-CNTL> ALTER TABLE t
2 ENABLE VALIDATE CONSTRAINT SYS_C0035913
3 EXCEPTIONS INTO EXCEPTIONS
4 /
Table altered.
CONTROL-CNTL> desc EXCEPTIONS;
ERROR:
ORA-04043: object EXCEPTIONS does not exist
January 09, 2006 - 8:04 am UTC
ops$tkyte@ORA10GR2> <b>@?/rdbms/admin/utlexcpt</b>
ops$tkyte@ORA10GR2> rem
ops$tkyte@ORA10GR2> rem $Header: utlexcpt.sql,v 1.1 1992/10/20 11:57:02 GLUMPKIN Stab $
ops$tkyte@ORA10GR2> rem
ops$tkyte@ORA10GR2> Rem Copyright (c) 1991 by Oracle Corporation
ops$tkyte@ORA10GR2> Rem NAME
ops$tkyte@ORA10GR2> Rem except.sql - <one-line expansion of the name>
ops$tkyte@ORA10GR2> Rem DESCRIPTION
ops$tkyte@ORA10GR2> Rem <short description of component this file declares/defines>
ops$tkyte@ORA10GR2> Rem RETURNS
ops$tkyte@ORA10GR2> Rem
ops$tkyte@ORA10GR2> Rem NOTES
ops$tkyte@ORA10GR2> Rem <other useful comments, qualifications, etc.>
ops$tkyte@ORA10GR2> Rem MODIFIED (MM/DD/YY)
ops$tkyte@ORA10GR2> Rem glumpkin 10/20/92 - Renamed from EXCEPT.SQL
ops$tkyte@ORA10GR2> Rem epeeler 07/22/91 - add comma
ops$tkyte@ORA10GR2> Rem epeeler 04/30/91 - Creation
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table exceptions(row_id rowid,
2 owner varchar2(30),
3 table_name varchar2(30),
4 constraint varchar2(30));
Table created.
A reader, January 13, 2006 - 1:48 pm UTC
I tried executing the following:
select dn, stragg(charge) from charges;
and I am getting
ERROR at line 1:
ORA-00904: "STRAGG": invalid identifier
Thanks.
January 13, 2006 - 1:49 pm UTC
did you do the search? did you read about it? stragg is something I wrote.
Alex, January 13, 2006 - 2:41 pm UTC
Should he be posting that stuff? Are those real addresses?
January 15, 2006 - 3:17 pm UTC
indeed, they were - and all close to each other (same neighborhood). I removed them.
ansi sql query weirdness
Stefan, January 17, 2006 - 8:38 am UTC
Hi tom,
I just came across what I believed to be a bug in Oracle, and support then told me that this is correct behaviour according to the ANSI SQL standard. Although this just doesn't make any sense to me:
grant create session, create table to test identified by test
/
connect test/test
create table t1 (x int)
/
create table t2 (y int)
/
select * from t1 where x in (select x from t2);
this - to my surprise - returns "no rows selected".
but isn't that just plain wrong? There is no column "X" in table t2; how come it is processed as if there was such a column? Is there any use for this "feature"? :)
January 17, 2006 - 9:08 am UTC
not weird at all - it is simply a correlated subquery
you would not be surprised if:
select *
from t1
where t1.x in ( select t1.x
from t2)
worked right? This is the same thing - it is just a correlated subquery.
Alexander the ok, January 17, 2006 - 9:49 am UTC
I'm with Stefan on this one I don't understand that either.
How can you select a column from a table where it doesn't exist? Also why would you do that, what does it mean?
Is that the same as
select *
from t1
where t1.x = t1.x
In this case?
January 17, 2006 - 10:16 am UTC
it is not the same as "where t1.x = t1.x"
select * from t1 where x in (select x from t2);
is the same as:
select *
from t1
where x is not null
and exists ( select null
from t2 );
but, does this query make sense to you:
select *
from emp
where deptno in ( select dept.deptno
from dept
where dept.deptno = emp.deptno )
if so (correlated subquery) then
select *
from emp
where deptno in ( select emp.deptno
from dept
where dept.deptno = emp.deptno )
should likewise make sense, and if dept didn't have a column named deptno but rather SOME_THING_dept, then:
select *
from emp
where deptno in ( select deptno
from dept
where dept.SOME_THING_deptno = deptno )
would likewise make sense.
select * from t1 where x in ( select x from t2 )
is simply a correlated subquery (and an excellent reason to consider using correlation names in all cases ;)
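To see the mechanics outside of SQL, here is a small Python sketch (assumed toy data, purely for illustration) of why the inner "x" resolves to t1.x, making the query degenerate into "x is not null and t2 has at least one row":

```python
t1 = [{"x": 1}, {"x": 2}, {"x": None}]
t2 = [{"y": 10}]  # note: t2 has no column "x"

def query(t1, t2):
    result = []
    for outer in t1:
        # "select x from t2" yields the OUTER row's x, once per t2 row,
        # because x is resolved against t1 - a correlated subquery
        subquery = [outer["x"] for _ in t2]
        if outer["x"] is not None and outer["x"] in subquery:
            result.append(outer)
    return result

print(query(t1, t2))  # both non-null rows match; NULL never equals anything
print(query(t1, []))  # an empty t2 -> no rows, as Stefan observed
```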
Alexander the ok, January 17, 2006 - 4:13 pm UTC
Yes those make sense. I just don't see why it's a correlated subquery.
select * from t1 where x in ( select x from t2 )
Where's the correlation? Doesn't there need to be a join from t1 to t2 in the subquery for this to be considered correlated? Your other examples have them.
January 17, 2006 - 4:28 pm UTC
why do you care if the x is after the word select or after the word where?
It matters NOT where the "x" appears.
You were happy with:
select *
from emp
where deptno in ( select deptno
from dept
where dept.SOME_THING_deptno = deptno )
this is not any different.
You were happy with this:
select *
from emp
where deptno in ( select emp.deptno
from dept
where dept.deptno = emp.deptno )
this is not any different.
there never "need" be a join, ever. Not that the above EMP/DEPT examples are "joins", they are correlated subqueries with (arbitrary) predicates - the predicates need not be there, they are not "special" or magic.
Alexander the ok, January 18, 2006 - 9:02 am UTC
Tom sorry if I'm not making sense. My question kind of went from "how does that statement compile" to just trying to get my terminology straight. I see many names for different things like correlated subquery, scalar subquery etc. What I thought made the subquery correlated was a "correlation" via join in the subquery to the parent statement as indicated here:
select *
from emp
where deptno in ( select emp.deptno
from dept
where dept.deptno = emp.deptno ) <-----
and the other did not contain this, thus I wonder why you called it correlated. But it seems I was wrong to think this.
January 19, 2006 - 7:44 am UTC
it is correlated because the subquery refers to EMP.
any "subquery that refers to its outer query" is correlated. It is not "joined" to; it uses correlation variables from the outer query.
Trevor, January 18, 2006 - 2:02 pm UTC
Alexander and Stefan, I think the easiest way to understand why this query works is to look at the scope of what is being selected. For the example given above:
create table t1 (x int)
/
create table t2 (y int)
/
select * from t1 where x in (select x from t2);
The reason why it returns no rows selected rather than an error is because it is looking at t2, seeing there is no x, and then going ahead and selecting x from t1 since that is the next up the chain. In other words, what you are expecting it to be doing is:
select * from t1 where t1.x in (select t2.x from t2);
and instead, what it is doing is:
select * from t1 where t1.x in (select t1.x from t2);
(It took me a while to wrap my head around this, and once I did, Tom's answer made much more sense.)
Alexander the ok, January 19, 2006 - 9:15 am UTC
<quote>
The reason why it returns no rows selected rather than an error is because it is
looking at t2, seeing there is no x, and then going ahead and selecting x from
t1 since that is the next up the chain.
</quote>
Is this true Tom? If so this would be the missing "piece" I was failing to see. The examples you gave were selecting a column in the subquery that actually existed in the table.
I just couldn't understand what that would be selecting. Even though I know it's using t1.x, but t1.x from t2?? It's just tough to compute.
I'll really be able to mess with some people at work with this one.
January 19, 2006 - 1:26 pm UTC
sorry, I sort of thought I said this:
you would not be surprised if:
select *
from t1
where t1.x in ( select t1.x
from t2)
worked right? This is the same thing
I put the correlation names in there for you - the ones that were implied....
If you want to mess with people at work - show them this and ask them "how" (requires 10g and above)
ops$tkyte@ORA10GR2> select * from t;
ID MSG
---------- --------------------
3 blah blah blah
ops$tkyte@ORA10GR2> select * from t;
ID MSG
---------- --------------------
2 boo
ops$tkyte@ORA10GR2> select * from t;
ID MSG
---------- --------------------
1 go away
ops$tkyte@ORA10GR2> select count(*) from t where id > 0;
COUNT(*)
----------
3
Alexander the ok, January 19, 2006 - 1:31 pm UTC
It's no problem, it's a good thing to know, thank you.
How was the training? I wonder what the top 5 things done wrong are. Storing dates as strings in there ;)
January 19, 2006 - 1:56 pm UTC
Storing dates in strings was in the effective schema section!
Stefan, January 20, 2006 - 6:03 am UTC
Tom, in response to your previous post.
What would happen if you were to issue a 4th select * from t ? :)
Trevor, thanks for the clarification; I think I got it now as well
January 20, 2006 - 10:29 am UTC
I should mention that in the above silly example, I was in a single user database - there were no other sessions mucking with the data....
Alexander, January 20, 2006 - 9:14 am UTC
I'm going to have to go through the 10g new features guide to figure out how you did that. I don't have 10g yet.
Are you going to leave me hangin on the top 5 things done wrong?
January 20, 2006 - 10:34 am UTC
o not using binds
o not having a test environment
o not considering statements that begin with "CREATE", "ALTER", "DROP" and so on
to be SOURCE CODE (and hence not using any sort of configuration management on
it and hence doing silly things like diffing two databases in an attempt to
figure out what is different between version 1 and version 2)
o trying to be database independent
o the DBA vs Developer relationship
sql query
mn, January 23, 2006 - 9:21 am UTC
Hi Tom,
When executing query (1) below, the LC alias is doubling instead of returning 28500. In query (2), when I use /COUNT(*), I get the expected results. Is it possible to get the expected results without using /COUNT(*)?
(1)
SELECT CASE WHEN sum(TF_LEGR_BAL_CTRY)>0 AND GL_BAL_IND='Y'
THEN SUM(TF_LEGR_BAL_CTRY) ELSE 0 END ,TF_LEGR_BAL_CTRY
FROM TGDW_GTFMAS,TGDW_GFACLMT,PAR_GL
WHERE GL_CD=TF_GL_CD AND TF_ACC_NBR=FLN_ACC_NBR GROUP BY GL_BAL_IND
SQL> /
CASEWHEN sum(TF_LEGR_BAL_CTRY)>0ANDGL_BAL_IND='Y'THENSUM(TF_LEGR_BAL_CTRY)ELSE0 END LC
----------------------------------------------------------------------------
TF_LEGR_BAL_CTRY
----------------
0
0
57000
28500
(2)
SELECT CASE WHEN sum(TF_LEGR_BAL_CTRY)>0 AND GL_BAL_IND='Y'
THEN SUM(TF_LEGR_BAL_CTRY)/COUNT(*) ELSE 0 END ,TF_LEGR_BAL_CTRY
FROM TGDW_GTFMAS,TGDW_GFACLMT,PAR_GL
WHERE GL_CD=TF_GL_CD AND TF_ACC_NBR=FLN_ACC_NBR GROUP BY GL_BAL_IND
5 /
ASEWHEN sum(TF_LEGR_BAL_CTRY)>0ANDGL_BAL_IND='Y'THENSUM(TF_LEGR_BAL_CTRY)/COUNT(*)ELS
-------------------------------------------------------------------------------
F_LEGR_BAL_CTRY
---------------
0
0
28500
28500
These are below tables involved in my query
SQL> select * from tgdw_gtfmas;
TF_LEGR_BAL_CTRY TF_GL_CD LOAD_ TF_ACC_NBR
---------------- ---------- ----- ----------
28500 102345 SG 1000
0 102345 SG 1000
SQL> select * from tgdW_gfaclmt;
FLN_ACC_NBR
-----------
1000
SQL> select * from par_gl;
GL_CD G I
---------- - -
102345 Y I
102345 Y 0
Thanks in advance
MM
January 23, 2006 - 10:38 am UTC
no creates...
no insert intos...
but basically you have a partial cartesian join here: you join by "A", you group by "B", you are aggregating multiple records into one. Dividing the sum by the count of records aggregated "fixed" it.
but it likely means you don't yet fully understand the question you are trying to ask of the data - so, skip the SQL, phrase the question.
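The doubling effect of such a partial cartesian join can be sketched with toy data (hypothetical numbers, not the poster's real tables): one fact row joining two matching dimension rows repeats the measure before SUM aggregates it.

```python
# one fact row joins two matching PAR_GL rows, so the measure is duplicated
facts = [{"acc": 1000, "bal": 28500}]
par_gl = [{"gl": 102345, "ind": "I"}, {"gl": 102345, "ind": "0"}]

joined = [f["bal"] for f in facts for p in par_gl]  # [28500, 28500]
total = sum(joined)          # 57000 - doubled
fixed = total / len(joined)  # 28500.0 - dividing by the count "fixes" it

print(total, fixed)
```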
query
ravi, January 24, 2006 - 3:14 am UTC
Hi Tom,
I have two tables. The 1st table has 210 columns.
Table structure like this
1st table
sid,record_date,up1,dwn1,up2,dwn2...up50,dwn50.
ex data:
1,24-dec-2005,10,20,30,40...50,50
2,24-dec-2005,20,20,20,20,..20,20
2nd table
name,col1,col2. This table contains some value and corresponding column names from table 1
ex data:
Ravi,up1,dwn1
gvs,up2,dwn2
now given a "name" in table2, I want to find all the corresponding column values in table1 (table2 actually contains column names of table1 with respect to the value).
How do I get the column values from table1 with respect to a value in table2? Say,
for ravi I know the columns are up1, dwn1 and I need to get their values from table1.
Thanks in advance
ravi
2 numbers from 3rd number
info439, January 27, 2006 - 3:13 pm UTC
Tom, If i have a list of number say from 1 to 128
, I want to fetch every 2 numbers after 3rd number in the list. How do i get this?
if the list is 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20..
then
I want to fetch 3&4,7&8,11&12 etc..
Thanks ,
January 28, 2006 - 12:52 pm UTC
ops$tkyte@ORA10GR1> create table t
2 as
3 select level x
4 from dual
5 connect by level <= 20;
Table created.
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select x
2 from (
3 select x, rownum r
4 from (select x
5 from t
6 order by x)
7 )
8 where mod(ceil(r/2),2) = 0
9 /
X
----------
3
4
7
8
11
12
15
16
19
20
10 rows selected.
That assumes that the data in "T" is not really just 1, 2, 3, .... but sort of "random data", for if I just wanted to generate 3/4, 7/8, .... there are better ways.
Thanks Tom,
A reader, January 28, 2006 - 1:52 pm UTC
That is not random data in the table. We have a sequence of codes like that from 1..256, and for some from 1..512, etc. (apart from other columns). For some special conditions, we need to pick them up the way I mentioned earlier.
I think the way you showed me looks good (as long as it gets me what I wanted). But since you kind of mentioned it, what are the other ways to do this :-)?
thank you.
January 29, 2006 - 8:08 am UTC
well, for starters, if they are contiguous 1, 2, 3, 4... you don't need to use rownum - you already have the contiguous numbers, so you could skip the assign-rownum bit.
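The mod(ceil(r/2), 2) = 0 filter from the query above can be checked in a few lines of Python (same logic, just showing which row numbers it keeps):

```python
import math

# keep pairs 3&4, 7&8, 11&12, ... out of row numbers 1..20
picked = [r for r in range(1, 21) if math.ceil(r / 2) % 2 == 0]
print(picked)  # [3, 4, 7, 8, 11, 12, 15, 16, 19, 20]
```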
SQL problem
ramis, February 03, 2006 - 4:19 pm UTC
hello,
I have table T with the following columns, are all number fields, except of course quiz_date
student_id
class_id
quiz_id
marks
quiz_date
now each student belongs to one class_id. each student participates in many quizes taken throughout the month.each quiz has its unique id..
i want to get a very complex analysis where i can find out:
1. marks obtained by each student in quizes in any number of days period (i.e. no. of days can be any figure greater than zero),
2. marks obtained by each student in quizes in a month for any single year period (i.e. Jan, feb,...Dec),
suppose the data is
student_id class_id quiz_id marks quiz_date
1 1 1 50 22 Jan 2004
2 2 2 40 24 Jan 2004
3 1 3 34 24 Jan 2005
1 1 4 10 26 Jan 2005
1 1 5 30 29 Jan 2005
1 1 6 31 Jan 2005
3 2 7 34 02 Feb 2005
3 2 8 33 09 Feb 2005
3 2 9 56 09 Feb 2005
1 1 7 90 26 Feb 2005
2 2 8 0 26 Feb 2005
1 1 8 80 28 Feb 2005
2 2 8 65 28 Feb 2005
1 1 9 31 Mar 2005
2 2 9 11 31 Mar 2005
now i would like to have a query that shows me the most number of marks obtained by each student in any number of days period (in ascending order of date), say 10 days, 15 days etc...
this query should show the following columns in the output
desired output (i.e. most number of marks by each student in a in any 10 day period)
st_id cl_id t_quizes t_marks Hs_marks st_quiz end_quiz start_date end_date
1 1 1 50 50 1 1 22 Jan 2004 31 Jan 2004
1 1 3 40 30 4 6 26 Jan 2005 31 Jan 2005
1 1 23 170 90 7 8 26 Feb 2005 28 Feb 2005
2 so on...
secondly, most number of marks obtained by each student in a month for any single year
this query should show the following columns in the output
desired output (i.e. most number of marks by each student in a in any month)
st_id cl_id t_quizes t_marks Hs_marks st_quiz end_quiz month year
1 1 1 50 50 1 1 Jan 2004
1 1 3 40 30 4 6 Jan 2005
1 1 23 170 90 7 8 Feb 2005
2 so on...
[/pre]
where
st_id = studnet_id
cl_id = class_id
t_quizes = total_quizes
t_marks = total_marks
Hs_marks = highest _quizes
st_quiz = start_quiz no. of each sequence
end_quiz = end_quiz no. of each sequence
month = month in which the marks were obtained
year = year in which the marks were obtained
I would be most grateful if anyone could do it with the shortest possible and fastest query
create table T
(student_id number,
class_id number,
quiz_id number,
marks number
quiz_date date)
INSERT INTO T (1,1, 1, 50, 22 Jan 2004);
INSERT INTO T (2,2, 2, 40, 24 Jan 2004);
INSERT INTO T (3,1, 3, 34, 24 Jan 2005);
INSERT INTO T (1,1, 4, 10, 26 Jan 2005);
INSERT INTO T (1,1, 5, 30, 29 Jan 2005);
INSERT INTO T (1,1, 6, 31 Jan 2005);
INSERT INTO T (3,2, 7,34, 02 Feb 2005);
INSERT INTO T (3,2, 8,33, 09 Feb 2005);
INSERT INTO T (3,2, 9,56, 09 Feb 2005);
INSERT INTO T (1,1, 7, 90, 26 Feb 2005);
INSERT INTO T (2,2, 8, 0, 26 Feb 2005);
INSERT INTO T (1,1, 8, 80, 28 Feb 2005);
INSERT INTO T (2,2, 8, 65, 28 Feb 2005);
INSERT INTO T (1,1, 9, 31 Mar 2005);
INSERT INTO T (2,2, 9, 11, 31 Mar 2005);
sql problem???
ramis, February 04, 2006 - 12:16 am UTC
Hi tom,
regarding my question just above this message: I have got a solution to my 2nd request, and I thought my first request wasn't clear enough, so I have re-asked it, I hope in a better way. Please ignore my last message..
I am sure you would provide me a great solution...
I have table T with the following columns, are all number fields, except of course quiz_date
student_id
class_id
quiz_id
marks
quiz_date
now each student belongs to one class_id. each student participates in many quizes taken throughout the month.each quiz has its unique id..
I require...
1. marks obtained by each student in quizes in any number of days period (i.e. no. of days can be any figure greater than zero),
what I want is that the most number of marks obtained by each student in any number of days (which we will specify in the query) period, say 10 days, 15 days, 365 days etc...
for example this is the sample data for student 1 form the list i provided in my original message.
[pre]
st_id c_id quiz_id marks quiz_date
1 1 1 50 22 Jan 2004
1 1 4 10 26 Jan 2005
1 1 5 30 29 Jan 2005
1 1 6 31 Jan 2005
1 1 7 90 26 Feb 2005
1 1 8 80 28 Feb 2005
1 1 9 31 Mar 2005
[/pre]
now I want to calculate the sum of marks obtained by this student in any period of 'n' days. This 'n' is what we would specify in the query.
now for above data, say, I want to calculate marks obtained by student 1 in any period of 10 days starting from his first quiz, then from 2nd quiz, then from 3rd and so on, in the same result.
his first quiz was on 22 Jan 2004 starting from 22nd' 10 days are completed on 31 jan 2004- this is shown by the first row in my desired output below.
now his next quiz is a year later on 26 Jan 2005, counting ten days from 26th are Feb 4, 2005 - 2nd row of my desired output.
the third quiz on 29 Jan 2005, 10 days are completed on 07 Feb 2005, same is for the third row..
[pre]
desired output
st_id sum_marks start_date end_date
1 50 22 Jan 2004 31 Jan 2004
1 40 26 Jan 2005 04 Feb 2005
1 30 29 Jan 2005 07 Feb 2005
1 31 Jan 2005 09 Feb 2005
1 170 26 Feb 2005 07 Mar 2005
1 80 28 Feb 2005 09 Mar 2005
1 31 Mar 2005 09 Apr 2005
[/pre]
in short, the calculation will be done for 'n' days from each of his quizzes in ascending order
same is the case for the other students..
I hope this would clear my requirement
I would love to have the shortest possible query for this...
create table T
(student_id number,
class_id number,
quiz_id number,
marks number
quiz_date date)
INSERT INTO T (1,1, 1, 50, 22 Jan 2004);
INSERT INTO T (2,2, 2, 40, 24 Jan 2004);
INSERT INTO T (3,1, 3, 34, 24 Jan 2005);
INSERT INTO T (1,1, 4, 10, 26 Jan 2005);
INSERT INTO T (1,1, 5, 30, 29 Jan 2005);
INSERT INTO T (1,1, 6, 31 Jan 2005);
INSERT INTO T (3,2, 7,34, 02 Feb 2005);
INSERT INTO T (3,2, 8,33, 09 Feb 2005);
INSERT INTO T (3,2, 9,56, 09 Feb 2005);
INSERT INTO T (1,1, 7, 90, 26 Feb 2005);
INSERT INTO T (2,2, 8, 0, 26 Feb 2005);
INSERT INTO T (1,1, 8, 80, 28 Feb 2005);
INSERT INTO T (2,2, 8, 65, 28 Feb 2005);
INSERT INTO T (1,1, 9, 31 Mar 2005);
INSERT INTO T (2,2, 9, 11, 31 Mar 2005);
regards
ramis
A reader, February 06, 2006 - 9:32 pm UTC
Tom,
I have a.b tables as below:
SQL> create table a (id number NOT NULL, note1 VARCHAR2(10),note2 VARCHAR2(10),note3 VARCHAR2(10));
Table created.
SQL> insert into a values(1,'ER',NULL,'MOD');
1 row created.
SQL> insert into a values(2,NULL,'DC','HOT');
1 row created.
SQL> insert into a values(3,'SI','BE',NULL);
1 row created.
SQL> insert into a values(4,NULL,NULL,'BAD');
1 row created.
SQL> commit;
Commit complete.
SQL> select * from a;
ID NOTE1 NOTE2 NOTE3
---------- ---------- ---------- ----------
1 ER MOD
2 DC HOT
3 SI BE
4 BAD
SQL> create table b (id number NOT NULL, note1 VARCHAR2(10),note2 VARCHAR2(10),note3 VARCHAR2(10));
Table created.
SQL> insert into b values(1,'ER',NULL,NULL);
1 row created.
SQL> insert into b values(2,'FG','CC',NULL);
1 row created.
SQL> insert into b values(3,NULL,'MX','DEF');
1 row created.
SQL> commit;
Commit complete.
SQL> select * from b;
ID NOTE1 NOTE2 NOTE3
---------- ---------- ---------- ----------
1 ER
2 FG CC
3 MX DEF
Need to update table a by table b with the logics as :
if table a' field is not null, keep it; if table a's field is null, update a by table b's
given data. Also if b's id not in a's id, do not update that row in a table.
So after updating, it looks like:
ID NOTE1 NOTE2 NOTE3
---------- ---------- ---------- ----------
1 ER MOD
2 FG DC HOT
3 SI BE DEF
4 BAD
Can we just use one SQL statement instead of using PL/SQL to get expected results in 9.2.0.4?
Thanks.
February 07, 2006 - 1:29 am UTC
place a primary key on B(ID) instead of NOT NULL and this will do it:
ops$tkyte@ORA9IR2> update ( select a.note1 anote1, a.note2 anote2, a.note3 anote3,
2 b.note1 bnote1, b.note2 bnote2, b.note3 bnote3
3 from a, b
4 where a.id = b.id )
5 set anote1 = nvl(anote1,bnote1),
6 anote2 = nvl(anote2,bnote2),
7 anote3 = nvl(anote3,bnote3)
8 /
3 rows updated.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from a;
ID NOTE1 NOTE2 NOTE3
---------- ---------- ---------- ----------
1 ER MOD
2 FG DC HOT
3 SI BE DEF
4 BAD
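The per-column NVL logic of that updatable join can be sketched in Python (dicts standing in for the rows; the unique dict keys play the role of the primary key on B(ID)):

```python
a = {1: ["ER", None, "MOD"], 2: [None, "DC", "HOT"],
     3: ["SI", "BE", None],  4: [None, None, "BAD"]}
b = {1: ["ER", None, None],  2: ["FG", "CC", None], 3: [None, "MX", "DEF"]}

# only ids present in both tables are updated; nvl(a, b) per column
for id_, b_row in b.items():
    if id_ in a:
        a[id_] = [av if av is not None else bv
                  for av, bv in zip(a[id_], b_row)]

print(a[2])  # ['FG', 'DC', 'HOT']
print(a[4])  # [None, None, 'BAD'] - no matching id in b, untouched
```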
Great answer
A reader, February 07, 2006 - 10:29 am UTC
Tom,
Thanks so much for your great help. Always learn something
new and efficient from your site.
To Ramis SQL problem
Michel Cadot, February 07, 2006 - 4:18 pm UTC
Ramis,
If you get a solution for one of your problems, why don't you post it here then everyone can take advantage of it?
Another point, it will be fair (if you want us to help you) to post a working test case. Yours contains many errors.
Regards
Michel
OK
Prasad, February 08, 2006 - 5:00 am UTC
Hi Tom,
This query retrieves duplicate values from a table..
SQL> desc t
Name                                      Null?    Type
----------------------------------------- -------- --------------
X                                                  NUMBER(38)
SQL> select * from t t1
2 where not exists(
3 select 'x' from t
4 group by x
5 having min(rowid) = t1.rowid)
6 /
Any other way to put this query??
February 08, 2006 - 8:11 am UTC
consecutive days??
Ramis, February 09, 2006 - 12:47 pm UTC
Tom,
sorry for my earlier erroneous DDL data..
I have now corrected it and have re-asked my query.. hope this would solve my problem
create table T
(student_id number,
class_id number,
quiz_id number,
marks number,
quiz_date date)
/
INSERT INTO T VALUES (1,1, 1, 50, '22 Jan 2004');
INSERT INTO T VALUES (2,2, 2, 40, '24 Jan 2004');
INSERT INTO T VALUES (3,1, 3, 34, '24 Jan 2005');
INSERT INTO T VALUES (1,1, 4, 10, '26 Jan 2005');
INSERT INTO T VALUES (1,1, 5, 30, '29 Jan 2005');
INSERT INTO T VALUES (1,1, 6, NULL, '31 Jan 2005');
INSERT INTO T VALUES (3,2, 7,34, '02 Feb 2005');
INSERT INTO T VALUES (3,2, 8,33, '09 Feb 2005');
INSERT INTO T VALUES (3,2, 9,56, '09 Feb 2005');
INSERT INTO T VALUES (1,1, 7, 90, '26 Feb 2005');
INSERT INTO T VALUES (2,2, 8, 0, '26 Feb 2005');
INSERT INTO T VALUES (1,1, 8, 80, '28 Feb 2005');
INSERT INTO T VALUES (2,2, 8, 65, '28 Feb 2005');
INSERT INTO T VALUES (1,1, 9, NULL , '31 Mar 2005');
INSERT INTO T VALUES (2,2, 9, 11, '31 Mar 2005');
SQL> SELECT * FROM T
2
SQL> /
STUDENT_ID CLASS_ID QUIZ_ID MARKS QUIZ_DATE
---------- ---------- ---------- ---------- ----------
1 1 1 50 22-01-2004
2 2 2 40 24-01-2004
3 1 3 34 24-01-2005
1 1 4 10 26-01-2005
1 1 5 30 29-01-2005
1 1 6 31-01-2005
3 2 7 34 02-02-2005
3 2 8 33 09-02-2005
3 2 9 56 09-02-2005
1 1 7 90 26-02-2005
2 2 8 0 26-02-2005
STUDENT_ID CLASS_ID QUIZ_ID MARKS QUIZ_DATE
---------- ---------- ---------- ---------- ----------
1 1 8 80 28-02-2005
2 2 8 65 28-02-2005
1 1 9 31-03-2005
2 2 9 11 31-03-2005
I require
the marks obtained by each student in quizzes in any number-of-days period (i.e. a
no. of days, which we will specify in the query: say 10 days, 15 days, 365 days, etc.)
For example, this is the sample data for student 1 from the list I provided in my
original message above.
SQL> SELECT * FROM T where student_id = 1
2
SQL> /
st_id c_id quiz_id marks quiz_date
1 1 1 50 22 Jan 2004
1 1 4 10 26 Jan 2005
1 1 5 30 29 Jan 2005
1 1 6 31 Jan 2005
1 1 7 90 26 Feb 2005
1 1 8 80 28 Feb 2005
1 1 9 31 Mar 2005
now for above data, say, I want to calculate marks obtained by student 1 in any period of 10 days starting from his first quiz, then from 2nd quiz, then from 3rd and so on, in the same result.
his first quiz was on 22 Jan 2004 starting from 22nd' 10 days are completed on 31 jan 2004- this is shown by the first row in my desired output below.
now his next quiz is a year later on 26 Jan 2005, counting ten days from 26th are Feb 4, 2005 - 2nd row of my desired output.
the third quiz on 29 Jan 2005, 10 days are completed on 07 Feb 2005, same is for the third row..
desired output
st_id sum_marks start_date end_date
1 50 22 Jan 2004 31 Jan 2004
1 40 26 Jan 2005 04 Feb 2005
1 30 29 Jan 2005 07 Feb 2005
1 31 Jan 2005 09 Feb 2005
1 170 26 Feb 2005 07 Mar 2005
1 80 28 Feb 2005 09 Mar 2005
1 31 Mar 2005 09 Apr 2005
same is the case for the other students..I want the answer for all students from one query in a single output..
regards
Ramis
To Ramis
Michel Cadot, February 09, 2006 - 2:47 pm UTC
Well, I still don't see the solution you found to your 2nd query (Feb 03-04 2006). Maybe you don't want to share.
Here's for the first one:
SQL> /
STUDENT_ID SUM_MARKS START_DATE END_DATE
---------- ---------- ----------- -----------
1 50 22 Jan 2004 31 Jan 2004
1 40 26 Jan 2005 04 Feb 2005
1 30 29 Jan 2005 07 Feb 2005
1 31 Jan 2005 09 Feb 2005
1 170 26 Feb 2005 07 Mar 2005
1 80 28 Feb 2005 09 Mar 2005
1 31 Mar 2005 09 Apr 2005
2 40 24 Jan 2004 02 Feb 2004
2 65 26 Feb 2005 07 Mar 2005
2 65 28 Feb 2005 09 Mar 2005
2 11 31 Mar 2005 09 Apr 2005
3 68 24 Jan 2005 02 Feb 2005
3 123 02 Feb 2005 11 Feb 2005
3 89 09 Feb 2005 18 Feb 2005
3 89 09 Feb 2005 18 Feb 2005
15 rows selected.
Regards
Michel
Michel Cadot
ramis, February 09, 2006 - 3:04 pm UTC
Michel Cadot,
thanks for your reply
perhaps you have missed posting your query with the result.. please post it
I apologize for forgetting to post my query for the other problem
here is that
select student_id st_id,
class_id cl_id,
count(*) t_quizes,
sum(nvl(marks,0)) t_marks,
max(nvl(marks,0)) hs_marks,
min(quiz_id) st_quiz,
max(quiz_id) end_quiz,
to_char(quiz_date,'Mon') month,
to_char(quiz_date,'yyyy') year
from t
group by student_id, class_id, to_char(quiz_date,'Mon'), to_char(quiz_date,'yyyy')
order by student_id, year, to_date(to_char(quiz_date,'Mon'),'Mon')
SQL> /
ST_ID CL_ID T_QUIZES T_MARKS HS_MARKS ST_QUIZ END_QUIZ MON YEAR
---------- ---------- ---------- ---------- ---------- ---------- ---------- --- ----
1 1 1 50 50 1 1 Jan 2004
1 1 3 40 30 4 6 Jan 2005
1 1 2 170 90 7 8 Feb 2005
1 1 1 0 0 9 9 Mar 2005
2 2 1 40 40 2 2 Jan 2004
2 2 2 65 65 8 8 Feb 2005
2 2 1 11 11 9 9 Mar 2005
3 1 1 34 34 3 3 Jan 2005
3 2 3 123 56 7 9 Feb 2005
regards
ramis
To Ramis
Michel Cadot, February 09, 2006 - 4:57 pm UTC
Maybe you're right.
SQL> select student_id,
2 sum(marks)
3 over (partition by student_id order by quiz_date
4 range between current row and interval '9' day following)
5 sum_marks,
6 quiz_date start_date,
7 quiz_date + interval '9' day end_date
8 from t
9 order by student_id, quiz_date
10 /
STUDENT_ID SUM_MARKS START_DATE END_DATE
---------- ---------- ----------- -----------
1 50 22 Jan 2004 31 Jan 2004
1 40 26 Jan 2005 04 Feb 2005
1 30 29 Jan 2005 07 Feb 2005
1 31 Jan 2005 09 Feb 2005
1 170 26 Feb 2005 07 Mar 2005
1 80 28 Feb 2005 09 Mar 2005
1 31 Mar 2005 09 Apr 2005
2 40 24 Jan 2004 02 Feb 2004
2 65 26 Feb 2005 07 Mar 2005
2 65 28 Feb 2005 09 Mar 2005
2 11 31 Mar 2005 09 Apr 2005
3 68 24 Jan 2005 02 Feb 2005
3 123 02 Feb 2005 11 Feb 2005
3 89 09 Feb 2005 18 Feb 2005
3 89 09 Feb 2005 18 Feb 2005
15 rows selected.
Regards
Michel
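For anyone puzzling over the RANGE window, here is the same computation in Python on a few of the sample rows (an illustration of the window's semantics, not a replacement for the analytic): for each quiz, sum the marks of the same student's quizzes falling within the nine days following it.

```python
from datetime import date, timedelta

# student 1's quizzes: (student_id, quiz_date, marks)
rows = [
    (1, date(2004, 1, 22), 50),
    (1, date(2005, 1, 26), 10),
    (1, date(2005, 1, 29), 30),
    (1, date(2005, 1, 31), None),
]

def window_sums(rows):
    """sum(marks) over (partition by student_id order by quiz_date
       range between current row and interval '9' day following)"""
    out = []
    for sid, d, _ in rows:
        vals = [m for s, dd, m in rows
                if s == sid and d <= dd <= d + timedelta(days=9)
                and m is not None]
        out.append(sum(vals) if vals else None)  # SUM over all NULLs is NULL
    return out

print(window_sums(rows))  # [50, 40, 30, None] - matches Michel's output
```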
Thanks Michel
ramis, February 09, 2006 - 11:22 pm UTC
Michel, thanks a lot for your query.
regards
Ramis
SQL QUERY
Bhavesh Ghodasara, February 14, 2006 - 8:07 am UTC
hi Tom,
I am creating a report in which I have to fill rows dynamically,
similar to this:
CREATE TABLE t(a NUMBER,b NUMBER)
INSERT INTO t VALUES(1,'');
INSERT INTO t VALUES(2,'');
INSERT INTO t VALUES('',3);
INSERT INTO t VALUES('',4);
INSERT INTO t VALUES('',5);
SQL> select *
2 from t;
A B
--------- ---------
1
2
3
4
5
now i want output like this:
A B
--------- ---------
1 3
2 4
5
it's like, between a and b, only one column has a value while the other is null.
And it doesn't matter how many values a or b have; the number of rows should
be the greater of the counts of a and b. Here a has 2 values while b has 3, so the final
output contains 3 rows.
I want to avoid a self-join and use analytics instead. I tried it using first_value, lag and lead
but failed to do so.
How can I do that?
thanks in advance.
February 14, 2006 - 8:28 am UTC
ops$tkyte@ORA10GR2> select rn, max(a), max(b)
2 from (
3 select a, b,
4 coalesce(
5 case when a is not null then rn1 end ,
6 case when b is not null then rn2 end
7 ) rn
8 from (
9 select a, b,
10 row_number() over (partition by case when b is null then 1 end order by a) rn1,
11 row_number() over (partition by case when a is null then 1 end order by b) rn2
12 from t
13 where a is null OR b is null
14 )
15 )
16 group by rn
17 /
RN MAX(A) MAX(B)
---------- ---------- ----------
1 1 3
2 2 4
3 5
that only works if A is null or B is null - which I presume I can assume.
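The idea behind that query - number the non-null A values and the non-null B values independently, then line them up by that row number - looks like this in Python (lists standing in for the table):

```python
from itertools import zip_longest

rows = [(1, None), (2, None), (None, 3), (None, 4), (None, 5)]

a_vals = [a for a, b in rows if a is not None]  # row_number() over the A's
b_vals = [b for a, b in rows if b is not None]  # row_number() over the B's

# group by rn, max(a), max(b): pair the i-th A with the i-th B
result = list(zip_longest(a_vals, b_vals))
print(result)  # [(1, 3), (2, 4), (None, 5)]
```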
great work
Bhavesh Ghodasara, February 15, 2006 - 8:41 am UTC
great work Tom, thanks very much...
after my post I also succeeded in doing so..
my query is:
SELECT *
FROM
(
SELECT a,lag(b,GREATEST(y.cnta,y.cntb)-1)over(ORDER BY b ) lb
FROM t x,(SELECT COUNT(a)cnta,COUNT(b)cntb FROM t)y)z
WHERE NVL(a,lb)>0
thanks again..
NVLs and Outer Joins
Rish G, July 11, 2006 - 4:50 pm UTC
Tom,
I've been trying to figure out the difference between these 2 queries.
create table recip_elig(
recip_id varchar2(5),
med_stat varchar2(2),
fdos date )
insert into recip_elig
values('10010', 'WI', '10-APR-2006');
insert into recip_elig
values('10020', 'SC', '11-APR-2006');
insert into recip_elig
values('10030', 'L1', '02-APR-2006');
insert into recip_elig
values('10040', '23', '13-MAY-2006');
CREATE TABLE RECIP2082(
recip_id varchar2(5),
elig_ind varchar2(1),
trans_date date);
insert into recip2082
values('10010', 'L', '01-APR-2006');
insert into recip2082
values('10010', 'L', '01-MAY-2006');
insert into recip2082
values('10020', 'U', '01-MAR-2006');
insert into recip2082
values('10020', 'U', '01-APR-2006');
insert into recip2082
values('10020', 'U', '01-FEB-2006');
insert into recip2082
values('10020', 'U', '01-MAY-2006');
insert into recip2082
values('10030', 'A', '01-FEB-2006');
Criteria : I want to assign a code to each recipient in the recip_elig table based on certain conditions.
When the med_stat code for a recipient is either WI or L1, then assign a code = 01
When the med_stat code for a recipient is SC and the corresponding elig_ind in the recip2082 table is U and the FDOS in recip elig table lies within the month following the trans_date in the recip2082 table then assign a code of 02
when the med_stat code is 23 then assign a code of 03.
A simple query and this query that gives me the correct answer
SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
CASE WHEN MED_STAT IN ('WI', 'L1')
THEN '01'
WHEN MED_STAT = 'SC' AND R2.ELIG_IND = 'U' THEN '02'
WHEN MED_STAT = '23' THEN '03'
END AS DUAL_ELIG
FROM RECIP_ELIG RE, RECIP2082 R2
WHERE RE.RECIP_ID = R2.RECIP_ID(+)
AND FDOS BETWEEN last_day(R2.TRANS_DATE(+))+1 AND last_day(add_months(R2.TRANS_DATE(+) ,1))
RECIP ME FDOS E TRANS_DAT DU
----- -- --------- - --------- --
10020 SC 11-APR-06 U 01-MAR-06 02
10030 L1 02-APR-06 01
10040 23 13-MAY-06 03
10010 WI 10-APR-06 01
Later, I had to include another table in this query that had to be outer joined on recip_elig. Since Oracle does not allow a table to be outer joined to more than one table, I tried to work around the above query by using nvl.
Here is the query :
SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
CASE WHEN MED_STAT IN ('WI', 'L1')
THEN '01'
WHEN MED_STAT = 'SC' AND R2.ELIG_IND = 'U' THEN '02'
WHEN MED_STAT = '23' THEN '03'
END AS DUAL_ELIG
FROM RECIP_ELIG RE, RECIP2082 R2
WHERE RE.RECIP_ID = R2.RECIP_ID(+)
AND FDOS BETWEEN last_day(nvl(R2.TRANS_DATE, '01-JAN-1900'))+1 AND last_day(add_months(NVL(R2.TRANS_DATE, '01-JAN-2099') ,1))
RECIP ME FDOS E TRANS_DAT DU
----- -- --------- - --------- --
10020 SC 11-APR-06 U 01-MAR-06 02
10040 23 13-MAY-06 03
I was wondering why the NVL didn't work on this criteria
FDOS BETWEEN last_day(nvl(R2.TRANS_DATE, '01-JAN-1900'))+1 AND last_day(add_months(NVL(R2.TRANS_DATE, '01-JAN-2099'), 1)) whereas the outer join worked? I can't seem to understand the difference.
Recip 10010 matches the 1st case condition and should be assigned the code 01. Why is it bumped off from the result set of the second query?
Thanks once again,
Rish G.
July 12, 2006 - 3:26 pm UTC
why not use the ansi outer join syntax?
PLEASE use to_date with a format - stop relying on implicit conversions, you'll get seriously burned soon.
you missed a plus.
ops$tkyte@ORA10GR2> SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
2 CASE WHEN MED_STAT IN ('WI', 'L1')
3 THEN '01'
4 WHEN MED_STAT = 'SC' AND R2.ELIG_IND = 'U' THEN '02'
5 WHEN MED_STAT = '23' THEN '03'
6 END AS DUAL_ELIG
7 FROM RECIP_ELIG RE, RECIP2082 R2
8 WHERE RE.RECIP_ID = R2.RECIP_ID(+)
9 AND FDOS BETWEEN last_day(nvl(R2.TRANS_DATE(+), '01-JAN-1900'))+1 AND
10 last_day(add_months(NVL(R2.TRANS_DATE(+), '01-JAN-2099') ,1))
11 /
RECIP ME FDOS E TRANS_DAT DU
----- -- --------- - --------- --
10020 SC 11-APR-06 U 01-MAR-06 02
10040 23 13-MAY-06 03
10010 WI 10-APR-06 01
10030 L1 02-APR-06 01
You wanted to look at the date in the outer join part. When you leave off the date predicate entirely, you get:
ops$tkyte@ORA10GR2> SELECT RE.RECIP_ID, RE.MED_STAT, RE.FDOS, R2.ELIG_IND, R2.TRANS_DATE,
2 CASE WHEN MED_STAT IN ('WI', 'L1')
3 THEN '01'
4 WHEN MED_STAT = 'SC' AND R2.ELIG_IND = 'U' THEN '02'
5 WHEN MED_STAT = '23' THEN '03'
6 END AS DUAL_ELIG
7 FROM RECIP_ELIG RE, RECIP2082 R2
8 WHERE RE.RECIP_ID = R2.RECIP_ID(+)
9 /
RECIP ME FDOS E TRANS_DAT DU
----- -- --------- - --------- --
10010 WI 10-APR-06 L 01-APR-06 01
10010 WI 10-APR-06 L 01-MAY-06 01
10020 SC 11-APR-06 U 01-MAR-06 02
10020 SC 11-APR-06 U 01-APR-06 02
10020 SC 11-APR-06 U 01-FEB-06 02
10020 SC 11-APR-06 U 01-MAY-06 02
10030 L1 02-APR-06 A 01-FEB-06 01
10040 23 13-MAY-06 03
8 rows selected.
the rows are not "missing", but the dates are out of range.
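In the ANSI syntax suggested above, the date condition moves into the ON clause and no (+) is needed anywhere; a sketch only, using the tables from the question (and per the earlier advice, any date literals should go through to_date with an explicit format):

```sql
SELECT re.recip_id, re.med_stat, re.fdos, r2.elig_ind, r2.trans_date,
       CASE WHEN re.med_stat IN ('WI', 'L1') THEN '01'
            WHEN re.med_stat = 'SC' AND r2.elig_ind = 'U' THEN '02'
            WHEN re.med_stat = '23' THEN '03'
       END AS dual_elig
  FROM recip_elig re
  LEFT OUTER JOIN recip2082 r2
    ON  re.recip_id = r2.recip_id
    -- the range check is part of the join, so unmatched RECIP_ELIG
    -- rows (like 10010) are still preserved with NULL R2 columns
    AND re.fdos BETWEEN last_day(r2.trans_date) + 1
                    AND last_day(add_months(r2.trans_date, 1))
/
```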
SQL VARIABLE
Henikein, July 12, 2006 - 12:33 pm UTC
Please help me this tom,
I want to populate a variable from a SELECT statement and pass it to a procedure from the SQL prompt.
Let me show it:
sql> select max(ach_id) achid from ad_ach_history;
sql> exec ad_log_min(achid);
The alias achid presumably contains the max ach_id (2287); I want to pass this value into the ad_log_min procedure automatically. This is a SQL script that runs periodically.
The error log says identifier 'ACHID' must be declared.
Please tell me how to store a value in a variable in the SQL prompt session and use it again to pass to a procedure/function/package.
July 12, 2006 - 3:50 pm UTC
ops$tkyte@ORA10GR2> create or replace procedure p( p_input in number )
2 as
3 begin
4 dbms_output.put_line( 'you sent me... ' || p_input );
5 end;
6 /
Procedure created.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> declare
2 l_input number;
3 begin
4 select max(user_id) into l_input
5 from all_users;
6 p(l_input);
7 end;
8 /
you sent me... 172
PL/SQL procedure successfully completed.
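If the value has to live at the SQL prompt itself (rather than inside a single block), a SQL*Plus bind variable would work too; a sketch using the procedure and table names from the question:

```sql
variable achid number

begin
   -- populate the bind variable once...
   select max(ach_id) into :achid from ad_ach_history;
end;
/

-- ...then reuse it in any later call in the same session
exec ad_log_min(:achid)
```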
Spool
A reader, July 13, 2006 - 4:43 am UTC
Great, I figured that out yesterday, but then how do I spool is the question.
I opened an anonymous block and was able to pass the variable as desired, but the spooling of the output is not achieved.
I need to spool the execution of the procedure; how is that possible inside a PL/SQL block?
Thanks Tom, you're the best.
July 13, 2006 - 7:53 am UTC
you do not "spool" inside of an anonymous block (since that is plsql and SPOOL is a sqlplus command)
You spool an anonymous block:
spool foo
begin
code...
end;
/
spool off
help
Paul, July 27, 2006 - 10:14 pm UTC
Hi Tom
Would you please help me.
I want to build a query that shows all tablespace_name, owner of all tablespaces, and table_names of all owners.
The result must thus be something like:
--TABLESPACE_NAME
EXAMPLE
+HR -- schema or owner
.REGIONS --tables of schema
.LOCATIONS
.DEPARTMENTS
.JOBS
...
+OE
.CUSTOMERS
.WAREHOUSES
.ORDER_ITEMS
...
...
USERS
+Gk
.emp --rest of tables
.dept
+JC
... --rest of schemas
.... --rest of tablespaces
I used these queries but I do not know how to relate the cursors:
select distinct(tablespace_name) from all_tables;
select distinct(owner) from all_tables where tablespace_name=<tablespace_name>;
select table_name from all_tables where owner=<owner>;
Please help me.
July 27, 2006 - 10:50 pm UTC
problem: there is no such thing as an "owner of all tablespaces"
do you mean - I want to show
"all tablespaces, the schemas that own stuff in them, the stuff they own"?
that would be a single query against dba_segments.
Query
Paul, July 28, 2006 - 1:03 pm UTC
Hi Tom,
But in dba_segments there is no table_name column.
Yes, I want to show all tablespaces, all schemas of each tablespace, and all tables of each schema, in a procedure or query.
July 28, 2006 - 8:44 pm UTC
but in dba_segments is the segment_name, which is the table name.
dba_segments is what you want to query, it shows all segments, it shows their names, it shows what type they are.
it is in fact what you want
SQL Query
Paul, August 01, 2006 - 11:13 am UTC
Thanks Tom,
But my problem is that I do not know how to display, at the same time, tablespace_name, schemas and table_names in PL/SQL Server Pages or a query. I worked with cursors and arrays but I do not know how to connect the cursors to get that view or result.
For example:
USERS
+ HR
+REGIONS
+LOCATIONS
+DEPARTMENTS
+JOBS
+EMPLOYEES
...
+ JC
+X
+DEPT
....
...
EXAMPLES
+ K
+ CITY
+ COUNTRIES
+ MC
...
...
Would you please help me?
August 01, 2006 - 7:01 pm UTC
what about just a "table" of data....
ops$tkyte%ORA10GR2> select decode( rn1, 1, tablespace_name ) tsname,
2 decode( rn2, 1, owner ) seg_owner,
3 segment_name, blocks
4 from (
5 select tablespace_name, owner, segment_name, blocks,
6 row_number() over (partition by tablespace_name order by owner, segment_name) rn1,
7 row_number() over (partition by tablespace_name, owner order by segment_name) rn2
8 from dba_segments
9 )
10 order by tablespace_name, owner, segment_name
11 /
TSNAME SEG_OWNER SEGMENT_NAME BLOCKS
------------ ------------ ------------------------------ ----------
BIG_TABLE BIG_TABLE BIG_TABLE 147456
BIG_TABLE_PK 22528
BT_OWNER_IDX 23552
BT_OWNER_OBJECT_TYPE_IDX 35840
OPS$TKYTE ABC 8
INDEX2K LOTTOUSER GAME_BET_LOG_IDX002 13312
GBL_DATE_GAME_TRANS_LIC 13312
GBL_LICENSEE_ID 25600
LOTTO LOTTOUSER GAME_BET_LOG 128
SYS_C001088 128
SYSAUX CTXSYS DR$CLASS 8
DR$INDEX 8
DR$INDEX_ERROR 8
DR$INDEX_PARTITION 8
DR$INDEX_SET 8
Interactive
Paul, August 07, 2006 - 9:57 pm UTC
Thanks very much Tom.
But how can I do it with cursors? I want to display it on an HTML page with checkboxes so the user can choose tables of any schema.
I am doing it with cursors, but I do not know how to do recursive cursors.
Would you please help?
August 08, 2006 - 7:35 am UTC
I have no idea what you mean, why do you think you need "recursive cursors" and what does a checkbox have to do with it?
column sort
Anitha, August 08, 2006 - 2:22 am UTC
I have a recordset in a table like this:
category academic_year class section
--------- ------------- ----- ---------
icse 2006-2007 I a
state 2004-2005 II b
state 2006-2007 Lkg b
icse 2006-2007 X a
I want to get the output like this:
class
------
Pre-kg
lkg
ukg
I
II
III
IV
V
VI
VII
VIII
IX
X
I want the solution for this, please advise.
Sincerely,
Anitha
August 08, 2006 - 7:45 am UTC
I have no idea how to turn your inputs into those outputs. I see lots of data in the output that I cannot see where it came from.
that and I have no table creates, no inserts.
Question
Paul, August 08, 2006 - 3:42 pm UTC
I will explain better.
I am building a PSP form that displays a tree of tablespaces, schemas and table names for the user to choose from, but I do not know how to display the tablespaces with cursors, nor how to pass the name of a tablespace to display its own schemas and tables.
August 09, 2006 - 10:04 am UTC
have you considered using HTML/DB - APEX instead of lovingly hand written code?
I would suggest for this - since the results will simply be MASSIVE, that you do not even attempt a "hierarchy"
Simply present a list of tablespaces.
let them click on one.
then show that tablespace segments (owner.segment_name).
What is a better way to write such SQLs?
Parag J Patankar, August 09, 2006 - 7:45 am UTC
Hi Tom,
Following is a SQL example of finding missing data, using mainly the t12, t17 and v11 tables in a 9.2 database:
select a, b, c, d, e, f
from t12, t17, t31, d
where < big join condition between these tables >
and a not in
(
select a
from t12
minus
select b
from t12, t17, t31, d, v11
where < big join condition between various tables >
)
/
t12 and t17 have 45,000 records each, no indexes.
v11 has 32,000 records and no index.
All tables were analyzed using dbms_stats.
Will you please tell me, is there any better way to write this SQL in terms of performance? I am ready to create additional indexes if required.
thanks for educating me.
best regards
pjp
August 09, 2006 - 10:46 am UTC
I don't know what you mean by "missing data" here.
missing data !!!
Parag J Patankar, August 10, 2006 - 12:39 am UTC
Hi Tom,
In my question in this thread "missing data" means missing reference nos. More details are
1/ I have to get references ( field b ) of table v11 by joining various tables to this table
2/ I have to select data from tables t12, t17 where the reference (field a) from table t12 does not exist in table v11
I hope this clarify my question.
thanks & regards
pjp
August 10, 2006 - 9:13 am UTC
not really - I don't know your data model, the implied relations, the actual in place relations and all.
select a, b, c, d, e, f
from t12, t17, t31, d
where < big join condition between these tables >
and a not in
(
select a
from t12
minus
select b
from t12, t17, t31, d, v11
where < big join condition between various tables >
)
/
select ....
from t12, t17, t32, d
where <join>
and not exists (select null
from t12 t12_inner, v11
where t12_inner.a = t12.a
and v11.whatever = whatever.whatever )
seems to be similar. but cannot say for sure - you might look at this "concept" and see if it applies.
Paul, August 24, 2006 - 8:52 pm UTC
Thanks very much Tom
How does oracle evaluate null or is null ?
A reader, September 12, 2006 - 1:35 pm UTC
Tom,
I have a table with 1 million rows, of which 1k rows have a particular timestamp field that is null.
Now if I were to run a query like
update t
set last_tmst = systimestamp
where tas_tmst is null <=== ***
We have a non-unique index on last_tmst, although the explain plan shows a full table scan.
Q) How would Oracle find which rows have these nulls, and where they are located, and then update them?
Can you please explain?
September 12, 2006 - 5:34 pm UTC
you have an index on last_tmst
you are searching on tas_tmst
so - why in the world would you think we would use it?
if you had an index on:
t(tas_tmst,0)
or
t(tas_tmst,ANY_NON_NULLABLE_COLUMN)
then we could use that index to find the nulls.
</code>
http://asktom.oracle.com/Misc/something-about-nothing.html <code>
for more info
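The point of the trick is that entirely-NULL keys are not stored in a B*Tree index, so tacking a constant onto the key guarantees every row gets an index entry. A sketch, using the table and column names from the question:

```sql
-- an index on (tas_tmst) alone would leave rows with a NULL tas_tmst
-- out of the B*Tree; adding the constant 0 makes every key non-null,
-- so all rows are indexed
create index t_tas_tmst_idx on t ( tas_tmst, 0 );

-- the optimizer can now satisfy "is null" with an index range scan
update t
   set last_tmst = systimestamp
 where tas_tmst is null;
```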
no of times a procedure or package has been executed
A reader, September 13, 2006 - 2:16 pm UTC
Is there a way we can find out, from the dynamic views, how many times a package or procedure has been executed?
September 13, 2006 - 3:15 pm UTC
not really.
Top level procedure calls can be seen in v$sql as "begin procedure; end;" type statements, but anything it calls would not be recorded.
auditing may help.
do it yourself auditing would definitely help.
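"Do it yourself" auditing could be as small as a counter table plus one insert at the top of each procedure; a sketch, with all names hypothetical:

```sql
create table proc_audit
( proc_name  varchar2(30),
  run_date   date default sysdate
);

create or replace procedure my_proc
as
begin
   -- record this call; an autonomous transaction could be used if the
   -- audit row must survive a rollback of the caller's work
   insert into proc_audit ( proc_name ) values ( 'MY_PROC' );
   -- ... the procedure's real work goes here ...
end;
/
```

Execution counts would then come from something like: select proc_name, count(*) from proc_audit group by proc_name.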
Brad, October 23, 2006 - 1:06 pm UTC
I have 7 tables. Each table has 30 columns and around 8 million rows. They all have a primary key on 2 fields (line_number, store_number). What's the best way to join all these tables? I tried the union all and max() trick, but it blew out the temp space. Any other ways?
October 23, 2006 - 1:53 pm UTC
there is only one way to join them all - via a SQL statement that "joins"
Not sure what you were using union all and max() or what "trick" was involved - you are not being very clear.
A reader, October 23, 2006 - 4:12 pm UTC
i want to full outer join all tables...
its basically the following
select
line_number,
store_number,
max(sales),
max(stock)
from
(
select
line_number,
store_number,
sales,
null stock
from table_1
union all
select
line_number,
store_number,
null sales,
stock
from table_2
----
---
table3
--
table4
...
table5
table7
)
group by
line_number,
store_number
October 23, 2006 - 5:30 pm UTC
umm, you are not even joining.
Is store number/line number unique in each of these seven tables?
this is not making any sense - I don't see a single join anywhere.
Excellent
Venkat, October 27, 2006 - 7:09 pm UTC
I have this simple query. I am trying to get the column heading as Range1 and Range2 when I run this query. Let us say the birthdate falls in the 40 to 50 range; I would like it to display 40 and 50 without hardcoding the value like "40 and 50". Basically I want to concatenate B.RANGE1 and B.RANGE2 to show the column heading. I tried but was not able to do this. Could you help?
select
(CASE when trunc( months_between(sysdate,birthdate) /12 ) between B.RANGE1 and B.RANGE2 then 1 else 0 END)
"RANGE1 and RANGE2"
FROM PS_PERSONAL_DATA A, PS_AGE_RANGE B
WHERE EMPLID = 'XXXXX'
October 27, 2006 - 8:19 pm UTC
you will have to "hard code" identifiers, column names are identifiers.
in general, every single row in your query could have a different value for its name!!!
To: Venkat
Michel Cadot, October 28, 2006 - 2:12 am UTC
As it is, if you have 10 ranges, your query will display nine lines with "0" and one with "1". If you achieve what you want, you will have something like:
Between 40 and 50
-----------------
0
0
0
0
0
1
0
0
0
0
Which is quite useless.
So what do you really want to achieve?
Michel
Thanks
venkat, October 28, 2006 - 9:39 am UTC
What I am trying: let us say the age range table has these values.
Range1 Range2
10 20
20 30
30 40
40 50
50 60
60 70
70 80
80 90
90 100
If the age is between 10 and 20 the column should display '10 and 20', or between 20 and 30 the column should display '20 and 30', and so on. I do not want to hard code '10 and 20' or '20 and 30' or ...; I want these to come from the columns Range1 and Range2. If I were querying ages between 10 and 20 I could hardcode the column heading as "10 and 20". When I do not know the values in the Range1 and Range2 columns I cannot hardcode, right? The ranges in the table change, so I want to make it dynamic instead of hardcoding. I hope I made my question clear.
Thanks again for your time to answer my questions.
October 28, 2006 - 10:42 am UTC
think about this for a minute please.
You have lots of ranges there.
And how many names can an individual column be known by exactly????????
think about it - really. A column can have a name of "10 and 20" but what happens when we join to your second row eh???
you are asking for something that is not quite possible to do regardless of how you try to do it - a column has A NAME.
To: Venkat
Michel Cadot, October 28, 2006 - 10:54 am UTC
I still don't understand.
What should be the result if you search for 2 EMPLID one in range 40-50 and the other one in range 10-20?
Michel
October 28, 2006 - 11:19 am UTC
(like I said "think about it" :)
Thanks Michel for the response
Venkat, October 29, 2006 - 6:06 pm UTC
So, are you saying it is not possible to display the results like this below, showing the count of employees whose age falls in each range?
10 AND 20 20 AND 30 30 AND 40 40 AND 50 50 AND 60
========= ========= ========= ========= =========
20 20 20 20 20
Thanks again for all your time.
October 29, 2006 - 6:32 pm UTC
that definitely is - but - and this is important to know - IDENTIFIERS ARE STATIC
in order to write that query, you must:
a) query your range table to
b) discover your ranges to
c) discover the NUMBER OF COLUMNS you have to
d) create the query that incorporates your ranges as column names AND selects that many columns.
To: Venkat
Michel Cadot, October 30, 2006 - 2:46 am UTC
To show how to use what Tom said.
You can do it in SQL*Plus, if you know the number of ranges. For instance using good old SCOTT schema, I know there are 5 grades in SALGRADE, I can do:
SCOTT> def nbRange=5
SCOTT> def colWidth=12
SCOTT> col r1 format a&colWidth new_value c1
SCOTT> col r2 format a&colWidth new_value c2
SCOTT> col r3 format a&colWidth new_value c3
SCOTT> col r4 format a&colWidth new_value c4
SCOTT> col r5 format a&colWidth new_value c5
SCOTT> col f new_value f
SCOTT> select max(decode(grade,1,losal||' to '||hisal)) r1,
2 max(decode(grade,2,losal||' to '||hisal)) r2,
3 max(decode(grade,3,losal||' to '||hisal)) r3,
4 max(decode(grade,4,losal||' to '||hisal)) r4,
5 max(decode(grade,5,losal||' to '||hisal)) r5,
6 lpad('0',&colWidth-1,'9') f
7 from salgrade
8 /
R1 R2 R3 R4 R5 F
------------ ------------ ------------ ------------ ------------ -----------
700 to 1200 1201 to 1400 1401 to 2000 2001 to 3000 3001 to 9999 99999999990
1 row selected.
SCOTT> col c1 format &f heading "&c1"
SCOTT> col c2 format &f heading "&c2"
SCOTT> col c3 format &f heading "&c3"
SCOTT> col c4 format &f heading "&c4"
SCOTT> col c5 format &f heading "&c5"
SCOTT> select
2 count(case when grade=1 and sal between losal and hisal then 1 end) c1,
3 count(case when grade=2 and sal between losal and hisal then 1 end) c2,
4 count(case when grade=3 and sal between losal and hisal then 1 end) c3,
5 count(case when grade=4 and sal between losal and hisal then 1 end) c4,
6 count(case when grade=5 and sal between losal and hisal then 1 end) c5
7 from emp, salgrade
8 /
700 to 1200 1201 to 1400 1401 to 2000 2001 to 3000 3001 to 9999
------------ ------------ ------------ ------------ ------------
3 3 2 5 1
1 row selected.
Of course, the first query should be enclosed between "set termout off" and "set termout on" to avoid its result in the report.
Now if you don't know how many grades you have. You can simulate a SQL*Plus report with SQL:
SCOTT> set heading off
SCOTT> col nop noprint
SCOTT> col ul fold_before
SCOTT> with
2 ranges as (
3 select grade, lpad(losal||' to '||hisal, &colWidth) range,
4 count(*) over () nb
5 from salgrade
6 ),
7 data as (
8 select grade, to_char(count(*),&f) cnt
9 from emp, salgrade
10 where sal between losal and hisal
11 group by grade
12 )
13 select 1 nop,
14 replace(sys_connect_by_path(range,'/'),'/',' ') rg,
15 sys_connect_by_path(lpad('=',&colWidth,'='),' ') ul
16 from ranges
17 where level = nb
18 connect by prior grade = grade-1
19 start with grade = 1
20 union all
21 select 2 nop,
22 replace(sys_connect_by_path(nvl(d.cnt,lpad('0',&colwidth)),'/'),
23 '/',' '),
24 ''
25 from ranges r, data d
26 where level = r.nb
27 and d.grade (+) = r.grade
28 connect by prior r.grade = r.grade-1
29 start with r.grade = 1
30 order by 1
31 /
700 to 1200 1201 to 1400 1401 to 2000 2001 to 3000 3001 to 9999
============ ============ ============ ============ ============
3 3 2 5 1
2 rows selected.
I'll let you decipher the query; just execute it step by step to see what happens at each level.
Michel
Very useful
Biswadip Seth, October 30, 2006 - 6:39 am UTC
Hello Tom,
Thanks for the query, it helped me a lot. Now I have a problem. In a table I have data like:
ID Name Dept
-----------------------
1 C EDM
3 A RSS
2 B KOL
4 P AAA
5 V BBB
In a query I want the output in sorted order on all the columns, although there will be no relation between the columns of a row. I want an output like below:
ID Name Dept
-----------------------
1 A AAA
3 B BBB
2 C EDM
4 P KOL
5 V RSS
Is it possible to get such an output? Any help will be highly appreciated.
Thanking you,
Biswadip Seth
October 30, 2006 - 9:15 am UTC
that does not even begin to make sense to me, why would you want to do that? What kind of data could this be that this would make sense? I am very very curious on this one.
If you tell me, I'll show you how. don't forget the create table and insert intos as well!
To: Biswadip Seth
Michel Cadot, October 30, 2006 - 8:42 am UTC
Thanks Michel Again!!
Venkat, October 30, 2006 - 10:33 am UTC
I will try the query you have given. Thanks again for your valuable time.
Thanks Michel
Biswadip, October 30, 2006 - 11:22 pm UTC
Thanks Michel for your reply.
Actually I have a scenario where I need to show all the data always in a sorted manner: the users want to display all the number fields sorted (highest value at the top) and the string fields alphabetically sorted. They were not very particular about the relationship, but they are keen on seeing the values in sorted order always.
So I needed that query.
I was able to write a query... but it's show... if you are able to write something better please reply.
The following query is mine
----------------------------
Select a.id ,b.name ,c.dept
FROM (SElect rownum r ,ID
from (Select ID ,NULL NAME, NULL DEPT
from test_111 A Order By ID
))a,
(SElect rownum r ,NAME
from (Select NULL ID,NAME NAme, null dept
from test_111 A Order by name
))b,
(SElect rownum r ,dept
from (Select NULL ID,NUll NAme, Dept dept
from test_111 A Order by dept
))c
where a.r = b.r
and a.r = c.r
and b.r = c.r
October 31, 2006 - 8:41 am UTC
this is baffling
of what POSSIBLE use could this result be.
Utterly baffling.
select *
from (select id, row_number() over (order by id) rn from t) a,
(select name, row_number() over (order by name) rn from t) b,
(select dept, row_number() over (order by dept) rn from t) c
where a.rn = b.rn and b.rn = c.rn
order by a.rn;
if by "but it's show" you meant "but it's SLOW" - the only answer to that would be "no kidding, think about it"
Make the end user wait a long time
Use inordinate machine resources
to give them a scrambled egg
I cannot imagine the real world business case whereby this even BEGINS to make sense to do.
Again a Tricky one.....Urgent
Biswadip, October 30, 2006 - 11:30 pm UTC
I have a table called test(ID Number, ParentID Number, Name Varchar2(20))
i have the following the data in a hierarchical structure
Root (0)
|
----LP-0 (1)
|
|
----LI-1 (2)
| |
| |--LP-1 (2.1)
| |--LP-2 (2.2)
| |--LP-3 (2.2)
|
|
----LO-1 (3)
| |
| |
| |--LP-4 (3.1)
| |--LP-5 (3.2)
| |--LO-2 (3.3)
| |
| |--LP-6 (3.3.1)
| |--LP-7 (3.3.2)
| |--LO-3 (3.3.3)
| |
| |--LP-8 (3.3.3.1)
| |--LP-9 (3.3.3.2)
|----LP-10 (4)
So The data in the table is looks like
LEVEL ID PARENTID NAME
==============================================
1 1 Root
2 2 1 LP-0
2 3 1 LI-1
3 4 3 LP-1
3 5 3 LP-2
3 6 3 LP-3
2 7 1 LO-1
3 8 7 LP-4
3 9 7 LP-5
3 10 7 LO-2
4 11 10 LP-6
4 12 10 LP-7
4 13 10 LO-3
5 14 13 LP-8
5 15 13 LP-9
2 16 1 LP-10
I need an output with another column, say LevelNumber, whose values are displayed in the tree structure adjacent to each node. Reading the number from the right: the first number before the first dot indicates which child it is of its parent (1st, 2nd, 3rd and so on), and the rest of the number, in concatenated form, indicates again which level and which parent it belongs to.
I have written a query to get the value of LevelNumber as I want it. The query is below:
SELECT m.ID
, m.Name
, m.ParentID ParentID
, NVL(LTRIM(SYS_CONNECT_BY_PATH(m.Rank, '.'), '.'),'0') LevelNumber
FROM (SELECT ID,Name,ParentID,(CASE
WHEN PARENTID IS NULL THEN NULL
ELSE RANK() OVER (PARTITION BY ParentID ORDER BY ID)
END) Rank
FROM test) m
CONNECT BY m.ParentID = PRIOR m.ID
START WITH m.ParentID IS NULL
ORDER BY ID
/
LEVEL ID NAME PARENTID LEVELNUMBER
=====================================================
1 1 Root 0
2 2 LP-0 1 1
2 3 LI-1 1 2
3 4 LP-1 3 2.1
3 5 LP-2 3 2.2
3 6 LP-3 3 2.3
2 7 LO-1 1 3
3 8 LP-4 7 3.1
3 9 LP-5 7 3.2
3 10 LO-2 7 3.3
4 11 LP-6 10 3.3.1
4 12 LP-7 10 3.3.2
4 13 LO-3 10 3.3.3
5 14 LP-8 13 3.3.3.1
5 15 LP-9 13 3.3.3.2
2 16 LP-10 1 4
I am able to get the output I wanted, but the problem is that I am inserting a huge number of rows, more than 10000 at a time, and then generating this LevelNumber, which is inserted into its 4 child tables. I need this LevelNumber in the child tables, so I am running this select statement against all the child tables for the 10000 rows, which in turn makes my application very slow.
Is there any better way to create this kind of number automatically?
Any kind of help is highly appreciated.
Thanks.
With Regards
Biswadip Seth
To: Biswadip Seth
Michel Cadot, October 31, 2006 - 2:17 am UTC
You neither read the first 3 lines of the link I posted nor Tom's followup last sentence:
"no create table
no insert into
no look"
"don't forget the create table and insert intos as well!"
Btw, the query and explanation you gave do not match the result you showed in your previous post, in which ID is not sorted.
Michel
Michel Cadot, October 31, 2006 - 9:10 am UTC
Tom,
Just as an exercise, here's a solution that makes only one table scan:
with data as (
select id, name, dept, rn,
row_number () over (order by id) rid,
row_number () over (order by name) rnam,
row_number () over (order by dept) rdep
from (select rownum rn, t.* from t) )
select case
when rid = rn then id
when rid < rn
then lead(id,decode(sign(rn-rid),-1,0,rn-rid)) over (order by rid)
when rid > rn
then lag(id,decode(sign(rid-rn),-1,0,rid-rn)) over (order by rid)
end id,
case
when rnam = rn then name
when rnam < rn
then lead(name,decode(sign(rn-rnam),-1,0,rn-rnam)) over (order by rnam)
when rnam > rn
then lag(name,decode(sign(rnam-rn),-1,0,rnam-rn)) over (order by rnam)
end name,
case
when rdep = rn then name
when rdep < rn
then lead(dept,decode(sign(rn-rdep),-1,0,rn-rdep)) over (order by rdep)
when rdep > rn
then lag(dept,decode(sign(rdep-rn),-1,0,rdep-rn)) over (order by rdep)
end dept
from data
order by rn
/
With a little time I can find a more complicated way to solve this (funny) question. :)
Michel
Create and Insert Scripts related to "a Tricky one...."
Biswadip, November 02, 2006 - 7:20 am UTC
The scripts are as follows
create table TEST
(
ID NUMBER,
PARENTID NUMBER,
NAME VARCHAR2(20)
)
/
Insert Scripts are as below
---------------------------
insert into TEST (ID, PARENTID, NAME) values (1, null, 'Root');
insert into TEST (ID, PARENTID, NAME) values (2, 1, 'LP-0');
insert into TEST (ID, PARENTID, NAME) values (3, 1, 'LI-1');
insert into TEST (ID, PARENTID, NAME) values (4, 3, 'LP-1');
insert into TEST (ID, PARENTID, NAME) values (5, 3, 'LP-2');
insert into TEST (ID, PARENTID, NAME) values (6, 3, 'LP-3');
insert into TEST (ID, PARENTID, NAME) values (7, 1, 'LO-1');
insert into TEST (ID, PARENTID, NAME) values (8, 7, 'LP-4');
insert into TEST (ID, PARENTID, NAME) values (9, 7, 'LP-5');
insert into TEST (ID, PARENTID, NAME) values (10, 7, 'LO-2');
insert into TEST (ID, PARENTID, NAME) values (11, 10, 'LP-6');
insert into TEST (ID, PARENTID, NAME) values (12, 10, 'LP-7');
insert into TEST (ID, PARENTID, NAME) values (13, 10, 'LO-3');
insert into TEST (ID, PARENTID, NAME) values (14, 13, 'LP-8');
insert into TEST (ID, PARENTID, NAME) values (15, 13, 'LP-9');
insert into TEST (ID, PARENTID, NAME) values (16, 1, 'LP-10');
commit;
Sree, November 02, 2006 - 11:05 am UTC
Hi,
I am currently using Forms 4.5 with Oracle 8.0 as backend. We are migrating all forms & database to 10g. As a part of migration everything is under control except one issue.
In Oracle Forms, the statements below are not working. I can't post the big query, but to give you a hint, here is an example using the EMP table where the code updates in this manner:
update emp
set ename = :emp.ename,
sal = :emp.sal,
category = :emp.category,
where current of emp;
The above statement throws an error when compiled in 10g Developer Suite.
Error 389:
Table, View or Alias name 'EMP' not allowed in this context.
Please throw some light on this, Tom.
November 02, 2006 - 12:10 pm UTC
please utilize support for this one.
(but where current of would normally say "where current of CURSOR_NAME" and it would be strange to have the cursor named after the table - scoping issues no?)
how to pass parameters from forms to stored procedures
KRISHNAKANTH VM, November 14, 2006 - 12:24 pm UTC
For eg. I am having a procedure as follows
CREATE PROCEDURE str_proc
@yr char(4),
@nam char(5),
@cod char(2)
As
SELECT * FROM TABLE1
WHERE YEAR=@yr
AND NAME=@nam
AND CODE=@cod
Now I am going to call this stored procedure in MICROSOFT ACCESS as follows
EXECUTE str_proc '2004','RAM','CA'
It works fine. I am able to see all the rows which satisfies this condition.
But when I am getting the values YEAR,NAME,CODE from FORMS how will call stored procedure. I have tried calling the stored procedure as follows. But it ends in error.
EXECUTE str_proc [forms]![rsm_form].[combo2],[forms]![rsm_form].[text12],[forms]![rsm_form].[combo4]
Could you help me to resolve this issue.
November 15, 2006 - 6:41 am UTC
umm, this is not a microsoft forum, you might want to look at the hostname - it ends in oracle.com.
look for one that ends in microsoft.com for questions about access and related products.
sorting the fields in given order
Krishnakanth VM, November 21, 2006 - 8:45 am UTC
Thank you very much for your response.
create table project(relyear char(4),dotnum char(1),budgetcategory
char(10),projecttype char(8));
insert into project values('2003','0','ecp','appl');
insert into project values('2004','0','ecp','rebl');
insert into project values('2004','0','ecp','appl');
insert into project values('2004','1','base','clnp');
insert into project values('2004','0','ecp','appl');
insert into project values('2003','1','ecp','clnp');
insert into project values('2003','1','base','appl');
insert into project values('2005','1','ecp','appl');
insert into project values('2005','0','cap','clnp');
insert into project values('2003','0','ecp','appl');
insert into project values('2004','0','base','rebl');
insert into project values('2003','1','ecp','rebl');
insert into project values('2003','1','base','appl');
insert into project values('2005','0','base','randr');
insert into project values('2005','0','cap','randr');
insert into project values('2005','0','cap','appl');
I need to sort by the budgetcategory field followed by projecttype.
E.g.:
(1) Budgetcategory: ECP
under ECP projecttype should be sorted as follows
(i) clnp
(ii) rebl
(iii) randr
(iv) appl
(2) Budgetcategory: CAP
under CAP projecttype should be sorted as follows
(i) clnp
(ii) rebl
(iii) randr
(iv) appl
(3) Budgetcategory: BASE
under BASE projecttype should be sorted as follows
(i) clnp
(ii) rebl
(iii) randr
(iv) appl
How can this be done?
Thanks for your help in advance.
November 22, 2006 - 3:33 pm UTC
I probably could, if I understood what you were asking. Do you mean:
order by budgetcategory,
case when budgetcategory = 'ECP'
then decode( projecttype, 'clnp', 1, 'rebl', 2, ... )
when budgetcategory = 'CAP'
then decode( projecttype, 'clnp', 1, 'rebl', 2, .... )
And so on? Use a CASE statement.
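Against the project table posted above (where the data is lower case), that might look like the following sketch; the category order ecp, cap, base and the project type order are taken from the question:

```sql
select relyear, dotnum, budgetcategory, projecttype
  from project
 order by decode( budgetcategory, 'ecp', 1, 'cap', 2, 'base', 3, 4 ),
          decode( projecttype, 'clnp', 1, 'rebl', 2, 'randr', 3, 'appl', 4, 5 )
/
```

The trailing 4 and 5 in each decode act as catch-alls, so any value not in the list sorts last rather than disappearing.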
I have a query but wanted to keep this more dynamic
Kris, December 08, 2006 - 10:50 am UTC
1) I have 3 tables and I am using union all to get the result set.
table x
id numeric
date1 date
date2 date
table y
id numeric
date1 date
date2 date
table Z
id numeric
date1 date
The tables hold the past 5 years of data. The thing is, I need to get results for every quarter.
To get one quarter I wrote something like this:
( SELECT distinct ACCOUNT_ID, SUM(TRADE_QTY) TRADE_QTY
fROM (
select account_id, sum(trade_quantity) trade_qty
from OFFSET_FINAL_TRADES
WHERE buy_trade_time BETWEEN TO_DATE('01/01/2006','MM/DD/YYYY') AND
add_months(TO_DATE('01/01/2006','MM/DD/YYYY'),3)
group by account_id
UNION ALL
select account_id, sum(trade_quantity) trade_qty
from OFFSET_FINAL_TRADES
WHERE sell_trade_time BETWEEN TO_DATE('01/01/2006','MM/DD/YYYY') AND
add_months(TO_DATE('01/01/2006','MM/DD/YYYY'),3)
group by account_id
UNION ALL
select account_id, sum(trade_quantity) trade_qty
from OFFSET_working_TRADES
WHERE buy_trade_time BETWEEN TO_DATE('01/01/2006','MM/DD/YYYY') AND
add_months(TO_DATE('01/01/2006','MM/DD/YYYY'),3)
group by account_id
UNION ALL
select account_id, sum(trade_quantity) trade_qty
from OFFSET_working_TRADES
WHERE sell_trade_time BETWEEN TO_DATE('01/01/2006','MM/DD/YYYY') AND
add_months(TO_DATE('01/01/2006','MM/DD/YYYY'),3)
group by account_id
UNION ALL
select account_id, sum(trade_quantity) trade_qty
from OPEN_WORKING_TRADES
WHERE trade_time BETWEEN TO_DATE('01/01/2006','MM/DD/YYYY') AND add_months(TO_DATE('01/01/2006','MM/DD/YYYY'),3)
group by account_id
)
GROUP BY ACCOUNT_ID
Is there any dynamic way to get the results?
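One way to avoid repeating the query per quarter might be to let TRUNC(date, 'Q') compute the quarter start and group by it; a sketch of the pattern for one branch of the UNION ALL, using the table and column names from the question:

```sql
-- trunc(d, 'Q') maps every date to the first day of its quarter,
-- so one group-by covers all quarters in the 5-year range at once
select account_id,
       trunc( buy_trade_time, 'Q' ) quarter_start,
       sum( trade_quantity )        trade_qty
  from offset_final_trades
 group by account_id, trunc( buy_trade_time, 'Q' )
/
```

The same pattern would repeat for the sell_trade_time and trade_time branches before summing across the UNION ALL.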
2) I have another table..
id number
date1 date
date2 date
value1 number
value2 number
string varchar2
The values are something like:
id date1 date2 value1 value2 string
1 10-01-2004 10-02-2004 20 30 Q1
2 10-03-2004 10-04-2004 20 40 Q1
3 10-04-2004 10-06-2004 40 20 Q2
Now my result should be something like:
id date1 date2 (are common for all the qurter wise)
Q1 Q2
id date1 date2 value1 ||-|| value value ||-||value2
1 10-01-2004 10-02-2004 20 30 NULL
2 10-03-2004 10-04-2004 20 40 NULL
3 10-04-2004 10-06-2004 null 40 20
Please help on this one ....
3
Kris
create columns from rows
Anh Tuan, February 09, 2007 - 2:27 am UTC
I have data in a table:
Ma_SV Date Mark HoiDong
sv01 01/01/2006 7 hd01
sv01 01/06/2006 8 hd02
sv01 01/12/2006 9 hd03
sv02 01/01/2006 8 hd01
I want it to become:
Ma_SV Mark1 Mark2 Mark3
sv01 7 8 9
February 09, 2007 - 8:15 am UTC
search this site for the word
pivot
no create table
no insert intos
never never will there be a query produced....
alternate to union all
Sanji, February 09, 2007 - 4:23 pm UTC
Tom,
I'm trying to tune this query
select v.vendor_group, v.vendor, ' ' location_code, v.vendor_sname,
a.addr1, a.city_addr5, a.state_prov, a.postal_code,
v.phone_num, v.fax_num, v.tax_id, v.vat_reg_nbr,
v.create_date, m.dt_chg
from apvenmast v, apvenaddr a, apvenmast3 m
where v.vendor_group = a.vendor_group
and v.vendor = a.vendor
and v.vendor_group = m.vendor_group (+)
and v.vendor = m.vendor (+)
and v.vendor_status = 'A'
and a.location_code = ' '
and a.cur_addr = 'Y'
UNION
select v.vendor_group, v.vendor, l.location_code, v.vendor_sname,
a.addr1, a.city_addr5, a.state_prov, a.postal_code,
l.phone_num, l.fax_num, v.tax_id, l.vat_reg_nbr,
v.create_date, m.dt_chg
from apvenmast v, apvenloc l, apvenaddr a,
apvenmast3 m
where v.vendor_group = l.vendor_group
and v.vendor = l.vendor
and l.vendor_group = a.vendor_group
and l.vendor = a.vendor
and l.location_code = a.location_code
and v.vendor_group = m.vendor_group (+)
and v.vendor = m.vendor (+)
and v.vendor_status = 'A'
and l.active_status = 'A'
and a.cur_addr = 'Y'
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24982 Card=48191 Bytes=12176615)
1 0 SORT (UNIQUE) (Cost=24982 Card=48191 Bytes=12176615)
2 1 UNION-ALL
3 2 HASH JOIN (OUTER) (Cost=6216 Card=714 Bytes=164934)
4 3 NESTED LOOPS (Cost=5495 Card=714 Bytes=148512)
5 4 TABLE ACCESS (FULL) OF 'APVENADDR' (Cost=2639 Card=1428 Bytes=122808)
6 4 TABLE ACCESS (BY INDEX ROWID) OF 'APVENMAST' (Cost=2 Card=1 Bytes=122)
7 6 INDEX (UNIQUE SCAN) OF 'VENSET1' (UNIQUE) (Cost=1 Card=2)
8 3 TABLE ACCESS (FULL) OF 'APVENMAST3' (Cost=715 Card=59344 Bytes=1364912)
9 2 HASH JOIN (Cost=15159 Card=47477 Bytes=12011681)
10 9 HASH JOIN (OUTER) (Cost=12100 Card=18960 Bytes=3166320)
11 10 HASH JOIN (Cost=11288 Card=18960 Bytes=2730240)
12 11 TABLE ACCESS (FULL) OF 'APVENLOC' (Cost=1220 Card=37921 Bytes=2540707)
13 11 TABLE ACCESS (FULL) OF 'APVENMAST' (Cost=9753 Card=151979 Bytes=11702383)
14 10 TABLE ACCESS (FULL) OF 'APVENMAST3' (Cost=715 Card=59344 Bytes=1364912)
15 9 TABLE ACCESS (FULL) OF 'APVENADDR' (Cost=2639 Card=189908 Bytes=16332088)
I tried UNION ALL and the result set was very much the same.
As is apparent, the tables are scanned twice: once without apvenloc and once with it.
How can this query be written more efficiently?
Thanks
Sanji
Developer
Pat, February 15, 2007 - 12:53 pm UTC
Hello Tom,
I have 3 tables A,B & C. Table C has foreign key references to A & B.
I need cartesian product of A & B plus the values from C.
I wrote a SQL statement with scalar subselects; is there any other way to get the same result without using subselects?
CREATE TABLE A
(AID NUMBER,ANM VARCHAR2(20));
CREATE TABLE B
(BID NUMBER,BNM VARCHAR2(20));
CREATE TABLE C
(AID NUMBER,BID NUMBER,CNM VARCHAR2(20),
CDT DATE,CTYPE CHAR(1));
INSERT INTO A VALUES(11,'A1 NAME');
INSERT INTO A VALUES(12,'A2 NAME');
INSERT INTO A VALUES(13,'A3 NAME');
INSERT INTO B VALUES(21,'B1 NAME');
INSERT INTO B VALUES(22,'B2 NAME');
INSERT INTO B VALUES(23,'B3 NAME');
INSERT INTO C VALUES(11,21,'C1121 NAME',SYSDATE,'A');
INSERT INTO C VALUES(11,22,'C1122 NAME',SYSDATE,'A');
INSERT INTO C VALUES(12,21,'C1221 NAME',SYSDATE,'A');
SELECT * FROM A;
SELECT * FROM B;
SELECT * FROM C;
AID ANM
11 A1 NAME
12 A2 NAME
13 A3 NAME
BID BNM
21 B1 NAME
22 B2 NAME
23 B3 NAME
AID BID CNM CDT CTYPE
11 21 C1121 NAME 2/15/2007 A
11 22 C1122 NAME 2/15/2007 A
12 21 C1221 NAME 2/15/2007 A
SQL for the output.
SELECT A.AID
,A.ANM
,B.BID
,B.BNM
,(SELECT C.CNM
FROM C
WHERE A.AID = C.AID
AND B.BID = C.BID) CNM
,(SELECT C.CDT
FROM C
WHERE A.AID = C.AID
AND B.BID = C.BID) CDT
,(SELECT C.CTYPE
FROM C
WHERE A.AID = C.AID
AND B.BID = C.BID) CTYPE
FROM A
,B
ORDER BY A.AID
,B.BID
output i need
AID ANM BID BNM CNM CDT CTYPE
11 A1 NAME 21 B1 NAME C1121 NAME 2/15/2007 A
11 A1 NAME 22 B2 NAME C1122 NAME 2/15/2007 A
11 A1 NAME 23 B3 NAME
12 A2 NAME 21 B1 NAME C1221 NAME 2/15/2007 A
12 A2 NAME 22 B2 NAME
12 A2 NAME 23 B3 NAME
13 A3 NAME 21 B1 NAME
13 A3 NAME 22 B2 NAME
13 A3 NAME 23 B3 NAME
Thanks
February 16, 2007 - 12:58 pm UTC
why didn't you just join?
select ...
from a, b, c
where a.aid = c.aid and b.bid = c.bid
/
outer join AxB to C if need be
select ....
from (select ... from a, b) X, c
where x.aid = c.aid(+) and x.bid = c.bid(+)
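Filled in against the sample tables from the question, that outer-join form might look like this (the column list is my guess at what is wanted):

```sql
select x.aid, x.anm, x.bid, x.bnm, c.cnm, c.cdt, c.ctype
  from (select a.aid, a.anm, b.bid, b.bnm
          from a, b) x, c
 where x.aid = c.aid(+)
   and x.bid = c.bid(+)
 order by x.aid, x.bid;
```

This returns all nine AxB combinations, with the C columns null where no matching C row exists - the same result as the three scalar subselects, but touching C only once.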
Union all
Sanji, February 16, 2007 - 1:22 pm UTC
Tom,
I am looking for an alternative to UNION ALL in a SQL statement (as mentioned in my earlier post above).
I am not able to locate an example where you have explained any such alternative. Could you point out where I can look for this subject?
Thanks
Sanji
February 17, 2007 - 11:16 am UTC
i just saw a big query, with no information about how the tables really relate or what question was being answered.
there is no generic "this is how to avoid a union" answer out there - anywhere. You need to phrase the requirements of the query and explain the relations of the tables (1 to 1, 1 to many, optional, whatever).
big query with long names - need small example to work with.
Substr
yoav, March 11, 2007 - 8:53 am UTC
Version: 10gR2
select '+DATA/psdwh/datafile/dw_crm_tts.335.615558725' from dual
I need to substring the string so the result is the value between the
last '/' and the first '.' after it, e.g. dw_crm_tts.
Could you please show how?
Is it possible via a regular expression?
March 12, 2007 - 8:16 pm UTC
this presumes a '/' and a '.' after that last slash, exist. if not, you need to modify. yes, regular expressions can do this too...
ops$tkyte%ORA10GR2> select x, in_str, sub_str,
2 substr( sub_str, 1, instr(sub_str,'.')-1 )
3 from (
4 select x, instr( x, '/', -1, 1 ) in_str, substr(x, instr( x, '/', -1, 1 )+1 ) sub_str
5 from (select '+DATA/psdwh/datafile/dw_crm_tts.335.615558725' x from dual)
6 )
7 /
X IN_STR SUB_STR SUBSTR(SUB_STR,1,INSTR(SUB_STR,'.')-1)
--------------------------------------------- ---------- --------------------------------------------- ---------------------------------------------
+DATA/psdwh/datafile/dw_crm_tts.335.615558725 21 dw_crm_tts.335.615558725 dw_crm_tts
ops$tkyte%ORA10GR2> select x, regexp_replace( regexp_replace( x, '.*/', '' ), '\..*', '' ) r
2 from (select '+DATA/psdwh/datafile/dw_crm_tts.335.615558725' x from dual)
3 /
X R
------------------------------ ------------------------------
+DATA/psdwh/datafile/dw_crm_tt dw_crm_tts
s.335.615558725
Order by decode with negative and positive numbers
DW, March 19, 2007 - 6:34 pm UTC
Earlier in this thread, you were describing using "order by decode (...)" to allow dynamic sorts, and you said: "If your numbers go negative -- you'll have to deal with that somehow as well." That is my situation: one of the sort options is an amount that could be negative or positive. What to_char format can I use to ensure that -25.50 precedes -1.50, which will precede 0.75, which will precede 15.50, so I get a list in this order:
-25.50
-1.50
0.75
15.50
My efforts so far seem unnecessarily complicated. I must be missing something obvious.
March 19, 2007 - 7:46 pm UTC
probably the 'easiest' way would be - well - dynamic sql (open ref_cursor using SQL_STMT)
but if you want static sql, add a column for each NUMBER type, like this:
ops$tkyte%ORA10GR2> create table t
2 as
3 select all_users.*, cast( dbms_random.value( -10, 10 ) as number(5,2) ) data
4 from all_users
5 /
Table created.
ops$tkyte%ORA10GR2> variable x number
ops$tkyte%ORA10GR2> exec :x := 4
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> select *
2 from t
3 order by decode( :x,
4 1, username,
5 2, case when user_id >= 0 then to_char(user_id,'999999') end,
6 3, to_char(created,'yyyymmddhh24miss'),
7 4, case when data >= 0 then to_char(data,'999.99') end) ASC NULLS FIRST,
8 decode( :x,
9 2, case when user_id < 0 then user_id end ) ASC,
10 decode( :x,
11 4, case when data < 0 then data end ) ASC
12 /
USERNAME USER_ID CREATED DATA
------------------------------ ---------- --------- ----------
DBSNMP 24 30-JUN-05 -9.95
BIG_TABLE 64 17-AUG-06 -9.94
RK 72 27-AUG-06 -9.81
OLAPSYS 47 30-JUN-05 -9.68
BI 60 31-JUL-06 -9.29
A 114 08-JAN-07 -9.08
ORDPLUGINS 44 30-JUN-05 -8.6
MDSYS 46 30-JUN-05 -6.71
OE 56 31-JUL-06 -6.18
WMSYS 25 30-JUN-05 -5.82
MDDATA 50 30-JUN-05 -5.55
SI_INFORMTN_SCHEMA 45 30-JUN-05 -5.08
SYSMAN 51 30-JUN-05 -4.63
DIP 19 30-JUN-05 -4.61
IX 57 31-JUL-06 -3.87
B 115 08-JAN-07 -3.3
SYS 0 30-JUN-05 -2.88
U1 94 21-DEC-06 -2.22
U2 95 21-DEC-06 -1.56
XDB 38 30-JUN-05 -1.41
OUTLN 11 30-JUN-05 -1.04
SCOTT 54 30-JUN-05 -.87
SH 58 31-JUL-06 .4
TSMSYS 21 30-JUN-05 1.02
PERFSTAT 118 17-MAR-07 1.97
MGMT_VIEW 53 30-JUN-05 2.5
HR 55 31-JUL-06 3.37
SYSTEM 5 30-JUN-05 4.41
EXFSYS 34 30-JUN-05 4.53
ANONYMOUS 39 30-JUN-05 5.12
DMSYS 35 30-JUN-05 5.95
ORDSYS 43 30-JUN-05 7.04
OPS$TKYTE 119 19-MAR-07 7.41
CTXSYS 36 30-JUN-05 8.37
PM 59 31-JUL-06 8.37
35 rows selected.
Excellent - worked great
DW, March 19, 2007 - 8:38 pm UTC
Thanks for the fast reply. Works great. Allowed elimination of a huge ugly concatenated dynamic sql statement--executed often, no bind variables. Arrgghh!! Thanks so much for your help. You and this site are awesome.
Assignment-Resource Query
Jitender Arora, March 21, 2007 - 12:58 pm UTC
Hi Tom,
I have a table :
Name Null? Type
----------------------------------- -------- ------------------------
ASSIGNMENT_NUM NUMBER
RES_ID VARCHAR2(2)
Which has the data as follows :
SQL> select * from assignment;
ASSIGNMENT_NUM RE
-------------- --
1 AJ
1 JA
2 AJ
2 JA
3 AJ
3 JA
4 AJ
5 JA
6 JA
9 rows selected.
My objective is to write a query to get the following results :
ASSIGNMENT_NUM RE OT
-------------- -- --
1 AJ JA
2 AJ JA
3 AJ JA
4 AJ
5 JA
6 JA
6 rows selected.
What I have so far is :
select ASSIGNMENT_NUM, RES_ID,
decode( RES_ID, max(RES_ID) over (partition by ASSIGNMENT_NUM) ,
min(RES_ID) over (partition by ASSIGNMENT_NUM) ,
max(RES_ID) over (partition by ASSIGNMENT_NUM) ) Other_Res
from assignment
ASSIGNMENT_NUM RE OT
-------------- -- --
1 AJ JA
1 JA AJ
2 AJ JA
2 JA AJ
3 AJ JA
3 JA AJ
4 AJ AJ
5 JA JA
6 JA JA
9 rows selected.
Is it possible to obtain the kind of results I'm looking for...?
Your help would be greatly appreciated.
Regards
JA
March 21, 2007 - 1:30 pm UTC
create table + inserts are mandatory if you want a query.
Also,
a) are there only two per assignment num
b) is the min and max what you really want
this is a simple pivot (you can search for pivot on this site) and we can achieve that using row_number() over (partition by assignment_num order by res_id)
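A sketch of the row_number() pivot Tom describes, assuming at most two resources per assignment (untested here, since no create table/inserts were supplied):

```sql
select assignment_num,
       max(decode(rn, 1, res_id)) res_id,
       max(decode(rn, 2, res_id)) other_res
  from (select assignment_num, res_id,
               row_number() over (partition by assignment_num
                                  order by res_id) rn
          from assignment)
 group by assignment_num
 order by assignment_num;
```

Each group collapses to one row: the first resource (by res_id) lands in RES_ID, the second, if any, in OTHER_RES.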
Simple Query
Deepak, April 05, 2007 - 1:29 pm UTC
Hi Tom,
Have two simple queries on SCOTT.EMP table:
1>
select count(*) from emp where empno=9999;
OUTPUT: 0
2>
select count(*) from emp where empno=9999
group by empno;
OUTPUT: NO ROWS RETURNED
Question: What makes Oracle return different results in the above scenario?
I know this is a very silly question but I need your help to know the concept.
Please explain.
April 05, 2007 - 1:52 pm UTC
if there is nothing to group by - you always get a row - that is the "definition of the way it works". An aggregate without a group by always returns at least one row and at most one row.
An aggregate with a group by returns one row PER GROUP. you had no groups, hence no rows.
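The two rules are easy to see against DUAL with a predicate that matches nothing:

```sql
-- aggregate, no GROUP BY: always exactly one row
select count(*) from dual where 1 = 0;
--   COUNT(*)
-- ----------
--          0

-- aggregate with GROUP BY: one row per group - and here there are no groups
select count(*) from dual where 1 = 0 group by dummy;
-- no rows selected
```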
can we construct this query
A reader, April 10, 2007 - 9:12 am UTC
Dear Tom,
Thanks very much for all the help we can find on this site. I have the following question:
create table t (id number, x2 varchar2(3), x3 varchar2(5), x4 number, x5 number);
insert into t values (1, 'SPI', '01000',15,25);
insert into t values (1,'SPA', '01000',15,25);
insert into t values (2, 'SPI', '02000',20,30);
insert into t values (2, 'SPI','03000',20,30);
insert into t values (3, 'SPI','04000',11,22);
insert into t values (3, 'SPI','04000',13,22);
commit;
SQL> select * from t;
ID X2 X3 X4 X5
--------- --- ----- --------- ---------
1 SPI 01000 15 25
1 SPA 01000 15 25
2 SPI 02000 20 30
2 SPI 03000 20 30
3 SPI 04000 11 22
3 SPI 04000 13 22
Then I am doing the following selects to get the differences (column by column):
select aa.id, aa.x2
from t aa, t bb
where aa.id = bb.id
and aa.x2 <> bb.x2;
ID X2
--------- ---
1 SPA
1 SPI
select aa.id, aa.x3
from t aa, t bb
where aa.id = bb.id
and aa.x3 <> bb.x3;
ID X3
-------- -----
2 03000
2 02000
select aa.id, aa.x4
from t aa, t bb
where aa.id = bb.id
and aa.x4 <> bb.x4;
ID X4
-------- ---------
3 13
3 11
I would like to have a single select giving me results like this:
ID X2 X3 X4
--------- --- ----- ----
1 SPA
1 SPI
2 03000
2 02000
3 13
3 11
Is this possible?
This is just an example, my real production table contains more than 70 columns to compare
Thanks
April 10, 2007 - 11:14 am UTC
in real life are there always AT MOST two rows?
or could there be many many rows for each ID
A reader, April 11, 2007 - 2:44 am UTC
Tom,
In my real example there is only two rows by id. As shown in the simple example I have given. There is no id with more than two rows. All ids contains exactly two rows.
Thanks
April 11, 2007 - 11:23 am UTC
I could get your exact output (formatting-wise), but I think this should more than suffice for your report. It shows, id by id, the columns that differ - and if none do, the row is not displayed (I added id=4).
ops$tkyte%ORA10GR2> select * from t;
ID X2 X3 X4 X5
---------- --- ----- ---------- ----------
1 SPI 01000 15 25
1 SPA 01000 15 25
2 SPI 02000 20 30
2 SPI 03000 20 30
3 SPI 04000 11 22
3 SPI 04000 13 22
4 SPI 04000 13 22
4 SPI 04000 13 22
8 rows selected.
ops$tkyte%ORA10GR2> select id,
2 decode( x2, other_x2, null, x2 ) x2_1, decode( x2, other_x2, null, other_x2 ) x2_2,
3 decode( x3, other_x3, null, x3 ) x3_1, decode( x3, other_x3, null, other_x3 ) x3_2,
4 decode( x4, other_x4, to_number(null), x4 ) x4_1, decode( x4, other_x4, to_number(null), other_x4 ) x4_2,
5 decode( x5, other_x5, to_number(null), x5 ) x5_1, decode( x5, other_x5, to_number(null), other_x5 ) x5_2
6 from (
7 select id,
8 row_number() over (partition by id order by rowid) rn,
9 x2, lag(x2) over (partition by id order by rowid) other_x2,
10 x3, lag(x3) over (partition by id order by rowid) other_x3,
11 x4, lag(x4) over (partition by id order by rowid) other_x4,
12 x5, lag(x5) over (partition by id order by rowid) other_x5
13 from t
14 )
15 where rn = 2
16 and decode( x2, other_x2, 0, 1 ) + decode( x3, other_x3, 0, 1 ) + decode( x4, other_x4, 0, 1 ) + decode( x5, other_x5, 0, 1 ) > 0
17 /
ID X2_ X2_ X3_1 X3_2 X4_1 X4_2 X5_1 X5_2
---------- --- --- ----- ----- ----- ----- ----- -----
1 SPA SPI
2 03000 02000
3 13 11
Excellent
A reader, April 12, 2007 - 7:51 am UTC
Tom,
This is excellent. I was very, very happy to see your answer. I would just like to know how I can reach such a level of SQL mastery. I have a PhD in Fluid Mechanics and had never studied SQL before. I have just got your books and I am using them in my new job (Oracle - yes, don't be surprised: from Fluid Mechanics to Oracle). I would appreciate any advice that might make me good at SQL in the near future.
Thanks a lot
April 13, 2007 - 12:00 pm UTC
experience - it is the only way I know, you gather a bag of tricks over time as you master and use the language.
how to do a simple calculation?
wawan, April 12, 2007 - 9:39 pm UTC
Dear Tom,
In the above query, with the result
ID X2_ X2_ X3_1 X3_2 X4_1 X4_2 X5_1 X5_2
---------- --- --- ----- ----- ----- ----- ----- -----
1 SPA SPI
2 03000 02000
3 13 11
suppose we want to multiply one column by another
column, e.g. X3_1 * X3_2.
Should we write out again
"decode( x3, other_x3, null, x3 )*decode( x3, other_x3, null, other_x3 )"
in the query?
Or is there a shorter method, maybe just X3_1*X3_2?
I have query like this :
apps@FINTST> l
1 select suppl_no,store_no,sum(TOTAL_NET_AMOUNT) AAA
2 ,ratio_to_report(sum(TOTAL_NET_AMOUNT)) over()*100 prctg
3 ,ratio_to_report(
4 case when sum(TOTAL_NET_AMOUNT)>0 then sum(TOTAL_NET_AMOUNT)
5 end) over()*100 prctg2
6 from (
7 select suppl_no,store_no,TOTAL_NET_AMOUNT
8 from fin_orders@dba_ho
9 where deldate between '01-MAR-2007' and '31-MAR-2007'
10 and suppl_no=21424 and status in(6,7,8,9)
11 union all
12 select suppl_no,store_no,TOTAL_NET_AMOUNT
13 from orders@dba_ho
14 where deldate between '01-MAR-2007' and '31-MAR-2007'
15 and suppl_no=21424 and ord_status in(6,7,8)
16 )
17* group by suppl_no,store_no
apps@FINTST> /
SUPPL_NO STORE_NO AAA PRCTG PRCTG2
---------- ---------- ---------- ---------- ----------
21424 1 8260000 4.16674549 4.14814815
21424 3 28025000 14.1371722 14.0740741
21424 4 81125000 40.9233932 40.7407407
21424 5 -298750 -.15070402
21424 6 43365000 21.8754138 21.7777778
21424 7 -590000 -.29762468
21424 8 17700000 8.92874033 8.88888889
21424 18 20650000 10.4168637 10.3703704
8 rows selected.
How do I add two columns to calculate AAA*PRCTG and
AAA*PRCTG2?
April 13, 2007 - 12:59 pm UTC
you should code what logically fits your needs?? I don't know what you need.
Can analytics be used here?
C Smith, April 17, 2007 - 11:22 am UTC
I have these tables:
CREATE TABLE t
(date_value DATE);
CREATE TABLE item_stock
(item VARCHAR2(10),
change_date DATE,
stock NUMBER);
T contains every date in a specific month (eg 01-APR-2007, 02-APR-2007...30-APR-2007), and will change periodically to contain dates from a new month
ITEM_STOCK contains a history of stock changes for a given item over time.
Sample data for Item_Stock:
Item Change_Date Stock
---- -------- -----
A 30-APR-2007 5
A 22-APR-2007 0
A 18-APR-2007 4
A 06-APR-2007 2
A 02-APR-2007 0
A 31-MAR-2007 1
I need a way in SQL to count the period in time represented by table T for which the stock is 0. For example, for the above sample data the result should be 12, because the period from 30-APR-2007 to 22-APR-2007 (non-inclusive of the lower date) contains 8 days where the stock is 0, and the period from 06-APR-07 to 02-APR-2007 (non-inclusive of the lower date) contains 4 days where the stock is 0.
I think this would do it:
SELECT COUNT(*)
FROM (SELECT date_value
FROM t
WHERE EXISTS
(SELECT stock FROM
(SELECT stock
FROM (SELECT change_date, stock
FROM item_stock
WHERE item = 'A'
AND change_date < t.date_value
ORDER BY change_date DESC)
WHERE rownum = 1)
WHERE stock = 0)
Except that t.date_value shows up as an invalid column, because the nesting level of the correlated subquery is more than one.
What can I do instead?
This is Oracle 8i, by the way.
Is there a way to do this in pure SQL that avoids having the table T at all, i.e. we just use the start date 01-APR-2007 and end date 30-APR-2007? I'm guessing that would be too difficult in SQL, however, and we would need PL/SQL?
Thanks for your help...
April 18, 2007 - 10:32 am UTC
ops$tkyte%ORA10GR2> variable b varchar2(20)
ops$tkyte%ORA10GR2> variable e varchar2(20)
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec :b := '01-apr-2007'; :e := '30-apr-2007';
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> select * from item_stock order by 1,2;
ITEM CHANGE_DA STOCK
---------- --------- ----------
A 31-MAR-07 1
A 02-APR-07 0
A 06-APR-07 2
A 18-APR-07 4
A 22-APR-07 0
A 30-APR-07 5
b 31-MAR-07 1
b 01-APR-07 0
b 10-APR-07 2
b 18-APR-07 4
b 20-APR-07 0
b 28-APR-07 5
12 rows selected.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select item, b, e, e-b+1 days, stock
2 from
3 ( select item,
4 greatest( change_date, to_date(:b,'dd-mon-yyyy') ) b,
5 nvl( lead(change_date-1) over
6 (partition by item order by change_date),
7 to_date(:e,'dd-mon-yyyy')) e,
8 stock
9 from
10 (
11 select *
12 from item_stock
13 where change_date between
14 to_date(:b,'dd-mon-yyyy') and to_date(:e,'dd-mon-yyyy')
15 union
16 select *
17 from item_stock
18 where change_date = (select max(change_date)
19 from item_stock
20 where change_date <= to_date(:b,'dd-mon-yyyy'))
21 )
22 )
23 order by 1, 2
24 /
ITEM B E DAYS STOCK
---------- --------- --------- ---------- ----------
A 02-APR-07 05-APR-07 4 0
A 06-APR-07 17-APR-07 12 2
A 18-APR-07 21-APR-07 4 4
A 22-APR-07 29-APR-07 8 0
A 30-APR-07 30-APR-07 1 5
b 01-APR-07 09-APR-07 9 0
b 10-APR-07 17-APR-07 8 2
b 18-APR-07 19-APR-07 2 4
b 20-APR-07 27-APR-07 8 0
b 28-APR-07 30-APR-07 3 5
10 rows selected.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select item, sum(e-b+1) days
2 from
3 ( select item,
4 greatest( change_date, to_date(:b,'dd-mon-yyyy') ) b,
5 nvl( lead(change_date-1) over
6 (partition by item order by change_date),
7 to_date(:e,'dd-mon-yyyy')) e,
8 stock
9 from
10 (
11 select *
12 from item_stock
13 where change_date between
14 to_date(:b,'dd-mon-yyyy') and to_date(:e,'dd-mon-yyyy')
15 union
16 select *
17 from item_stock
18 where change_date = (select max(change_date)
19 from item_stock
20 where change_date <= to_date(:b,'dd-mon-yyyy'))
21 )
22 )
23 where stock = 0
24 group by item
25 /
ITEM DAYS
---------- ----------
A 12
b 17
Question
Mark, April 18, 2007 - 8:24 am UTC
Hello Tom,
SELECT COL1, COL2, COL3, ....
FROM T1
WHERE COL4 != COL5
If I want to make Oracle use an index here, how should I create it? The logical reads for this query are very high even though the number of rows returned is less than 1000.
COL4 and COL5 are something like STATUS columns.
Please help.
April 18, 2007 - 12:14 pm UTC
create index t_idx on t( case when col4 != col5 then 1 end );
select ...
where case when col4 != col5 then 1 end = 1;
Thanks
Mark, April 26, 2007 - 9:05 am UTC
Hello Tom,
Thanks for the earlier reply about creating function index.
A view is created on the T1 table, and when I query through the view, it does not use the index, as the column names do not match.
create index t_id on t1( case when col4 != col5 then 1 end );
create view v_mark as select col1 cola, col2 colb, col3 colc, col4 cold, col5 cole from t1;
select *
FROM v_mark
where case when cold != cole then 1 end = 1;
Here it does a full table scan, but directly accessing the table will do an index scan, since the function-based index refers to columns col4 and col5 while the query against the view refers to cold and cole.
Please help me solve my problem.
Thanks
Thanks
April 26, 2007 - 1:22 pm UTC
got example?
ops$tkyte%ORA10GR2> create table t ( x int, y int, z int, a varchar2(10) );
Table created.
ops$tkyte%ORA10GR2> insert into t select rownum, rownum, rownum, rownum from all_objects
2 union all
3 select 1, 2, 3, 4 from dual;
50952 rows created.
ops$tkyte%ORA10GR2> create index t_idx on t( case when x<>y then 1 else null end );
Index created.
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' )
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace view v as select x a, y b, z c from t;
View created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @at
ops$tkyte%ORA10GR2> column PLAN_TABLE_OUTPUT format a72 truncate
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select * from v where case when a<>b then 1 else null end = 1;
Execution Plan
----------------------------------------------------------
Plan hash value: 470836197
------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU
------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 2 (0
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 15 | 2 (0
|* 2 | INDEX RANGE SCAN | T_IDX | 1 | | 1 (0
------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(CASE WHEN "X"<>"Y" THEN 1 ELSE NULL END =1)
ops$tkyte%ORA10GR2> select * from t where case when x<>y then 1 else null end = 1;
Execution Plan
----------------------------------------------------------
Plan hash value: 470836197
------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU
------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | 2 (0
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 21 | 2 (0
|* 2 | INDEX RANGE SCAN | T_IDX | 1 | | 1 (0
------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(CASE WHEN "X"<>"Y" THEN 1 ELSE NULL END =1)
ops$tkyte%ORA10GR2> set autotrace off
SQL Help
Arindam Mukherjee, April 27, 2007 - 1:32 am UTC
create table t1
(appl_no number(10) NOT NULL, -- Primary key
prev_appl_no number(10) , -- Reference appl_no (For Re-Investment Case)
name varchar2(30)
)
/
INSERT INTO t1 VALUES(1,NULL,'ARINDAM')
/
INSERT INTO t1 VALUES(2,NULL,'ANINDYA')
/
INSERT INTO t1 VALUES(3,NULL,'AVIJIT')
/
INSERT INTO t1 VALUES(4,NULL,'ARUNAVA')
/
INSERT INTO t1 VALUES(5,NULL,'AGNIBHA')
/
INSERT INTO t1 VALUES(6,1,'ARINDAM') --- RE-INVESTMENT
/
INSERT INTO t1 VALUES(7,NULL,'ARINDAM')
/
INSERT INTO t1 VALUES(8,4,'BATMAN') -- RE-INVESTMENT BUT NAME IS DIFFERENT
/
SQL> SELECT * FROM t1;
APPL_NO PREV_APPL_NO NAME
---------- ------------ -------------
1 ARINDAM
2 ANINDYA
3 AVIJIT
4 ARUNAVA
5 AGNIBHA
6 1 ARINDAM
7 ARINDAM
8 4 BATMAN
I need the output for the RE-INVESTMENT cases in the format given below. Please help me write that SQL.
APPL_NO NAME
---------- -------------
6 ARINDAM
1 ARINDAM
8 BATMAN
4 ARUNAVA
April 27, 2007 - 10:44 am UTC
please help us help you..
you can start that process by......
explaining your logic. and please - USE THE CODE BUTTON on things that need to be fixed width to read properly.
To: Arindam
Narendra, April 27, 2007 - 6:00 am UTC
SELECT appl_no, name
FROM t1
START WITH prev_appl_no IS NOT NULL
CONNECT BY appl_no = PRIOR prev_appl_no
Thanks a lot !!
Arindam Mukherjee, April 27, 2007 - 10:54 am UTC
Dear Narendra from UK
Please accept my heartfelt thanks. Your code looks very simple, but I could not think of this simple approach myself.
Thank you once again.
accommodate different values in a single row
KRISHNAKANTH VM, May 22, 2007 - 12:41 pm UTC
I have a couple of tables as follows. I have given the SQL statements to create, insert and select. Please execute the following queries.
create table rsm_project(projseqnum int, projectid varchar(8),projecttype varchar(4))
create table rsm_projectstate(projseqnum int,region varchar(15),seqnum smallint)
-----------
insert into rsm_project values(32540,'F-06-054','PROJ')
insert into rsm_project values(32538,'F-06-045','RBL')
insert into rsm_project values(32541,'F-06-042','PROJ')
----------
insert into rsm_projectstate values(32538,'FACT',1)
insert into rsm_projectstate values(32538,'EA',2)
insert into rsm_projectstate values(32541,'FACT',1)
insert into rsm_projectstate values(32541,'FEO',2)
insert into rsm_projectstate values(32541,'IA',3)
insert into rsm_projectstate values(32541,null,4)
----------
select * from dbo.rsm_project
select * from dbo.rsm_projectstate
----------
select
dbo.rsm_project.projseqnum,
dbo.rsm_project.projectid,
dbo.rsm_project.projecttype,
dbo.rsm_projectstate.region
from dbo.rsm_project,dbo.rsm_projectstate
where
dbo.rsm_project.projseqnum*=dbo.rsm_projectstate.projseqnum
and dbo.rsm_project.projectid='f-06-042'
and dbo.rsm_projectstate.region is not null
--and dbo.rsm_projectstate.region in (me.text45.value)
----------
me.text45.value may contain one of the following. These values are chosen by the user from the list box:
(i) 'FACT','FEO','IA'
(ii) 'FACT','IA'
(iii) if no values are selected, then all the region values should be displayed except null, i.e. 'FACT','FEO','IA'
In cases (i) and (iii) I will get the output as follows.
projseqnum projectid projecttype region
32541 F-06-042 PROJ FACT
32541 F-06-042 PROJ FEO
32541 F-06-042 PROJ IA
In case (ii) I will get output as follows.
projseqnum projectid projecttype region
32541 F-06-042 PROJ FACT
32541 F-06-042 PROJ IA
But I want the output as follows.
For cases (i) and (iii):
projseqnum projectid projecttype region
32541 F-06-042 PROJ FACT,FEO,IA
for case(ii)
32541 F-06-042 PROJ FACT,IA
I need to accommodate the values of region in a single textbox.
Could you please help us fix this problem?
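The posted SQL is SQL Server/Sybase style (dbo. prefixes, *= outer joins), but in Oracle the comma-separated region column can be produced with string aggregation. A sketch assuming Oracle 11gR2's LISTAGG:

```sql
select p.projseqnum, p.projectid, p.projecttype,
       listagg(s.region, ',') within group (order by s.seqnum) region
  from rsm_project p, rsm_projectstate s
 where p.projseqnum = s.projseqnum
   and s.region is not null
   and p.projectid = 'F-06-042'
 group by p.projseqnum, p.projectid, p.projecttype;
```

On older releases, a user-defined aggregate (such as the STRAGG function discussed elsewhere on this site) achieves the same result.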
Using CASE for aggregate analysis
craig, June 22, 2007 - 11:36 am UTC
Hello,
In producing a summary report I am using CASE expressions to count rows that meet the required criteria, e.g.:
Code Count(*) Serviced Appntmnt1Missed Appntmnt2Missed
----- -------- -------- --------------- ---------------
A1 2,500 2,000 250 250
B2 3,000 2,200 800 0
select code, count(*), sum(serviced) as "Serviced", sum(appt1) as "Appntmnt1Missed", sum(appt2) as "Appntmnt2Missed"
from (
select p.code,
case when
(t.serviced between to_date('01-MAY-2007','DD-MON-YYYY') and to_date('31-MAY-2007','DD-MON-YYYY'))
then 1 else 0 end serviced,
case when
(t.no_access_date1 is not null and t.appointment_date1 is not null and t.no_access_date2 is null)
then 1 else 0 end appt1,
case when
(t.no_access_date2 is not null and t.appointment_date2 is not null and t.no_access_date3 is null)
then 1 else 0 end appt2
from ( < numerous tables and joins and predicates to pick out the total data set > )
)
group by code
order by code
Could you comment on this technique - is it a sensible, performant approach?
many thanks
June 22, 2007 - 5:12 pm UTC
this is perfect
the other typical approach is to union together many queries with a where clause that is different for each one - this is much better.
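To make the contrast concrete, the union-of-queries shape Tom mentions would look like this against a hypothetical detail table jobs(code, serviced, appt1, appt2) holding 0/1 flags - each branch is a separate pass over the data, which is why the single-pass CASE version above wins:

```sql
-- one full scan per measure (the shape to avoid):
select code, 'Serviced' measure, count(*) cnt
  from jobs where serviced = 1 group by code
union all
select code, 'Appntmnt1Missed', count(*)
  from jobs where appt1 = 1 group by code
union all
select code, 'Appntmnt2Missed', count(*)
  from jobs where appt2 = 1 group by code;
```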
Sql Query
Rajesh, July 03, 2007 - 1:24 pm UTC
I have a table with Branch codes (Brcd) and Report Dates (report_dt). I want to select the max date, min date and the various dates on which reports were received, branch wise. Can you please help me out?
July 05, 2007 - 8:45 am UTC
sure, use min and max grouped by brcd???? seems very basic here...
select brcd, min(report_dt), max(report_dt) from table group by brcd;
SQL QUERY
Rajesh, July 05, 2007 - 2:04 pm UTC
Thanks a lot!!! Tom.
Maybe I was not clear; I want it in this format:
BRCD MAX(DT) MIN(DT) ALL_DATES
--------------------------------------
0850 30-JUN-07 31-DEC-06 31-DEC-06
31-MAR-07
30-APR-07
31-MAY-07
30-JUN-07
0377 15-JUN-07 12-DEC-06 12-DEC-06
31-MAR-07
30-APR-07
31-MAY-07
15-JUN-07
and so on....
Min and max date are OK. How do I get it in the above format? Please help...
July 05, 2007 - 3:02 pm UTC
select brcd,
max(dt) over (partition by brcd),
min(dt) over (partition by brcd),
dt
from t
and if you really just want it on the "first" row
decode( row_number() over (partition by brcd order by dt), 1,
        max(dt) over (partition by brcd) )
Sql Query
Rajesh, July 05, 2007 - 2:10 pm UTC
Dear Tom,
Please excuse my earlier format
The output should read like this
BRCD MAX(DT) MIN(DT) ALL_DATES
--------------------------------------
0850 30-JUN-07 31-DEC-06 31-DEC-06
31-MAR-07
30-APR-07
31-MAY-07
30-JUN-07
0377 15-JUN-07 12-DEC-06 12-DEC-06
31-MAR-07
30-APR-07
31-MAY-07
15-JUN-07
SQL QUERY
Rajesh, July 05, 2007 - 3:50 pm UTC
Hi Tom,
Thanks for your immediate response
I am getting the result below:
BRCD MAX(REPOR MIN(REPOR REPORT_DT
---------- --------- --------- ---------
377 15-JUN-07 15-DEC-06 15-DEC-06
377 15-JUN-07 15-DEC-06 31-DEC-06
377 15-JUN-07 15-DEC-06 31-JAN-07
377 15-JUN-07 15-DEC-06 31-MAR-07
377 15-JUN-07 15-DEC-06 30-APR-07
377 15-JUN-07 15-DEC-06 31-MAY-07
377 15-JUN-07 15-DEC-06 15-JUN-07
850 30-JUN-07 31-DEC-06 31-DEC-06
850 30-JUN-07 31-DEC-06 31-JAN-07
850 30-JUN-07 31-DEC-06 31-MAR-07
850 30-JUN-07 31-DEC-06 30-APR-07
850 30-JUN-07 31-DEC-06 31-MAY-07
850 30-JUN-07 31-DEC-06 30-JUN-07
13 rows selected.
whereas i require in the following format
BRCD MAX(DT) MIN(DT) ALL_DATES
--------------------------------------
0850 30-JUN-07 31-DEC-06 31-DEC-06
31-MAR-07
30-APR-07
31-MAY-07
30-JUN-07
0377 15-JUN-07 12-DEC-06 12-DEC-06
31-MAR-07
30-APR-07
31-MAY-07
15-JUN-07
I tried the decode syntax also, but I could not get it to work. Please help.
July 05, 2007 - 4:27 pm UTC
so, re-read my answer...
try to UNDERSTAND what it is doing.
scott%ORA10GR2> select deptno,
2 decode( row_number() over (partition by deptno order by hiredate),
3 1, min(hiredate) over (partition by deptno) ) minhd,
4 decode( row_number() over (partition by deptno order by hiredate),
5 1, max(hiredate) over (partition by deptno) ) maxhd,
6 hiredate
7 from emp
8 order by deptno, hiredate;
DEPTNO MINHD MAXHD HIREDATE
---------- --------- --------- ---------
10 09-JUN-81 23-JAN-82 09-JUN-81
10 17-NOV-81
10 23-JAN-82
20 17-DEC-80 12-JAN-83 17-DEC-80
20 02-APR-81
20 03-DEC-81
20 09-DEC-82
20 12-JAN-83
30 20-FEB-81 03-DEC-81 20-FEB-81
30 22-FEB-81
30 01-MAY-81
30 08-SEP-81
30 28-SEP-81
30 03-DEC-81
14 rows selected.
spoon = full
laughing out loud!!!
Jay, July 05, 2007 - 5:23 pm UTC
Laughing Out Loud @ Spoon = Full
Inline View
Jimmy, August 14, 2007 - 1:28 am UTC
Hi Tom,
Good day! As a starter in writing SQL queries, I just want to know when to use an inline view, e.g. select * from (select ... from ...)?
thanks in advance.
August 20, 2007 - 10:59 am UTC
whenever you want to
whenever it makes your life easier.
it is never absolutely needed - it can make writing queries easier (like modular coding in general does)
PREVIOUS_DAY ?
A reader, August 21, 2007 - 3:18 pm UTC
Tom,
I know we have NEXT_DAY oracle function. I do not think we have PREVIOUS_DAY function...Is there any other way that we can achieve this?
SCOTT@MYDB >select sysdate from dual;
SYSDATE
---------
21-AUG-07
SCOTT@MYDB >select next_day(sysdate,'MON') from dual;
NEXT_DAY(
---------
27-AUG-07
I need to find out something like this:
select PREVIOUS_DAY(sysdate,'MON') PREV from dual;
PREV
--------
20-AUG-07
Looking forward to seeing your reply. Please help.
Thanks.
August 22, 2007 - 11:57 am UTC
next_day( dt-7, 'MON' )
just go back a week, then ask for the next day.
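As a quick sketch of the trick (note that the 'MON' abbreviation assumes an English NLS_DATE_LANGUAGE):

```sql
-- previous Monday: step back a full week, then ask for the next Monday
select next_day( sysdate - 7, 'MON' ) as prev_monday from dual;

-- if "previous" must be strictly before today even when today IS a Monday,
-- step back eight days from midnight instead
select next_day( trunc(sysdate) - 8, 'MON' ) as prev_monday from dual;
```

With the 21-AUG-07 (Tuesday) example above, both forms return 20-AUG-07.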
Simple Idea
A reader, August 22, 2007 - 2:33 pm UTC
Tom,
Thanks for your reply.
I think I was thinking too much about this and could not see the forest for the trees... the simple idea eluded my mind.
Finding out top 3 Numbers in a Single Statement
Manjunath, August 23, 2007 - 2:39 am UTC
Hi Tom,
I just found a way to find the 2nd, 3rd, ... highest number in a table. I would like to know if there is a better way to optimize this query.
11:29:00 SQL> select num from test;
NUM
----------
12
13
14
12
13
14
12
13
14
12
13
14
19
15
15
15
15
15
18 rows selected.
to find the 2nd highest number I use
11:45:39 SQL> select min(num) from (select * from (select distinct(num) from test order by num desc) where rownum <=2);
MIN(NUM)
----------
15
Similarly, for the 3rd highest I change rownum <=2 to <=3
11:51:14 SQL> select min(num) from (select * from (select distinct(num) from test order by num desc) where rownum <=3);
MIN(NUM)
----------
14
Thanks in advance.
Manjunath.
August 23, 2007 - 11:27 am UTC
select *
from (select t.*, row_number() over (order by num) rn
from t)
where rn = :n;
gets the "nth" record.
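Note that row_number here numbers every row, duplicates included, in ascending order. Since the original question asked for the Nth highest distinct number, dense_rank with a descending sort may be a closer fit - a sketch against the TEST table above:

```sql
-- Nth highest distinct NUM: dense_rank gives ties the same rank,
-- so the five 15's together occupy rank 2 when :n = 2
select distinct num
  from (select num,
               dense_rank() over (order by num desc) rnk
          from test)
 where rnk = :n;   -- :n = 2 returns 15, :n = 3 returns 14 for the sample data
```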
Manjunath, August 27, 2007 - 7:22 am UTC
Thanks Tom.
top 10 sql commands in the system that are using most memory.
jas, September 10, 2007 - 6:12 pm UTC
Top 10 SQL commands in the system that are using the most memory.
Please help me to write a query for this.
September 15, 2007 - 3:11 pm UTC
hah, define "using the most memory" first.
what memory.
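For one common reading - shared-pool memory per cursor - a sketch against V$SQL (its SHARABLE_MEM, PERSISTENT_MEM and RUNTIME_MEM columns are documented) might be:

```sql
-- top 10 cursors by shared-pool (sharable) memory
select *
  from (select sql_text, sharable_mem, persistent_mem, runtime_mem
          from v$sql
         order by sharable_mem desc)
 where rownum <= 10;
```

Substitute PERSISTENT_MEM or RUNTIME_MEM in the ORDER BY for the other definitions of "memory".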
SQL
reader, September 24, 2007 - 11:01 pm UTC
SELECT hic_master_key, min_discrepancy_key
FROM (SELECT hic_master_key, MIN (discrepancy_key) AS min_discrepancy_key
FROM (SELECT
hic_master_key, discrepancy_key
FROM certs.discrepancy d1
WHERE discrepancy_type_cd IN
('CMS Y PLAN N', 'CMS N PLAN Y', 'CONTRACT')
AND discrepancy_status_cd IN ('U', 'V', 'R')
AND creation_date < '01-Jan-2007'
ORDER BY discrepancy_key)
WHERE ROWNUM <= 100000
GROUP BY hic_master_key
ORDER BY 2)
WHERE ROWNUM <= 500
------------------------------------------------------------
(1) SELECT STATEMENT ALL_ROWS
Est. Rows: 500 Cost: 12,365
COUNT STOPKEY
(8) VIEW (Embedded SQL)
Est. Rows: 1,501 Cost: 12,365
(7) SORT ORDER BY STOPKEY
Est. Rows: 1,501 Cost: 12,365
(6) HASH GROUP BY
Est. Rows: 1,501 Cost: 12,365
(5) COUNT STOPKEY
(4) VIEW (Embedded SQL)
Est. Rows: 1,501 Cost: 12,362
(3) TABLE TABLE ACCESS BY GLOBAL INDEX ROWID CERTS.DISCREPANCY [Analyzed]
Blocks: 2,542,368 Est. Rows: 11,698,423 of 145,040,060 Cost: 12,362
(2) INDEX (UNIQUE) INDEX FULL SCAN CERTS.PK_DISCREPANCY [Analyzed]
Est. Rows: 18,610 Cost: 57
The above query works great if it gets 100,000 records, and returns results
in seconds. The same query, with the same explain plan, hangs for hours and
writes to temp space if we do not have the records. I think that
is happening because Oracle is searching the data until the end to get the 100,000
records.
If we remove the ORDER BY from the inline view (inner query) it returns the results
immediately, but we do not want that. The purpose of the SQL is to
get the aging of the discrepancy_key and return the minimum value of the oldest
to users so they can work on it.
We tried all indexing options without any success. The table has 150 million
records and discrepancy_key is the primary key. We have collected histograms
but no difference. The table is partitioned on some other column using hash.
Is there any other way to rewrite this?
September 26, 2007 - 9:02 pm UTC
insufficient data here
it is unlikely it is "hung", that means "STOPPED, NOT WORKING, BUG"
it is more likely that it is not running as fast as you anticipated, which is very different.
I hate it when people compare strings to things that are not strings, FIX THAT...
for this predicate:
WHERE discrepancy_type_cd IN
('CMS Y PLAN N', 'CMS N PLAN Y', 'CONTRACT')
AND discrepancy_status_cd IN ('U', 'V', 'R')
AND creation_date < '01-Jan-2007'
ORDER BY discrepancy_key
you are between a rock and a hard place - you range scan on the date, you order by discrepancy_key, you filter on type_cd and status_cd.
the where rownum < n will attempt a "first rows" style optimization in general. so it would like to see an index leading with discrepancy_key - but if most of the records with "low" keys are NOT in the IN lists, or are not less than 1-jan-2007, you waste many cycles.
Maybe, an index on
( case when discrepancy_type_cd IN
            ('CMS Y PLAN N', 'CMS N PLAN Y', 'CONTRACT')
        and discrepancy_status_cd IN ('U', 'V', 'R') then 1 end,
  discrepancy_key,
  creation_date)
and a where that uses '=' on the case instead of the two IN's would work; that would find the matching rows immediately and read the data sorted by discrepancy_key in order - filtering by creation_date - and then hitting the table.
or reverse the order of the last two columns in the index - that might work.
you of course have to change the where clause to use case.
and you have to be using constants in the case - I'm assuming the IN lists are not variable; the creation_date predicate is variable.
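Spelling that suggestion out with valid CASE syntax - and with the string-versus-date comparison objected to above replaced by an explicit TO_DATE - the sketch might look like:

```sql
-- function-based index: tag the interesting type/status combinations with 1
create index discrepancy_fbi on certs.discrepancy
( case when discrepancy_type_cd   in ('CMS Y PLAN N','CMS N PLAN Y','CONTRACT')
        and discrepancy_status_cd in ('U','V','R') then 1 end,
  discrepancy_key,
  creation_date );

-- the query must repeat the identical CASE expression, and compare
-- creation_date to a DATE rather than a string
... where case when discrepancy_type_cd   in ('CMS Y PLAN N','CMS N PLAN Y','CONTRACT')
                and discrepancy_status_cd in ('U','V','R') then 1 end = 1
      and creation_date < to_date( '01-Jan-2007', 'DD-Mon-YYYY' )
```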
SQL Query
ARC, September 28, 2007 - 3:54 am UTC
Tom, many many thanks for providing solution on time.
Again need you help in framing a query.
I have a table with parent and child data in it, and I am using CONNECT BY PRIOR to format the output from the table.
select PARENT_MATERIAL_NO,
CHILD_MATERIAL_NO,
ITEM_NODE_NO,
PARENT_MATERIAL_NAME,
CHILD_MATERIAL_NAME,
SUM(CHILD_MATERIAL_QUANTITY) SUM_QTY,
CHILD_BOM_FLAG
from BOM_ITEM
where IS_VALID_FLAG = 'A'
start with PARENT_MATERIAL_NO = C3_REC.MATERIAL_NUM
connect by prior CHILD_MATERIAL_NO = PARENT_MATERIAL_NO
group by PARENT_MATERIAL_NO, CHILD_MATERIAL_NO, ITEM_NODE_NO,
PARENT_MATERIAL_NAME, CHILD_MATERIAL_NAME, VALID_FLAG, CHILD_BOM_FLAG
This query executes for every Material#, taking input from the C3_REC record in a procedure, and the result is inserted into a table LKP_LMP001_BOM_ITEM_TD.
The output from table LKP_LMP001_BOM_ITEM_TD looks like below for a Material#
Matrial_no Parent Child Qty
A A B 1
A B C 2
A C D 1
A D E 2
Now I am trying to frame a query to calculate the total number of individual parts that are required to make material A, like below.
Matrial_no Parent Child Qty
A A B 1 = 1
A B C 2 = 2*1
A C D 2 = 1*2*1
A D E 4 = 2*1*2*1
Please help me in framing a query to get the above output, and advise on how I can use a single query to resolve Parent and Child for all Materials.
Thanks in advance.
-ARC
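ARC's running-product question goes unanswered in the thread. On 11g Release 2 and later, a recursive WITH clause can carry the multiplication down the tree - a sketch, assuming the BOM_ITEM columns shown above:

```sql
-- recursive subquery factoring (11gR2+): CUM_QTY is the product of the
-- quantities along the path from the root material down to each child
with bom (parent_material_no, child_material_no, cum_qty) as
( select parent_material_no, child_material_no, child_material_quantity
    from bom_item
   where parent_material_no = :material_num            -- root of the tree
  union all
  select c.parent_material_no, c.child_material_no,
         p.cum_qty * c.child_material_quantity         -- running product
    from bom_item c
    join bom p on c.parent_material_no = p.child_material_no
)
select * from bom;
```

For the sample data this yields 1, 2*1, 1*2*1, 2*1*2*1 as the question requests.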
Row Hit Count
A reader, October 01, 2007 - 1:26 pm UTC
Thank you Tom for providing a valuable insight.
I have a question:
How do we count the number of times a record was selected? In my scenario, I have a table which contains an extra column called HIT. I select the columns in a random fashion, so every time a row is selected the HIT column for that row should be incremented by 1. I wish Oracle provided a TRIGGER ON SELECT option.
Tom, are there any alternatives available? DBMS_FGA is one alternative, but I do not think it is the BEST here.
Any Suggestions or help on this will be greatly appreciated.
Thanks in advance for your reply.
October 03, 2007 - 4:23 pm UTC
dbms_fga is not an option at all.
this sounds like a 'bad idea'
what is the real life use case for this - does the requestor of this 'feature' understand what this would entail?
basically, selects do not update, you would need to
a) UPDATE the data
b) using the returning clause, bulk collect the rows you just hit
c) understand that your searches will basically SERIALIZE since just about one person at a time can update a row.
this is a really bad idea. I would discard it out of hand.
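Steps (a) and (b) can be sketched in PL/SQL like this, against a hypothetical table T(ID, PAYLOAD, HIT):

```sql
declare
  type id_tab  is table of t.id%type;
  type pay_tab is table of t.payload%type;
  l_ids      id_tab;
  l_payloads pay_tab;
begin
  -- the "select" is really an update: bump HIT and fetch the rows in one pass
  update t
     set hit = hit + 1
   where payload like :search
  returning id, payload bulk collect into l_ids, l_payloads;

  -- l_ids / l_payloads now hold the rows that were "selected"; note point
  -- (c) above: concurrent sessions will queue behind these row locks
end;
/
```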
Counting Number of rows for each table in a schema
Baiju_PM, October 05, 2007 - 4:58 am UTC
Sir,
I have a requirement wherein I want to find out the number of rows for each table in a schema.
Using sql or plsql.
Eg:
I have say 20 tables in Oracle user scott.
Now, my requirement is that I want to find out the number of rows for each table where the count of rows is greater than 0.
The output should be printed like this:
TABLENAME NUMROWS
=====================
EMP 14
DEPT 5
BONUS 10
a 34
c 45
d 3
g 34
..
..
..
..
Thanks
Regarding Sql query
Smita, October 10, 2007 - 3:36 am UTC
To include 1 more column, should it be included in the GROUP BY as well?
Regarding Sql query
Smita, October 10, 2007 - 3:42 am UTC
To include 1 more column, should it be included in the GROUP BY as well? (In the first SQL query asked by Bhavani.)
To Baiju_PM
Stephan Uzzell, October 10, 2007 - 11:19 pm UTC
Something like this, perhaps....
declare
x number;
begin
for i in (select table_name from user_tables where table_name not like 'SYS_IOT%' order by table_name)
loop
execute immediate 'select count(*) from "' || i.table_name || '"' into x; -- quoted to handle mixed-case names
if x>0 then
dbms_output.put_line(rpad(i.table_name,30,'.') || lpad(x,11,'.'));
end if;
end loop;
end;
/
HTH
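A single-statement alternative (a technique Tom has demonstrated elsewhere on this site) pushes the count into SQL via DBMS_XMLGEN, avoiding the PL/SQL loop; 10g syntax assumed:

```sql
-- run "select count(*)" against each table from within a single query
select table_name,
       to_number(
         extractvalue(
           xmltype( dbms_xmlgen.getxml(
             'select count(*) c from "' || table_name || '"' ) ),
           '/ROWSET/ROW/C' ) ) numrows
  from user_tables
 order by table_name;
```

Wrap it in an inline view to filter on numrows > 0.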
can not access tables for other user
Animesh, October 23, 2007 - 8:27 am UTC
Hi
The above block works fine, but when included in a procedure and using "ALL_TABLES" instead of "USER_TABLES" it gives different results (only tables owned by the current user are included in the result set).
What if the "no of rows" is required for all users?
When selecting from "ALL_TABLES" in a SQL*Plus window it shows all the data (including other users'), but from a procedure it gives only data owned by that user.
Am I missing something?
October 23, 2007 - 11:13 am UTC
because the all_* views look at your roles and decide what you can see based on the roles available.
and roles are not enabled in definer's rights (the default) routines:
http://asktom.oracle.com/tkyte/Misc/RolesAndProcedures.html
just do this
select count(*) from all_objects;
set role none;
select count(*) from all_objects;
you'll see the difference "in sqlplus"
your choices:
a) use invokers rights IF APPROPRIATE and use the user_ views.
b) use the DBA_* views if you want to see ANY table, regardless of whether you have access to that table or not.
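Choice (a) as a sketch - with AUTHID CURRENT_USER the procedure runs with the caller's roles enabled, so ALL_TABLES shows the same set it shows in SQL*Plus (procedure and column names here are illustrative):

```sql
create or replace procedure count_visible_tables
  authid current_user      -- invoker's rights: roles stay enabled
as
  l_cnt number;
begin
  for t in ( select owner, table_name from all_tables )
  loop
    execute immediate
      'select count(*) from "' || t.owner || '"."' || t.table_name || '"'
      into l_cnt;
    if l_cnt > 0 then
      dbms_output.put_line( t.owner || '.' || t.table_name || ': ' || l_cnt );
    end if;
  end loop;
end;
/
```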
Thank you
Animesh, October 26, 2007 - 9:14 am UTC
Hi,
My query is resolved, thank you very much.
In this context I can use "AUTHID CURRENT_USER" in the definition to fetch from the ALL_* views.
thanks again
Sql query
AK, November 14, 2007 - 5:46 pm UTC
Hi Tom,
Will appreciate your time on this
I have 2 tables
create table tablea (col1 number, col2 varchar2(10),col3 varchar2(10), col4 varchar2(10), col5 varchar2(10));
create table tableb (col11 number, col12 varchar2(10),col13 varchar2(10), col14 varchar2(10));
insert into tablea values (1,'AK','XY','WE','QR');
insert into tableb values (1,'VALX','VALY', 'ZX');
insert into tableb values (1,'DSF','ERER', 'TRY');
insert into tableb values (1,'TRT','YTY', 'OUT');
TableA
COL1 COL2 COL3 COL4 COL5
1 AK XY WE QR
TABLEB
COL11 COL12 COL13 COL14
1 VALX VALY ZX
1 DSF ERER TRY
1 TRT YTY OUT
I need the result in the following form:
COL1 COL COL2 COL3 COL4 COL5 COL12 COL13 COL14
1 1 AK XY WE QR VALX VALY ZX
1 2 DSF ERER TRY
1 3 TRT YTY OUT
The tables are joined on COL1 & COL11.
The total no of rows returned will be the no. of rows satisfying the join criteria.
COL is a counter incrementing for COL1.
In a regular join COL2, COL3, COL4 & COL5 values of the 1st row will repeat in row 2 & 3 ... I don't want that.
This is a huge table and if I cant do it in sql then I will have to do it in pl/sql cursor loop (insert , update)
Many Thanks in advance
AK
November 21, 2007 - 10:44 am UTC
this is not specified very precisely. No keys, no information as to relations.
do we assume that the cardinality of a given col11 value in B is ALWAYS greater than or equal to the cardinality of tablea??
In any case, use the first query below if there are always more rows in B for a given col11 value. use the second query if A might have more.
ops$tkyte%ORA10GR2> select col1, col2, col3, col4, row_number() over (partition by col1 order by col2, col3, col4) rn from tablea;
COL1 COL2 COL3 COL4 RN
---------- ---------- ---------- ---------- ----------
1 AK XY WE 1
2 ak xy we 1
ops$tkyte%ORA10GR2> select col11, col12, col13, col14, row_number() over (partition by col11 order by col12, col13, col14) rn from tableb;
COL11 COL12 COL13 COL14 RN
---------- ---------- ---------- ---------- ----------
1 DSF ERER TRY 1
1 TRT YTY OUT 2
1 VALX VALY ZX 3
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with
2 a as
3 ( select col1, col2, col3, col4,
4 row_number() over (partition by col1 order by col2, col3, col4) rn
5 from tablea
6 ),
7 b as
8 ( select col11, col12, col13, col14,
9 row_number() over (partition by col11 order by col12, col13, col14) rn
10 from tableb
11 )
12 select b.col11, b.rn, a.col2, a.col3, a.col4, b.col12, b.col13, b.col14
13 from a, b
14 where a.col1(+) = b.col11
15 and a.rn(+) = b.rn
16 order by b.col11, b.rn
17 /
COL11 RN COL2 COL3 COL4 COL12 COL13 COL14
---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1 1 AK XY WE DSF ERER TRY
1 2 TRT YTY OUT
1 3 VALX VALY ZX
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with
2 a as
3 ( select col1, col2, col3, col4,
4 row_number() over (partition by col1 order by col2, col3, col4) rn
5 from tablea
6 ),
7 b as
8 ( select col11, col12, col13, col14,
9 row_number() over (partition by col11 order by col12, col13, col14) rn
10 from tableb
11 )
12 select coalesce( b.col11, a.col1 ) col11, coalesce( b.rn, a.rn) col,
13 a.col2, a.col3, a.col4, b.col12, b.col13, b.col14
14 from a full outer join b on (a.col1 = b.col11 and a.rn = b.rn)
15 order by coalesce(b.col11,a.col1), coalesce(b.rn,a.rn)
16 /
COL11 COL COL2 COL3 COL4 COL12 COL13 COL14
---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1 1 AK XY WE DSF ERER TRY
1 2 TRT YTY OUT
1 3 VALX VALY ZX
2 1 ak xy we
Will this work for you?
Mahmood, November 16, 2007 - 10:44 pm UTC
Hello AK,
I hope this works for you...
SCOTT @ MYDB> select * from tablea;
COL1 COL2 COL3 COL4 COL5
---- ---- ---- ---- ----
1 AK XY WE QR
SCOTT @ MYDB> select * from tableb;
COL11 COL12 COL13 COL14
---------- ---------- ---------- ----------
1 VALX VALY ZX
1 DSF ERER TRY
1 TRT YTY OUT
SCOTT @ MYDB> insert into tablea values (2,'CC','DD','EE','FF');
SCOTT @ MYDB>
SCOTT @ MYDB> insert into tableb values (2,'x','xx', 'xxx');
SCOTT @ MYDB> insert into tableb values (2,'y','yy', 'yyy');
SCOTT @ MYDB> insert into tableb values (2,'z','zz', 'zzz');
SCOTT @ MYDB> commit;
SCOTT @ MYDB> select * from tablea;
COL1 COL2 COL3 COL4 COL5
---- ---- ---- ---- ----
1 AK XY WE QR
2 CC DD EE FF
SCOTT @ MYDB> select * from tableb;
COL11 COL12 COL13 COL14
---------- ---------- ---------- ----------
1 VALX VALY ZX
1 DSF ERER TRY
1 TRT YTY OUT
2 x xx xxx
2 y yy yyy
2 z zz zzz
SCOTT @ MYDB> select row_number() over (partition by col1 order by null) COL,a.* from
2 (select * from tablea inner join tableb on (col1=col11)) a;
COL COL1 COL2 COL3 COL4 COL5 COL11 COL12 COL13 COL14
--- ---- ---- ---- ---- ---- ---------- ---------- ---------- ----------
1 1 AK XY WE QR 1 VALX VALY ZX
2 1 AK XY WE QR 1 DSF ERER TRY
3 1 AK XY WE QR 1 TRT YTY OUT
1 2 CC DD EE FF 2 x xx xxx
2 2 CC DD EE FF 2 y yy yyy
3 2 CC DD EE FF 2 z zz zzz
SCOTT @ MYDB> break on col1 on col2 on col3 on col4 on col5
SCOTT @ MYDB> /
COL COL1 COL2 COL3 COL4 COL5 COL11 COL12 COL13 COL14
--- ---- ---- ---- ---- ---- ---------- ---------- ---------- ----------
1 1 AK XY WE QR 1 VALX VALY ZX
2 1 DSF ERER TRY
3 1 TRT YTY OUT
1 2 CC DD EE FF 2 x xx xxx
2 2 y yy yyy
3 2 z zz zzz
Raj, November 21, 2007 - 1:41 am UTC
I have following two tables.
-- CREATING table T1
create table T1( A number(2), S number(3));
-- CREATING table T2
create table T2( A number(2), B number(3), S number(3));
-- InsertING into T1
Insert into T1 (A,S) values (1,203);
Insert into T1 (A,S) values (2,203);
Insert into T1 (A,S) values (3,203);
Insert into T1 (A,S) values (4,201);
Insert into T1 (A,S) values (5,203);
Insert into T1 (A,S) values (6,203);
-- InsertING into T2
Insert into T2 (A,B,S) values (1,100,201);
Insert into T2 (A,B,S) values (2,200,203);
Insert into T2 (A,B,S) values (2,300,201);
Insert into T2 (A,B,S) values (3,400,203);
Insert into T2 (A,B,S) values (4,400,203);
Insert into T2 (A,B,S) values (5,500,203);
Insert into T2 (A,B,S) values (5,600,201);
Insert into T2 (A,B,S) values (5,700,203);
Insert into T2 (A,B,S) values (6,600,201);
Insert into T2 (A,B,S) values (6,700,203);
commit;
select
t1.a, t2.b, t1.s s1, t2.s s2
from
t1, t2
where
t1.a=t2.a
order by
t1.a, t2.b
/
A B S1 S2
---------- ---------- ---------- ----------
1 100 203 201
2 200 203 203
2 300 203 201
3 400 203 203
4 400 201 203
5 500 203 203
5 600 203 201
5 700 203 203
6 600 203 201
6 700 203 203
Is it possible to get an output like this?
I need to output the relation from Column A to Column B
A B S1 S2 Column (A to B) Mapping
---------- ---------- ---------- ---------- -------------------------
1 100 203 201 one-one
2 200 203 203 one-many
2 300 203 201 one-many
3 400 203 203 many-one
4 400 201 203 many-one
5 500 203 203 many-many
5 600 203 201 many-many
5 700 203 203 many-many
6 600 203 201 many-many
6 700 203 203 many-many
Thanks in advance
November 21, 2007 - 2:25 pm UTC
explain to me how 5/500 is many to many first.
explain in painstaking detail what this really all means.
Raj, November 21, 2007 - 6:54 pm UTC
Explaining the logic is the toughest part.
The actual requirement is: columns S1 and S2 are statuses, and if it is a one-to-one mapping and any one of the statuses is not 203, then those rows are excluded from the output. Likewise for the other mappings too.
Sorry, I think I should include one more insert statement.
Insert into T2 (A,B,S) values (6,500,203);
Thanks again.
November 26, 2007 - 10:10 am UTC
... Explaining the logic is the toughest part. ...
no kidding, if it were easy - well, anyone could be a developer/analyst/architect/whatever you want to call them...
and you have to work a tad bit harder at explaining the logic - because - well - I don't get it.
could we "guess"?
sure we could (but I would encourage people to NOT guess as we might get an answer that appears to work with a tiny data set like this but doesn't work in real life and then we are in big trouble.... but anyway....)
unless and until you can write the entire set of logic down....
you cannot answer the question
and neither can we.
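For what it is worth, the A-to-B mapping labels on their own (leaving aside the status-203 filtering Tom asked to have written down) can be sketched with analytic distinct counts; with Raj's extra (6,500,203) row this reproduces the sample labels:

```sql
-- cnt_b: how many distinct B's this A maps to; cnt_a: how many distinct
-- A's this B maps to - together they classify each joined row
select a, b, s1, s2,
       case when cnt_b = 1 and cnt_a = 1 then 'one-one'
            when cnt_b > 1 and cnt_a = 1 then 'one-many'
            when cnt_b = 1 and cnt_a > 1 then 'many-one'
            else                              'many-many'
       end mapping
  from (select t1.a, t2.b, t1.s s1, t2.s s2,
               count(distinct t2.b) over (partition by t1.a) cnt_b,
               count(distinct t1.a) over (partition by t2.b) cnt_a
          from t1, t2
         where t1.a = t2.a)
 order by a, b;
```

Without that extra row, 5/500 comes out "many-one" - which is exactly the ambiguity Tom's question points at.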
A reader, December 10, 2007 - 11:32 pm UTC
Get all rows for each client row
Maverick439, January 05, 2008 - 12:01 pm UTC
Tom, I am trying, for each client_id in the Temp_2 table, to get all rows in the Temp_1 table
eg:
Code client_id Amount
SSDI 10000 345.67
LI 10000
MA 10000
PS 10000
SSDI 10001
LI 10001 234.56
MA 10001
PS 10001
How to achieve this?
CREATE TABLE TEMP_1
(
CODE VARCHAR2(10 BYTE),
ID INTEGER
)
/
Insert into TEMP_1
(CODE, ID)
Values
('SSDI', 1);
Insert into TEMP_1
(CODE, ID)
Values
('LI', 2);
Insert into TEMP_1
(CODE, ID)
Values
('PS', 3);
Insert into TEMP_1
(CODE, ID)
Values
('MA', 4);
COMMIT;
CREATE TABLE TEMP_2
(
ID INTEGER,
AMOUNT NUMBER(5,2),
CODE VARCHAR2(10 BYTE),
CLIENT_ID INTEGER
)
/
Insert into TEMP_2
(ID, AMOUNT, CODE, CLIENT_ID)
Values
(1, 345.67, 'SSDI', 10000);
Insert into TEMP_2
(ID, AMOUNT, CODE, CLIENT_ID)
Values
(2, 234.56, 'LI', 10001);
COMMIT;
With my query, I could only get them one by one, by hardcoding client_ids:
select a.code,a.id,b.client_id,b.amount
from temp_1 a,
(select code,amount,client_id from temp_2 where client_id=10000 ) b
where a.code=b.code(+)
CODE ID CLIENT_ID AMOUNT
SSDI 1 10000 345.67
LI 2
MA 4
PS 3
Thanks for all your help..
January 07, 2008 - 7:49 am UTC
you give no information as to how these tables relate.
but - given your inputs and outputs, and " I am trying to get for each client_id in Temp_2 table , get all rows in
Temp_1 table"
it sounds like a cartesian join:
ops$tkyte%ORA10GR2> select *
2 from temp_1 a, temp_2 b
3 order by b.client_id
4 /
CODE ID ID AMOUNT CODE CLIENT_ID
---------- ---------- ---------- ---------- ---------- ----------
SSDI 1 1 345.67 SSDI 10000
LI 2 1 345.67 SSDI 10000
PS 3 1 345.67 SSDI 10000
MA 4 1 345.67 SSDI 10000
SSDI 1 2 234.56 LI 10001
LI 2 2 234.56 LI 10001
PS 3 2 234.56 LI 10001
MA 4 2 234.56 LI 10001
8 rows selected.
ops$tkyte%ORA10GR2>
and if amount should only be reported once, you can use analytics to identify the 'first' row:
ops$tkyte%ORA10GR2> select a.code, a.id, b.client_id, decode(a.rn,1,b.amount) amount
2 from (select code, id, row_number() over (order by id) rn from temp_1) a, temp_2 b
3 order by b.client_id
4 /
CODE ID CLIENT_ID AMOUNT
---------- ---------- ---------- ----------
SSDI 1 10000 345.67
LI 2 10000
PS 3 10000
MA 4 10000
SSDI 1 10001 234.56
LI 2 10001
PS 3 10001
MA 4 10001
8 rows selected.
Request for an Explanation
Vikas Sangar, January 05, 2008 - 1:52 pm UTC
Dear Mr. Kyte,
First of all, I wish you a very Happy New Year.
Coming to the point, I came across a Select query that returns the Nth Largest/Smallest record from a column depending on the condition provided in a subquery with a self join. The query goes something like this (I've used Departments table of supplied Example schema)-
select d1.Department_ID from departments d1
where 1 = (select count(*) from departments d2
where d1.department_id<d2.department_id);
DEPARTMENT_ID
-------------
260
This query is returning the second largest Department_ID from the table Departments (the integer in the first WHERE clause specifies the position of the output record being returned: 0 being the first largest, 1 being the second largest, and so on...).
I would like you to kindly explain the execution of this query and throw some light on how it works and how it returns the desired output.
Also, can you please suggest (Please provide examples) what are the other possible ways of returning Nth Largest/Smallest records from a table using SQL queries?
Regards.
January 07, 2008 - 7:56 am UTC
hmmm, wondering how you wrote the query if you don't know what it does...
select d1.Department_ID
from departments d1
where 1 = (select count(*)
from departments d2
where d1.department_id<d2.department_id);
think of that query as having the following pseudo code:
for D1 in (select * from departments )
loop
-- count number of departments GREATER THAN this one
select count(*)
into l_count
from departments
where department_id > D1.DEPARTMENT_ID;
if ( l_count = 1 ) -- if there is only ONE department greater than this
-- then this must be the second LARGEST department
then
OUTPUT
end if;
end loop
other (possibly more efficient) approaches:
select *
from (
select a.*, rownum rn
from (select * from departments order by department_id desc) a
where rownum <= 2
)
where rn = 2;
or
select *
from (select d.*, row_number() over (order by department_id desc) rn
from departments d)
where rn = 2;
see:
https://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
https://asktom.oracle.com/Misc/oramag/on-top-n-and-pagination-queries.html
for in depth explanations of the last two...
Request for an Explanation
Vikas Sangar, January 07, 2008 - 8:57 am UTC
Dear Mr. Kyte,
Thanks a lot for your prompt and excellent explanation .
No, as I have told you, I came across this query. I haven't formulated this query...:).
regards.
record as column
Arangaperumal, March 26, 2008 - 9:43 am UTC
create table vendor(vid number,name varchar2(100));
insert into vendor values(1,'Oracle');
insert into vendor values(2,'Sony');
insert into vendor values(3,'Ford');
insert into vendor values(4,'Lufthansa');
insert into vendor values(4,'ACC Shipping');
create table product(pid number,description varchar2(100));
insert into product values(1,'Airlines ');
insert into product values(2,'Auto');
insert into product values(3,'Software');
insert into product values(4,'Electronics');
create table product_dealt(id number,vid number,pid number);
insert into product_dealt values(1,1,3);
insert into product_dealt values(2,2,4);
insert into product_dealt values(3,3,2);
insert into product_dealt values(4,4,1);
i want report like this
vendor.name Airlines Auto Software Electronics
------------------- ------------- ------- ----------- ----------------
Oracle Y
Ford Y
Lufthansa Y
ACC Shipping
March 26, 2008 - 10:04 am UTC
I found your data uninteresting - there were no cases where a vendor had two things.
you have none where a vendor has no things.
you have two vendors with the same primary key.
you have no integrity defined.
I hope (pray) you do not have a table like product_dealt in real life - what is the point of that ID column? You need no surrogate key there at all; it would be a bad thing.
so, fixing your data, making it interesting:
ops$tkyte%ORA10GR2> select v.vid, v.name, p.description,
2 ( decode( p.description, 'Airlines', 'y' )) airlines,
3 ( decode( p.description, 'Auto', 'y' )) auto,
4 ( decode( p.description, 'Software', 'y' ) )software,
5 ( decode( p.description, 'Electronics', 'y' )) Electronics
6 from vendor v, product p, product_dealt a
7 where v.vid = a.vid(+)
8 and p.pid(+) = a.pid
9 order by v.name
10 /
VID NAME DESCRIPTION A A S E
---------- ------------------------------ -------------------- - - - -
5 ACC Shipping
3 Ford Auto y
4 Lufthansa Airlines y
1 Oracle Software y
2 Sony Electronics y
2 Sony Software y
6 rows selected.
ops$tkyte%ORA10GR2> select v.name,
2 max( decode( p.description, 'Airlines', 'y' )) airlines,
3 max( decode( p.description, 'Auto', 'y' )) auto,
4 max( decode( p.description, 'Software', 'y' ) )software,
5 max( decode( p.description, 'Electronics', 'y' )) Electronics
6 from vendor v, product p, product_dealt a
7 where v.vid = a.vid(+)
8 and p.pid(+) = a.pid
9 group by v.name
10 order by v.name
11 /
NAME A A S E
------------------------------ - - - -
ACC Shipping
Ford y
Lufthansa y
Oracle y
Sony y y
ORA-01427: single-row subquery returns more than one row
bonnie, April 01, 2008 - 2:58 pm UTC
Hi Tom,
Without going into PL/SQL, I'm trying to get a sequence as follows but ran into the error "ORA-01427: single-row subquery returns more than one row". Is there any way to do this in SQL? I'm trying to insert roughly 2 million rows, so I'm trying to stay away from PL/SQL if possible.
Your help is greatly appreciated.
CREATE TABLE PRODUCTS
(
PRODUCT_ID INTEGER,
PRODUCT_NM VARCHAR2(50 BYTE),
PRODUCT_CATEGORY VARCHAR2(30 BYTE),
customer_id integer
);
CREATE TABLE CUSTOMER
(
CUSTOMER_ID INTEGER,
CUSTOMER_NM VARCHAR2(50 BYTE),
CUSTOMER_ADDRESS VARCHAR2(100 BYTE)
);
INSERT INTO PRODUCTS
VALUES (1, 'P1','C1', 1);
INSERT INTO PRODUCTS
VALUES (2, 'P2','C2',1);
INSERT INTO PRODUCTS
VALUES (3, 'P3','C3',2);
INSERT INTO PRODUCTS
VALUES (4, 'P4','C4',3);
INSERT INTO PRODUCTS
VALUES (5, 'P5','C5',4);
INSERT INTO PRODUCTS
VALUES (6, 'P6','C6',4);
INSERT INTO CUSTOMER
VALUES(1, 'TEST1','ADD1');
INSERT INTO CUSTOMER
VALUES(2, 'TEST1','ADD1' );
INSERT INTO CUSTOMER
VALUES(3, 'TEST1','ADD1' );
INSERT INTO CUSTOMER
VALUES(4, 'TEST2','ADD2' );
SELECT rownum , c.CUSTOMER_ID, c.CUSTOMER_NM , p.PRODUCT_NM,
(select rownum item_seq
from PRODUCTS
where customer_ID = p.customer_ID) item_seq
FROM CUSTOMER C, PRODUCTS p
where c.customer_ID = p.customer_ID
order by CUSTOMER_ID, p.PRODUCT_ID;
error:
(select rownum item_seq
*
ERROR at line 2:
ORA-01427: single-row subquery returns more than one row
April 01, 2008 - 9:03 pm UTC
ummm, why are you not just using a sequence one might ask????
I'm not at all sure what you are trying to do here - instead of posting code that does not work post text that describes what you are trying to do.
if:
(select rownum item_seq
from PRODUCTS
where customer_ID = p.customer_ID)
returned more than one record - well, what do you expect to have happen????
why not just use SEQUENCES??????????????????
I think - guessing here - intuition - you might be trying to create a number from 1 to N for each product a customer bought...
that might be:
SELECT rownum, x.*
from (
select c.CUSTOMER_ID, c.CUSTOMER_NM , p.PRODUCT_NM, row_number() over (partition by customer_id order by product_nm) item_seq
FROM CUSTOMER C, PRODUCTS p
where c.customer_ID = p.customer_ID
order by CUSTOMER_ID, p.PRODUCT_ID
) x
;
bonnie, April 01, 2008 - 3:39 pm UTC
More info for above problem:
SELECT rownum , c.CUSTOMER_ID, c.CUSTOMER_NM , p.PRODUCT_NM,
(select count(*)
from PRODUCTS
where customer_ID = p.customer_ID) total_item
FROM CUSTOMER C, PRODUCTS p
where c.customer_ID = p.customer_ID
order by CUSTOMER_ID, p.PRODUCT_ID;
ROWNUM CUSTOMER_ID CUSTOMER_NM PRODUCT_NM TOTAL_ITEM
1 1 TEST1 P1 2
2 1 TEST1 P2 2
3 2 TEST1 P3 1
4 3 TEST1 P4 1
5 4 TEST2 P5 2
6 4 TEST2 P6 2
The TOTAL_seq is what I'm hoping to get:
ROWNUM CUSTOMER_ID CUSTOMER_NM PRODUCT_NM TOTAL_ITEM TOTAL_seq
1 1 TEST1 P1 2 1
2 1 TEST1 P2 2 2
3 2 TEST1 P3 1 1
4 3 TEST1 P4 1 1
5 4 TEST2 P5 2 1
6 4 TEST2 P6 2 2
April 01, 2008 - 9:04 pm UTC
see above.
bonnie, April 02, 2008 - 10:15 am UTC
This is exactly what I want! Thank you so much!
bonnie, April 07, 2008 - 3:05 pm UTC
Hi Tom,
You've been a tremendous help for me with my SQL/PL-SQL. I have another problem I want to see if it's possible to tackle using SQL only.
Suppose certain products can be standalone while others cannot. Is there a way that I can flag the first occurrence of a non-standalone, using the above query as an example, for each customer?
alter table PRODUCTS
add stand_alone_prod varchar2(1)
update PRODUCTS
set stand_alone_prod = 'N'
WHERE PRODUCT_ID IN (2,4);
alter table products
rename column main_prod to stand_alone_prod
SELECT rownum, x.*
from (
select c.CUSTOMER_ID, c.CUSTOMER_NM , p.PRODUCT_NM, row_number() over (partition by c.customer_id
order by product_nm) item_seq, stand_alone_prod
FROM CUSTOMER C, PRODUCTS p
where c.customer_ID = p.customer_ID
order by CUSTOMER_ID, p.PRODUCT_ID
) x
ROWNUM CUSTOMER_ID CUSTOMER_NM PRODUCT_NM ITEM_SEQ STAND_ALONE_PROD
1 1 TEST1 P1 1
2 1 TEST1 P2 2 N
3 2 TEST2 P3 1
4 3 TEST3 P4 1 N
5 4 TEST4 P5 1
6 4 TEST4 P6 2
7 5 TEST5 P2 1 N
8 5 TEST5 P3 2
9 5 TEST5 P4 3 N
WANT 1ST_standAlone column :
ROWNUM CUSTOMER_ID CUSTOMER_NM PRODUCT_NM ITEM_SEQ STAND_ALONE_PROD 1ST_standAlone
1 1 TEST1 P1 1 Y
2 1 TEST1 P2 2 N
3 2 TEST2 P3 1 Y
4 3 TEST3 P4 1 N
5 4 TEST4 P5 1 Y
6 4 TEST4 P6 2
7 5 TEST5 P2 1 N
8 5 TEST5 P3 2 Y
9 5 TEST5 P4 3 N
Thank you again for your invaluable help.
April 09, 2008 - 9:22 am UTC
I don't piece things together - I don't see the data that would make this result set. I need the create tables and inserts - everything, as small as possible. The data from above does not create the result set shown here.
it generated:
ops$tkyte%ORA10GR2> SELECT rownum, x.*
2 from (
3 select c.CUSTOMER_ID, c.CUSTOMER_NM , p.PRODUCT_NM, row_number() over (partition by c.customer_id
4 order by product_nm) item_seq, stand_alone_prod
5 FROM CUSTOMER C, PRODUCTS p
6 where c.customer_ID = p.customer_ID
7 order by CUSTOMER_ID, p.PRODUCT_ID
8 ) x
9 /
ROWNUM CUSTOMER_ID CUSTOMER_NM PRODUCT_NM ITEM_SEQ S
---------- ----------- --------------- --------------- ---------- -
1 1 TEST1 P1 1
2 1 TEST1 P2 2 N
3 2 TEST1 P3 1
4 3 TEST1 P4 1 N
5 4 TEST2 P5 1
6 4 TEST2 P6 2
6 rows selected.
Bonnie, April 09, 2008 - 10:19 am UTC
Sorry Tom,
I added a new customer and the script was not included:
Please run the following:
CREATE TABLE PRODUCTS
(
PRODUCT_ID INTEGER,
PRODUCT_NM VARCHAR2(50 BYTE),
PRODUCT_CATEGORY VARCHAR2(30 BYTE),
CUSTOMER_ID INTEGER,
STAND_ALONE_PROD VARCHAR2(1 BYTE)
)
TABLESPACE QSHI_SRC;
CREATE TABLE CUSTOMER
(
CUSTOMER_ID INTEGER,
CUSTOMER_NM VARCHAR2(50 BYTE),
CUSTOMER_ADDRESS VARCHAR2(100 BYTE)
)
TABLESPACE QSHI_SRC;
INSERT INTO CUSTOMER ( CUSTOMER_ID, CUSTOMER_NM, CUSTOMER_ADDRESS ) VALUES (
1, 'TEST1', 'ADD1');
INSERT INTO CUSTOMER ( CUSTOMER_ID, CUSTOMER_NM, CUSTOMER_ADDRESS ) VALUES (
2, 'TEST2', 'ADD2');
INSERT INTO CUSTOMER ( CUSTOMER_ID, CUSTOMER_NM, CUSTOMER_ADDRESS ) VALUES (
3, 'TEST3', 'ADD3');
INSERT INTO CUSTOMER ( CUSTOMER_ID, CUSTOMER_NM, CUSTOMER_ADDRESS ) VALUES (
4, 'TEST4', 'ADD4');
INSERT INTO CUSTOMER ( CUSTOMER_ID, CUSTOMER_NM, CUSTOMER_ADDRESS ) VALUES (
5, 'TEST5', 'ADD5');
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
1, 'P1', 'C1', 1, NULL);
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
2, 'P2', 'C2', 1, 'N');
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
3, 'P3', 'C3', 2, NULL);
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
4, 'P4', 'C4', 3, 'N');
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
5, 'P5', 'C5', 4, NULL);
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
6, 'P6', 'C6', 4, NULL);
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
2, 'P2', 'C2', 5, 'N');
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
4, 'P4', 'C4', 5, 'N');
INSERT INTO PRODUCTS ( PRODUCT_ID, PRODUCT_NM, PRODUCT_CATEGORY, CUSTOMER_ID,
STAND_ALONE_PROD ) VALUES (
3, 'P3', 'C3', 5, NULL);
COMMIT;
I tried using BULK COLLECT INTO with the limit set to 100 and a FOR loop to assign this value, but at 100 rows the break sometimes happens in the middle of a customer, which breaks the logic in the loop (the logic depends on a complete customer set). So I'm trying to see if it's possible to get this in a single SQL statement.
Thank you again for your help.
bonnie, April 22, 2008 - 6:14 pm UTC
Hi Tom,
I found the answer. I know it looks clumsy, but this is the effect I was after. I was using PL/SQL to process this in the past, but due to the growth in data the entire process took over 12 hours. By taking your advice to use SQL where possible, I was able to speed the entire process up to under an hour. Thank you for the push and your expert advice.
select CUSTOMER_ID, CUSTOMER_NM, PRODUCT_NM,item_seq,
(CASE WHEN stand_alone_prod = 'Y' AND ROWCNT = IS_StandAloneProd THEN
'Y' ELSE 'N' END) is_initial_loc
from (
SELECT ROWCNT, CUSTOMER_ID, CUSTOMER_NM, PRODUCT_NM,item_seq,stand_alone_prod,
max(ROWCNT) KEEP (DENSE_RANK FIRST
ORDER BY CUSTOMER_ID, PRODUCT_NM) OVER (PARTITION BY CUSTOMER_ID, stand_alone_prod) IS_StandAloneProd
from (
select rownum ROWCNT ,c.CUSTOMER_ID, c.CUSTOMER_NM , p.PRODUCT_NM, row_number() over (partition by c.customer_id
order by product_nm) item_seq, nvl( stand_alone_prod,'Y' ) stand_alone_prod
FROM CUSTOMER C, PRODUCTS p
where c.customer_ID = p.customer_ID
order by CUSTOMER_ID, p.PRODUCT_ID
) x)
order by customer_Id, PRODUCT_NM, item_seq
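For readers checking their own results: the "flag only the first stand-alone row per customer" effect of the KEEP (DENSE_RANK FIRST) query above can be sketched outside the database. A minimal Python equivalent (hypothetical helper, using a cut-down version of the thread's data):

```python
# Flag the first stand-alone product ('Y') per customer, mirroring the
# KEEP (DENSE_RANK FIRST) trick above: only the earliest 'Y' row within
# each customer's ordered product list keeps 'Y'; every other row gets 'N'.
def flag_first_standalone(rows):
    """rows: list of (customer_id, product_nm, stand_alone) tuples,
    already ordered by customer_id, product_nm."""
    seen = set()       # customers whose first 'Y' has been emitted
    out = []
    for cust, prod, flag in rows:
        if flag == 'Y' and cust not in seen:
            out.append((cust, prod, 'Y'))
            seen.add(cust)
        else:
            out.append((cust, prod, 'N'))
    return out
```

This is only a verification aid; inside the database the single-pass analytic SQL above remains the faster approach, as bonnie's 12-hours-to-under-one-hour result shows.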
ROOPA, April 30, 2008 - 8:25 am UTC
table1 format
MONTH AMOUNT_PAID
01.12.2006 00:00:00 5395
01.11.2006 00:00:00 567
01.11.2006 00:00:00 1974
01.04.2007 00:00:00 2462
01.04.2007 00:00:00 1974
01.11.2006 00:00:00 5395
01.02.2008 00:00:00 5395
table2 format
MONTH JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
01-DEC-2006 0 0 0 0 0 0 0 0 0 0 0 5395
01-FEB-2007 0 5395 0 0 0 0 0 0 0 0 0 0
01-NOV-2006 0 0 0 0 0 0 0 0 0 0 5395 0
in table1 the data is stored by year, whereas in table2 the data is broken out by month.
how can I convert the table1 data format into the table2 data format, i.e. yearly data to monthly data?
April 30, 2008 - 10:55 am UTC
look at the other place you posted this same exact thing.
Scalar Subquery
Robert, May 07, 2008 - 12:01 pm UTC
Tom,
I'm trying to get this (simple) question under the wire of 'Scalar Subquery' (discussed above and referenced elsewhere), and SQL Query (which is the original topic)
Can you please explain why/how this query works!?
Oracle 9.2.0.8
create table a (a_col number);
create table b (b_col number);
insert into a values (1);
insert into b values (2);
select *
from a
where a_col in (select a_col from b);
This query returns value from table a !!?!
"select a_col from b" .... THERE IS NO b.a_col !?!?
But when you select a column from 'b' that doesn't exist in 'a'... THEN you get an error.
What is going on here?
Does this have to do with the new join syntax?
Thanks!
Robert.
May 08, 2008 - 3:26 am UTC
you do not have a scalar subquery, you have a correlated subquery
select *
from a
where a_col in (select a_col from b);
is the same as:
select a.a_col
from a
where a.a_col in (select <b>a.a_col</b> from b);
Just like you can:
select *
from dept
where dept.deptno in (select emp.deptno
from emp
where emp.deptno = <b>dept.deptno</b>
and emp.job = 'CLERK' )
the outer queries "row" is visible in the correlated subquery - usually we use that row in the "where" clause, but it is valid anywhere in the correlated subquery, including the select list.
This is why it is never a BAD IDEA to use correlation names and fully qualify column references...
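This name-resolution rule (unqualified columns resolve against the subquery's tables first, then fall back to the outer query's row) is standard SQL, not Oracle-specific. A quick sqlite3 check of exactly the behaviour Robert observed:

```python
import sqlite3

# SQLite resolves unqualified column names the same way Oracle does:
# a_col is not a column of b, so inside the subquery it correlates
# back to the outer table a.
conn = sqlite3.connect(":memory:")
conn.execute("create table a (a_col integer)")
conn.execute("create table b (b_col integer)")
conn.execute("insert into a values (1)")
conn.execute("insert into b values (2)")

rows = conn.execute(
    "select * from a where a_col in (select a_col from b)"
).fetchall()
# every row of a matches, because for each outer row the subquery
# yields that row's own a_col - just as Tom describes above
```

The moral is the same as Tom's: always qualify column references with correlation names, and typos like this surface as errors instead of silently "working".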
Sub-Query
Carl Nylander, May 09, 2008 - 11:22 am UTC
Considering a table with columns:
recId NUMBER
version NUMBER
year NUMBER
and rows such as
1, 1, 2008
2, 2, 2008
3, 1, 2009
To only retrieve the highest version per year, such as:
2, 2, 2008 AND
3, 1, 2009
I am having to accomplish this with PL/SQL using several cursors. Is there a way to accomplish this with a single SQL statement?
May 12, 2008 - 11:57 am UTC
sure be nice to have a create table
and insert into's...
yes, there are many ways to do this in a single sql statement - analytics, aggregates, IN subqueries - those three immediately pop into mind - writing code is about the furthest idea in my mind.
ops$tkyte%ORA10GR2> select *
2 from t
3 where (version,year) in (select max(version), year from t group by year)
4 /
RECID VERSION YEAR
---------- ---------- ----------
2 2 2008
3 1 2009
ops$tkyte%ORA10GR2> select *
2 from (
3 select recid, version, year, max(version) over (partition by year) max_ver
4 from t
5 )
6 where version = max_ver
7 /
RECID VERSION YEAR MAX_VER
---------- ---------- ---------- ----------
2 2 2008 2
3 1 2009 1
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select *
2 from t t1
3 where version = (select max(version) from t t2 where t2.year = t1.year)
4 /
RECID VERSION YEAR
---------- ---------- ----------
2 2 2008
3 1 2009
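The third (correlated) form is the easiest to reason about procedurally: keep, per year, the row with the highest version. A small Python sketch of that logic (a verification helper, not part of the thread):

```python
# Highest version per year - the same result as the three SQL forms
# above (IN-pair subquery, analytic MAX OVER, correlated subquery).
def max_version_per_year(rows):
    """rows: iterable of (rec_id, version, year) tuples."""
    best = {}  # year -> winning (rec_id, version, year) row
    for rec_id, version, year in rows:
        if year not in best or version > best[year][1]:
            best[year] = (rec_id, version, year)
    return sorted(best.values(), key=lambda r: r[2])
```

In the database itself, any of Tom's three single-statement forms beats opening several cursors in PL/SQL.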
query
A reader, May 12, 2008 - 12:08 pm UTC
Thanks Tons....
olerag, May 12, 2008 - 2:04 pm UTC
Oops, there goes the bell. Back into Kindergarten for me.
gayathri, July 15, 2008 - 1:10 pm UTC
Hi,
I have a problem.
I have to insert my values into 3 different tables. Among these tables a few columns will have the same value. This has to be written as a single query. I have no clue how to start it - please help me.
July 15, 2008 - 8:09 pm UTC
neither do I because I have nothing to work with here
no table creates
no data to be inserted
no rules to insert it
no reasoning as to why you would duplicate data which is wrong, wrong wrong....
To : gayathri from Chennai, INDIA
Rajaram Subramanian, July 16, 2008 - 10:03 am UTC
ask query
sulabh, August 01, 2008 - 10:07 am UTC
how to show the table name with its content
A reader, August 27, 2008 - 12:21 am UTC
Hi Tom,
I've a table as
CREATE TABLE "TMP_TBL"
( "PROD_ID" NUMBER(9,0),
"ONDATE" DATE,
"STARTTIME" NUMBER(9,0),
"ENDTIME" NUMBER(9,0),
"CATEGORIES" NUMBER(9,0)
)
and the data insert as
insert into tmp_tbl values(4571, DATE '2007-01-01', 865800, 918000, 500010666);
insert into tmp_tbl values(4570, DATE '2007-02-01', 919800, 972000, 500000459);
insert into tmp_tbl values(4572, DATE '2007-02-12', 973800, 1026000, 500000333);
insert into tmp_tbl values(4572, DATE '2007-03-11', 1027800, 1080000, 500100459);
insert into tmp_tbl values(4570, DATE '2007-03-31', 1243800, 1296000, 500000459);
insert into tmp_tbl values(4573, DATE '2007-04-11', 864000, 1296017, 500000452);
insert into tmp_tbl values(4573, DATE '2007-04-15', 1189800, 1242000, 500004459);
insert into tmp_tbl values(4573, DATE '2007-04-22', 1135800, 1188000, 500000159);
insert into tmp_tbl values(4570, DATE '2007-05-11', 1081800, 1134000, 500000479);
insert into tmp_tbl values(4576, DATE '2007-05-21', 1188000, 1189800, 500000459);
insert into tmp_tbl values(4576, DATE '2007-06-01', 1620000, 1621800, 522000459);
insert into tmp_tbl values(4578, DATE '2007-06-11', 1728000, 1729800, 500000459);
insert into tmp_tbl values(4578, DATE '2007-06-17', 1512000, 1513800, 500400454);
commit;
I have to do a report for a given period, between '2007-01-01' and '2007-06-30'.
The layout should be a fixed format for the whole year 2007, as:
Months total hour
------ ----------
September 2007 0
October 2007 0
November 2007 0
December 2007 0
January 2007 2
February 2007 5
March 2007 4
April 2007 11
May 2007 4
June 2007 15
July 2007 0
August 2007 0
How can I include these months (July through December, as they're outside the given date range) in the Months column with a simple query?
Thanks.
T
August 27, 2008 - 8:34 am UTC
why do you start with september? does it always start with september?
will it always be a single year? is the input really "2007" or a date range, if it is a date range and you always want a year, what would be displayed if the input data set had data for say september in this example?
and how do you use your start/end times to compute time? What unit are they in?
A reader, August 27, 2008 - 10:43 am UTC
Hi Tom,
Sorry, my previous message was not clear enough.
The user said that he wants the report laid out as a whole-year cycle (starting from September of the previous year through August of the current year), and the report may be run for a period of two consecutive years.
Therefore the months are concatenated with the year, in case the report runs across two years.
If the date range entered is less than a year, the report should show the data that falls into the given date range and display zero for the rest of the months outside it (e.g. a date range between '2007-01-01' and '2007-06-30').
The hours will be calculated as round((EndTime - StartTime)/108000), where 108000 = 30*60*60.
BTW, the hours in my previous message were just a sample; I had not applied the formula yet ... :)
Please let me know if my explanation is still confusing .... :)
Thanks,
T.
August 28, 2008 - 9:33 am UTC
your dates still confuse me.
Months total hour
------ ----------
September 2007 0
October 2007 0
November 2007 0
December 2007 0
January 2007 2
February 2007 5
March 2007 4
April 2007 11
May 2007 4
June 2007 15
July 2007 0
August 2007 0
look at the ordering there - did you really mean "sept 2006", NOT 2007???
is this your fiscal year? Eg: if the inputs were:
ops$tkyte%ORA10GR2> variable sd varchar2(30)
ops$tkyte%ORA10GR2> variable ed varchar2(30)
ops$tkyte%ORA10GR2> alter session set nls_date_format = 'yyyy-mm';
Session altered.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec :sd := '2007-01'; :ed := '2007-06';
PL/SQL procedure successfully completed.
wouldn't you expect the following date range?
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with date_range
2 as
3 (
4 select
5 case when to_char(sd,'mm') = '09' then sd
6 when to_number(to_char(sd,'mm')) < 9 then add_months(trunc(sd,'y'),-4)
7 else add_months(trunc(sd,'y'),8)
8 end new_sd,
9 case when to_char(ed,'mm') = '08' then ed
10 when to_number(to_char(ed,'mm')) < 8 then add_months(trunc(ed,'y'),7)
11 else add_months(trunc(ed,'y'),19)
12 end new_ed
13 from (select to_date(:sd,'yyyy-mm') sd , to_date(:ed,'yyyy-mm') ed from dual)
14 ),
15 data as
16 (select add_months(new_sd,level-1) dt
17 from date_range
18 connect by level <= months_between(new_ed,new_sd)+1
19 )
20 select to_char(dt,'fmMonth YYYY') the_date from data
21 /
THE_DATE
--------------
September 2006
October 2006
November 2006
December 2006
January 2007
February 2007
March 2007
April 2007
May 2007
June 2007
July 2007
August 2007
12 rows selected.
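The CASE logic in that date_range block ("snap the start back to September, the end forward to August") can be restated procedurally. A Python sketch of the same fiscal-year expansion (an illustrative helper mirroring the SQL, not part of the original answer):

```python
from datetime import date

# Snap an arbitrary (start, end) month range onto the September-August
# fiscal year containing it, then list every month in between -
# mirroring the date_range / connect-by-level blocks in the SQL above.
def fiscal_months(sd, ed):
    # start: September of the previous year, unless sd is already Sep-Dec
    new_sd = date(sd.year if sd.month >= 9 else sd.year - 1, 9, 1)
    # end: August of the same year, unless ed is Sep-Dec (then next year)
    new_ed = date(ed.year + 1 if ed.month >= 9 else ed.year, 8, 1)
    months, y, m = [], new_sd.year, new_sd.month
    while (y, m) <= (new_ed.year, new_ed.month):
        months.append(date(y, m, 1))
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
    return months
```

For the inputs 2007-01 through 2007-06, this produces the same September 2006 through August 2007 list as the query above.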
I think this is what the reader wanted
Matt, August 28, 2008 - 11:23 am UTC
Start with some date table and insert the range of dates. Most applications have a date table like this to use anyway:
create table dt_tbl ( dt date );
insert into dt_tbl values (to_date('2006-09','yyyy-mm' ));
insert into dt_tbl values (to_date('2006-10','yyyy-mm' ));
insert into dt_tbl values (to_date('2006-11','yyyy-mm' ));
insert into dt_tbl values (to_date('2006-12','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-01','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-02','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-03','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-04','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-05','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-06','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-07','yyyy-mm' ));
insert into dt_tbl values (to_date('2007-08','yyyy-mm' ));
commit;
Then the query:
select b.disp_date as the_date, NVL(a.cnt,0) as the_count from
( select to_char(ondate,'Month YYYY') disp_date, count(*) cnt, to_char(ondate,'yyyy-mm') sort_date
from tmp_tbl group by to_char(ondate,'Month YYYY'), to_char(ondate,'yyyy-mm') ) a,
( select to_char(dt,'Month YYYY') disp_date, to_char(dt,'yyyy-mm') sort_date from dt_tbl ) b
where a.sort_date(+) = b.sort_date
order by b.sort_date;
Returns:
THE_DATE THE_COUNT
--------------- ----------
September 2006 0
October 2006 0
November 2006 0
December 2006 0
January 2007 1
February 2007 2
March 2007 2
April 2007 3
May 2007 2
June 2007 3
July 2007 0
August 2007 0
12 rows selected.
August 29, 2008 - 10:31 pm UTC
why, when you can generate it?
Without having to create and maintain data - right above, I showed how to generate it.
A reader, August 28, 2008 - 11:54 am UTC
And again ... it was my mistake on the date range ... it should be between September 2006 and August 2007 ... :)
Thanks a lot Tom ... that is exactly the format I'm looking for ...
Thanks Matt ... for the tips.
But the date range must be used within the period entered by the user.
Thanks again both of you.
T.
August 29, 2008 - 10:48 pm UTC
so, take my query, outer join to it - you are done - don't you see how you can get there from here?
I output the dates based on the user date range - it should be pretty "trivial" from here?
ops$tkyte%ORA10GR2> select trunc(ondate,'mm') dt, sum( (endtime-starttime)/108000 ) hrs
2 from tmp_tbl
3 where ondate between to_date(:sd,'yyyy-mm') and to_date(:ed,'yyyy-mm')
4 group by trunc(ondate,'mm')
5 order by trunc(ondate,'mm')
6 /
DT HRS
------- ----------
2007-01 .483333333
2007-02 .966666667
2007-03 .966666667
2007-04 4.96682407
2007-05 .5
2007-06 .016666667
6 rows selected.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with date_range
2 as
3 (
4 select
5 case when to_char(sd,'mm') = '09' then sd
6 when to_number(to_char(sd,'mm')) < 9 then add_months(trunc(sd,'y'),-4)
7 else add_months(trunc(sd,'y'),8)
8 end new_sd,
9 case when to_char(ed,'mm') = '08' then ed
10 when to_number(to_char(ed,'mm')) < 8 then add_months(trunc(ed,'y'),7)
11 else add_months(trunc(ed,'y'),19)
12 end new_ed
13 from (select to_date(:sd,'yyyy-mm') sd , to_date(:ed,'yyyy-mm') ed from dual)
14 ),
15 data as
16 (select add_months(new_sd,level-1) dt
17 from date_range
18 connect by level <= months_between(new_ed,new_sd)+1
19 )
20 select data.dt the_date , nvl( round(sum(t.hrs)), 0 ) hours
21 from data,
22 (select trunc(ondate,'mm') dt, (endtime-starttime)/108000 hrs
23 from tmp_tbl
24 where ondate between to_date(:sd,'yyyy-mm') and to_date(:ed,'yyyy-mm')) t
25 where data.dt = t.dt(+)
26 group by data.dt
27 order by data.dt
28 /
THE_DAT HOURS
------- ----------
2006-09 0
2006-10 0
2006-11 0
2006-12 0
2007-01 0
2007-02 1
2007-03 1
2007-04 5
2007-05 1
2007-06 0
2007-07 0
2007-08 0
12 rows selected.
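The final step of that query - outer-join the generated month list to the aggregated hours so missing months show as zero - is easy to check by hand. A minimal Python analogue (hypothetical helper, sample figures only):

```python
# Fill a month-by-month report with zeros for months that have no data:
# the Python analogue of outer-joining the generated month list to the
# aggregated tmp_tbl hours in the SQL above.
def monthly_report(months, hours_by_month):
    """months: ordered list of 'YYYY-MM' keys covering the fiscal year;
    hours_by_month: dict mapping 'YYYY-MM' to total (fractional) hours."""
    return [(m, round(hours_by_month.get(m, 0))) for m in months]
```

The .get(m, 0) default plays the role of the NVL() around the outer-joined sum.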
A reader, September 02, 2008 - 5:58 am UTC
CREATE TABLE USER_APPOINTMENT
(
APPOINTMENT_ID NUMBER(10),
APPOINTMENT_DATE DATE,
START_TIME VARCHAR2(8),
END_TIME VARCHAR2(8)
);
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
1, TO_Date( '09/03/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0830', '0840');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
3, TO_Date( '01/01/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0830', '0810');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
4, TO_Date( '09/03/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0850', '0900');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
5, TO_Date( '09/04/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0830', '0840');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
2, TO_Date( '09/03/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0840', '0850');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
6, TO_Date( '09/04/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0850', '0900');
COMMIT;
We can give appointment from 0800 and 0900 on each date want the output of fully booked dates and free times on available dates.
September 02, 2008 - 11:42 am UTC
this is an ill-formed 'question'. You would actually need to be a tad more verbose before any answer could be given.
"we can give appointment from 0800 and 0900 on each date" - I don't know what that means - not at all.
same for the rest of the sentence.
I can say your method of storing the start/end times is 100% the wrong way to do it.
You would either
a) have just two fields - start_datetime, end_datetime of type date or timestamp (not three as you have now with the date/string and string - what is UP with that STRING - stop that, use the right types)
b) have just two fields - start_datetime, duration of type date and interval
as it is - your data model is wrong - do not store the data in this fashion.
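Tom's option (a) in concrete terms - a quick Python sketch (illustrative values only) of why two real datetime fields beat a date plus time strings: the duration falls out of plain subtraction, with no string parsing:

```python
from datetime import datetime, timedelta

# One pair of real datetime values instead of a DATE column plus two
# VARCHAR2 time strings; arithmetic and comparisons then just work.
start = datetime(2008, 9, 3, 8, 30)
end = datetime(2008, 9, 3, 8, 40)
duration = end - start  # a timedelta, no to_char/to_date gymnastics
```

The same holds in the database: with start_datetime and end_datetime as DATE (or TIMESTAMP) columns, end - start is a directly usable interval.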
A reader, September 03, 2008 - 4:23 am UTC
CREATE TABLE USER_APPOINTMENT
(
APPOINTMENT_ID NUMBER(10),
APPOINTMENT_DATE DATE,
START_TIME VARCHAR2(8),
END_TIME VARCHAR2(8)
);
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
1, TO_Date( '09/03/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0830', '0840');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
4, TO_Date( '09/03/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0850', '0900');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
5, TO_Date( '09/04/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0830', '0840');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
2, TO_Date( '09/03/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0840', '0850');
INSERT INTO USER_APPOINTMENT ( APPOINTMENT_ID, APPOINTMENT_DATE, START_TIME,
END_TIME ) VALUES (
6, TO_Date( '09/04/2008 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), '0850', '0900');
COMMIT;
we can give appointment from 0800 and 0900 on each date means that on each day appointment can only be given between 0800 am and 0900 am. As on date 9/3/2008 all the time is booked therefore this date will be reported as fully booked date. On date 9/4/2008 time from 0840 and 0850 available.
September 03, 2008 - 11:26 am UTC
you know - there is more missing information than supplied information here. I guess I'm supposed to guess "appointments are always 10 minutes each".
And if they are ten minutes always - why bother - WHY BOTHER putting a start and stop?
This table is wrong, it should only have appointment_id and appointment_date - get rid of start and stop time, they are bad, they are wrong.
I will show you how to do this, against a properly stored set of data (a single column is all that is necessary here!)
If you persist in using this incorrectly stored information - you can use the ideas I present to do it yourself (hint, you would basically have to to_char() your date, concatenate the start_time, ignore the end_time and to_date() the resulting concatenation)
Also, you wrote:
we can give appointment from 0800 and 0900 on each date means that on each day
appointment can only be given between 0800 am and 0900 am. As on date 9/3/2008
all the time is booked therefore this date will be reported as fully booked
date. On date 9/4/2008 time from 0840 and 0850 available.
well, from 8am-9am on the 3rd of September, it looks to me that 8:00, 8:10, and 8:20 are free. Same on the 4th. So, either
a) your data supplied is wrong
b) your description of what you want is wrong.
I'll presume the data is correct, the description (8am-9am) is correct and that you meant to say "there are three free on the 3rd and 4 on the 4th"
If that is not correct, use the IDEAS here, the concept and fix it to suit your needs:
ops$tkyte%ORA10GR2> CREATE TABLE t
2 (
3 id NUMBER(10),
4 dt DATE
5 );
Table created.
ops$tkyte%ORA10GR2> INSERt INTO t ( id, dt) VALUES ( 1, TO_Date( '09/03/2008 0830', 'mm/dd/yyyy hhmi' ));
1 row created.
ops$tkyte%ORA10GR2> INSERT INTO t ( id, dt) VALUES ( 2, TO_Date( '09/03/2008 0840', 'mm/dd/yyyy hhmi' ));
1 row created.
ops$tkyte%ORA10GR2> INSERT INTO t ( id, dt) VALUES ( 4, TO_Date( '09/03/2008 0850', 'mm/dd/yyyy hhmi' ));
1 row created.
ops$tkyte%ORA10GR2> INSERT INTO t ( id, dt) VALUES ( 5, TO_Date( '09/04/2008 0830', 'mm/dd/yyyy hhmi' ));
1 row created.
ops$tkyte%ORA10GR2> INSERT INTO t ( id, dt) VALUES ( 6, TO_Date( '09/04/2008 0850', 'mm/dd/yyyy hhmi' ));
1 row created.
ops$tkyte%ORA10GR2> COMMIT;
Commit complete.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select id, to_char(dt,'dd-mon hh24:mi') dt_str from t order by dt;
ID DT_STR
---------- ------------
1 03-sep 08:30
2 03-sep 08:40
4 03-sep 08:50
5 04-sep 08:30
6 04-sep 08:50
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable sdt varchar2(30)
ops$tkyte%ORA10GR2> variable edt varchar2(30)
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec :sdt := '3-sep-2008'; :edt := '4-sep-2008'
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with data
2 as
3 (select to_date( :sdt, 'dd-mon-yyyy') + 8/24 + 10/24/60 * (level-1) theTime
4 from dual
5 connect by level <= 6)
6 select data.theTime, id, case when t.dt is null then 'Free' else 'Booked' end status
7 from data,
8 (select *
9 from t
10 where dt between to_date(:sdt,'dd-mon-yyyy')+8/24 and to_date(:sdt,'dd-mon-yyyy')+9/24 ) t
11 where data.theTime = t.dt(+)
12 order by data.theTime;
THETIME ID STATUS
-------------------- ---------- ------
03-sep-2008 08:00:00 Free
03-sep-2008 08:10:00 Free
03-sep-2008 08:20:00 Free
03-sep-2008 08:30:00 1 Booked
03-sep-2008 08:40:00 2 Booked
03-sep-2008 08:50:00 4 Booked
6 rows selected.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> with data
2 as
3 (select to_date(:sdt,'dd-mon-yyyy')+level-1 theDate
4 from dual
5 connect by level <= to_date(:edt,'dd-mon-yyyy')-to_date(:sdt,'dd-mon-yyyy')+1),
6 times
7 as
8 (select 8/24+10/24/60*(level-1) times
9 from dual
10 connect by level <= 6),
11 full_data
12 as
13 (select theDate + times theTime
14 from data, times)
15 select full_data.theTime, id, case when t.dt is null then 'Free' else 'Booked' end status
16 from full_data,
17 (select *
18 from t
19 where dt between to_date(:sdt,'dd-mon-yyyy')+8/24 and to_date(:edt,'dd-mon-yyyy')+9/24 ) t
20 where full_data.theTime = t.dt(+)
21 order by full_data.theTime;
THETIME ID STATUS
-------------------- ---------- ------
03-sep-2008 08:00:00 Free
03-sep-2008 08:10:00 Free
03-sep-2008 08:20:00 Free
03-sep-2008 08:30:00 1 Booked
03-sep-2008 08:40:00 2 Booked
03-sep-2008 08:50:00 4 Booked
04-sep-2008 08:00:00 Free
04-sep-2008 08:10:00 Free
04-sep-2008 08:20:00 Free
04-sep-2008 08:30:00 5 Booked
04-sep-2008 08:40:00 Free
04-sep-2008 08:50:00 6 Booked
12 rows selected.
first query is for a given DAY
second query is if you have a need for a range of days
A reader, September 03, 2008 - 5:13 am UTC
By the way, at this time the design cannot be changed, as it is already implemented.
September 03, 2008 - 11:44 am UTC
that is not a reason to not correct something. Never was, never will be.
And if it is already implemented and in production - you must be making modifications (else you would not be asking questions about it) and therefore making a corrective change is doable.
Insert, Select and returning clause
Ganesh Ram, November 19, 2008 - 11:20 am UTC
Hi Tom,
I have a procedure like with a
DECLARE
VAR1 dept.dept_head%type:=0;
BEGIN
INSERT INTO EMP (EMP_NAME)
SELECT DEPT_HEAD
FROM DEPT
WHERE ROWNUM<2
returning DEPT_HEAD into var1;
exception
...
end;
this gives me an error: PL/SQL: ORA-00933: SQL command not properly ended. And this update:
UPDATE DCR_TRACK_INFO INF
SET INF.MI_PERFORMANCE_KY = (SELECT MI.PERFORMANCE_KY
FROM MI2.MI_TRACK MI, DCR_TRACK_CATALOG CAT
WHERE MI.TRACK_KY = CAT.MI_TRACK_KY
AND CAT.TERRITORY_KEY = AN_TERRITORY_KY
AND MI.TRACK_KY=TV_EXCE_DET(LN_EX_CNT).ENTITY_KY
AND ROWNUM<2)
WHERE INF.MI_TRACK_KY=TV_EXCE_DET(LN_EX_CNT).ENTITY_KY
RETURNING PERFORMANCE_KY INTO LN_PERF_KEY;
The update statement also gives me an error.
I need your help/clarification in these cases.
Thanks
Ganesh
November 24, 2008 - 10:44 am UTC
a) insert as select with returning - not supported
http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14261/collections.htm#sthref1208
The INSERT, UPDATE, and DELETE statements can include a RETURNING clause, which returns column values from the affected row into a PL/SQL record variable. This eliminates the need to SELECT the row after an insert or update, or before a delete.
By default, you can use this clause only when operating on exactly one row. When you use bulk SQL, you can use the form RETURNING BULK COLLECT INTO to store the results in one or more collections. You can only get more than one row value when using FORALL.
b) no example, why NO EXAMPLE. I don't know what error you get, what you did wrong.
ops$tkyte%ORA10GR2> create table t ( x int );
Table created.
ops$tkyte%ORA10GR2> insert into t values ( null );
1 row created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
2 l_data t.x%type;
3 begin
4 update t set x = (select dbms_random.random from dual) returning x into l_data;
5 dbms_output.put_line( l_data );
6 end;
7 /
2044676981
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> select * from t;
X
----------
2044676981
Executable DB-LS
Charlie MuJiang, January 12, 2009 - 5:44 pm UTC
-- DB ls
select object_type, object_name, subobject_name,
decode( status, 'INVALID', '*', '' ) status,
decode( object_type,
'TABLE', (select tablespace_name
from user_tables
where table_name = object_name),
'TABLE PARTITION', (select tablespace_name
from user_tab_partitions
where table_name = object_name and partition_name = subobject_name),
'INDEX', (select tablespace_name
from user_indexes
where index_name = object_name),
'INDEX PARTITION', (select tablespace_name
from user_ind_partitions
where index_name = object_name and partition_name =
subobject_name),
'LOB', (select tablespace_name
from user_segments
where segment_name = object_name),
null ) tablespace_name
from user_objects a
order by object_type, object_name
;
Complex SQL Query
Monika, March 17, 2009 - 10:20 am UTC
Hi Tom,
I check all section in this thread but was not able to find my answer, request you to please help me out in this
I have two tables as given below
create table t1
(
ID number,
Col1 varchar2(10),
DOM1 varchar2(10),
DOM2 varchar2(10)
);
create table m1
(
Colname varchar2(10),
code varchar2(10),
val varchar2(10)
);
Insert into t1 values (1,'AB1','A','D1');
Insert into t1 values (2,'AB2','B','D2');
Insert into t1 values (3,'AB3','C','D3');
Insert into t1 values (4,'AB4','C','D1');
Insert into t1 values (5,'AB5','B','D3');
Insert into m1 values('DOM1','A','AAAA');
Insert into m1 values('DOM1','B','BBBB');
Insert into m1 values('DOM1','C','CCCC');
Insert into m1 values('DOM2','D1','D111');
Insert into m1 values('DOM2','D2','D222');
Insert into m1 values('DOM2','D3','D333');
Basically second table m1 is a master table having all the possible values of that DOM1,DOM2 columns in T1 can have
Using a single query (aim is to create a view of this) I want below output
SQL> select * from t1; -- values in t1 being replaced by corresponding values in m1
ID COL1 DOM1 DOM2
-------------------------------
1 AB1 AAAA D111
2 AB2 BBBB D222
3 AB3 CCCC D333
4 AB4 CCCC D111
5 AB5 BBBB D333
I know i can use either a function or decode to do this but is there any other way to do so?
March 17, 2009 - 11:00 am UTC
just join??
ops$tkyte%ORA10GR2> select t1.id, t1.col1, a.val dom1, b.val dom2
2 from t1, m1 a, m1 b
3 where t1.dom1 = a.code and a.colname = 'DOM1'
4 and t1.dom2 = b.code and b.colname = 'DOM2'
5 /
ID COL1 DOM1 DOM2
---------- ---------- ---------- ----------
1 AB1 AAAA D111
2 AB2 BBBB D222
3 AB3 CCCC D333
4 AB4 CCCC D111
5 AB5 BBBB D333
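The double self-join of m1 is just two keyed lookups per row. A Python sketch of the same decode (an illustrative helper using the thread's sample values):

```python
# Replace code columns with their descriptions from a single master
# table - the same shape as joining m1 twice (aliases a and b) above,
# filtering each alias to its own colname.
def decode_rows(rows, master):
    """rows: (id, col1, dom1, dom2) tuples; master: dict keyed by
    (colname, code) -> val, i.e. the m1 table."""
    return [(i, c, master[('DOM1', d1)], master[('DOM2', d2)])
            for i, c, d1, d2 in rows]
```

Note this, like Tom's inner join, drops rows whose code is missing from the master; use an outer join (or dict.get with a default) if unmatched codes must survive.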
olerag, March 22, 2009 - 12:24 pm UTC
Considering a table ...
create table myTable (
a number,
b number,
c varchar2(5)
);
With the following rows....
insert into myTable(a,b,c)
values(1,1,'Hello ');
insert into myTable(a,b,c)
values(2,1,'World');
insert into myTable(a,b,c)
values(3,2,'Toms ');
insert into myTable(a,b,c)
values(4,2,'Cool');
Using SQL only, I would like to see these results...
1 Hello World
2 Toms Cool
Instead of......
1 Hello
1 World
2 Toms
2 Cool
(ignoring, of course, the display of the PK column (a) but using it on the "order by")....
So, grouping the corresponding number column (b) and concatenating the associated string column (c) in order to display (in this case) only two rows is what I'm after. Also, this concept must have a functional name - what would you call it?
This is no problem when I utilize a "helping" pl/sql function. However is this possible using only SQL?
March 24, 2009 - 11:02 am UTC
ops$tkyte%ORA10GR2> select b, replace( substr(max(sys_connect_by_path( c, '|' )),2), '|', ' ' ) scbp
2 from (
3 select b, c, row_number() over (partition by b order by c) rn
4 from myTable
5 )
6 start with rn = 1 connect by prior rn+1 = rn and prior b = b
7 group by b
8 /
B SCBP
---------- -------------------------
1 Hello World
2 Cool Toms
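What the sys_connect_by_path trick achieves - string aggregation per group, in a release without LISTAGG - reads much more plainly as procedural code. A Python sketch of the equivalent (an illustrative helper; pieces are sorted within each group, matching the query's order-by-c):

```python
# Group-concatenate strings per key: the effect of the
# sys_connect_by_path / connect-by-rn query above.
def string_agg(rows):
    """rows: (group_key, text) pairs; returns a dict mapping each key
    to its pieces joined with spaces, sorted within the group."""
    groups = {}
    for key, text in rows:
        groups.setdefault(key, []).append(text)
    return {k: ' '.join(sorted(v)) for k, v in groups.items()}
```

As olerag notes below, the technique's usual name is "string aggregation" (or group concatenation).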
Sorry...
olerag, March 24, 2009 - 12:27 am UTC
You can ignore my previous question. Looks like a job for "string aggregate".
Need to rebuild a string programmatically.
SJ, March 26, 2009 - 4:22 am UTC
Hi Tom,
I need to rebuild a string value programmatically which has '&' and insert the same into another table.
Now the string may contain n no of '&' occurrances.
Hence, can you suggest/modify the existing code
so that it can be used in a general way.
Script for table generation is given as under :
Create table abc (col1 number,col2 varchar2(100));
Insert into abc values (1,'filename=2GB_Final_Test.zip'||chr(38)||'filename2=280648119'||chr(38)||'filename3=null');
Insert into abc values (2,'filename=c10k-k4u2p10-mz.120-30.S3.bin'||chr(38)||'filename2=268438016'||chr(38)||'filename3=null');
Insert into abc values (3,'filename=oracle.exe'||chr(38)||'filename2=12345'||chr(38)||'filename3=23456');
Insert into abc values (4,'filename=c12kprp-boot-mz.120-33.S1.bin'||chr(38)||'filename2=278725201'||chr(38)||'filename3=null');
Insert into abc values (5,'filename=oracle.exe'||chr(38)||'filename2=123455'||chr(38)||'filename3=23456'||chr(38)||'filename4=test');
Now, I want the 4th row to be built using PL/SQL for the table that was created,
or the 3rd row (this should be independent of the number of '&' occurrences in the main string).
March 30, 2009 - 2:52 pm UTC
I have no idea what you are trying to do.
"rebuild a string" - that has no meaning to anyone.
query
A reader, March 30, 2009 - 4:18 pm UTC
Inserting a string having any number of '&' through PL/SQL
J, April 05, 2009 - 3:47 am UTC
I have one procedure where the user inputs 2 values:
an id, which is 11111, 22222, 33333, etc.,
and a string, which is a combination of strings delimited with the character '#'.
Now if user passes 1 and string which is
filename=oracle.exe&mdfid=123455&relmdfid=44545454#filename=java.exe&mdfid=4343434&relmdfid=656565, these would get inserted in a table.
Look of table would be :
id file_name
----------------------------------------------------------------------------------------------
11111 filename=oracle.exe&mdfid=123455&relmdfid=44545454
11111 filename=java.exe&mdfid=4343434&relmdfid=656565
Can you provide me with working code for the same?
If the user inputs filename=oracle.exe&mdfid=123455&relmdfid=44545454#filename=java.exe&mdfid=4343434&relmdfid=656565&mdfid=123456, the code should be generic enough to handle more than 3 occurrences of '&'.
April 07, 2009 - 5:34 am UTC
I have no idea what you mean.
We don't care if you have 1 or 400 or 1000 or ... &'s in a string, insert away.
If you are playing around in SQLPlus, you might be observing that SQLPlus (a command line tool, not the database) uses & as a special character.
Just
SQL> set define off
before playing with &'s and you'll see that in your code, when you write your application, & will have no meaning - other than being & of course.
What J means (I think)
Stephan Uzzell, April 07, 2009 - 5:23 pm UTC
Hi Tom,
I think what J is trying to say is that his procedure can get as input both an id (a number) and a string, and he needs to parse that string into multiple rows in the DB:
so, an input of (1,'this is my string')
would yield
ID STRING
---------- -----------------
1 this is my string
now, the trickier bit would be an input of (1,'this is my first string#this is my second string')
which should yield
ID STRING
---------- ------------------------
1 this is my first string
1 this is my second string
Now, whether that's doable as a non-trivial exercise - I'll have to play with it a bit more.
For J
Stephan Uzzell, April 08, 2009 - 11:00 am UTC
OK, I made it work, but it feels a bit kludgy to me... I'm sure someone could clean it up a bit.
SQL> create table j
2 ( id number,
3 str varchar2(255)
4 )
5 /
Table created.
SQL> var mystr varchar2(4000)
SQL> exec :mystr := '1,this is my first string#this is my second string'
PL/SQL procedure successfully completed.
SQL> INSERT into j( id, str )
2 WITH data AS
3 ( SELECT id,
4 trim( SUBSTR( txt, instr( txt, '#', 1, level ) + 1, instr(
5 txt, '#', 1, level+1 ) - instr( txt, '#', 1, level ) -1 ) )
6 AS token
7 FROM
8 ( SELECT REPLACE(
9 SUBSTR( txt, 0, instr( txt, ',' )-1 ),
10 '#', '' ) AS id,
11 '#' ||SUBSTR( txt, instr( txt, ',' )+1 ) AS txt
12 FROM
13 ( SELECT '#' ||:mystr ||'#' txt
14 FROM dual
15 )
16 )
17 CONNECT BY level <= LENGTH( :mystr )
18 - LENGTH( REPLACE( :mystr,
19 '#', '' ) )+1
20 )
21 SELECT id,
22 token
23 FROM data;
2 rows created.
SQL>commit;
Commit complete.
SQL>select * from j;
ID STR
---------- ----------------------------------------
1 this is my first string
1 this is my second string
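Outside the database, the tokenization Stephan's CONNECT BY query performs — peel the id off the front of the string, then split the remainder on '#' — can be sketched in a few lines of Python (a hypothetical helper, not part of the original thread):

```python
def parse_input(raw):
    """Split 'id,token1#token2#...' into (id, token) pairs,
    mirroring the instr/substr tokenization in the SQL above."""
    id_part, _, rest = raw.partition(',')
    return [(int(id_part), token.strip()) for token in rest.split('#')]

# same input as the bind variable :mystr in the transcript
rows = parse_input('1,this is my first string#this is my second string')
# each pair corresponds to one row inserted into table j
```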
CROSSTAB QUERY
Sirajudeen M I, June 03, 2009 - 8:33 am UTC
I got this with CASE but not with DECODE. Please correct me.
Also instead of Hard coding the dates, any way to make it dynamic?
create table temp1
(f_name varchar2(100),
f_date date,
f_amount integer
);
select * from temp1
insert into temp1 values('A', to_date('15-june-2009', 'dd-mm-yyyy'), 100)
insert into temp1 values('B', to_date('15-june-2009', 'dd-mm-yyyy'), 200)
insert into temp1 values('C', to_date('17-june-2009', 'dd-mm-yyyy'), 500)
insert into temp1 values('D', to_date('17-june-2009', 'dd-mm-yyyy'), 600)
select F_NAME, DECODE(F_DATE, '15-june-2009',f_amount) as "15-June-09",
DECODE(F_DATE, '17-june-2009', f_amount) as "17-June-09"
from temp1
F_NAME 15-June-09 17-June-09
A
B
C
D
SELECT
f_Name As Name,
CASE WHEN f_date ='15-june-2009' THEN f_amount ELSE 0 END As "15-June-09",
CASE WHEN f_date ='17-june-2009' THEN f_amount ELSE 0 END As "17-june-09"
FROM temp1
NAME 15-June-09 17-june-09
A 100 0
B 200 0
C 0 500
D 0 600
Please assist. Can't this be achieved via SQL, and not using PL/SQL?
June 04, 2009 - 9:32 am UTC
... I got this with CASE but not with DECODE. Please correct me. ...
given that case can do whatever decode can do - not surprising... Of course you can do it with case instead of decode.
why - WHY WHY WHY - do people constantly insist on comparing datatype X to datatype Y without explicitly converting??!?!?!?!?!?
your decode compares a date to a string.
ops$tkyte%ORA10GR2> select F_NAME, DECODE(F_DATE, to_date('15-june-2009','dd-month-yyyy'),f_amount) as "15-June-09",
2 DECODE(F_DATE, to_date('17-june-2009','dd-month-yyyy'),f_amount) as "17-June-09"
3 from temp1
4 /
F 15-June-09 17-June-09
- ---------- ----------
A 100
B 200
C 500
D 600
the "rules" of decode would have the date converted into a string (using your default date mask) and then compared as string to string - the strings do not match - hence, your logic fails.
convert EXPLICITLY ALWAYS
and watch your FORMATS (mm is not really appropriate)
You can make the bits in the decode/case 'dynamic' (just bind them in at run time) but column names MUST BE KNOWN AT HARD PARSE TIME - hence if you want the column names to change based on the inputs, you would need to have a unique query for each set of inputs (not a good idea).
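The decode/case crosstab above is just conditional aggregation: one output column per date of interest, comparing like types to like. The same pivot can be sketched outside the database (hypothetical Python data mirroring temp1; compare date to date, never date to string):

```python
from collections import defaultdict
from datetime import date

# rows mirroring temp1: (f_name, f_date, f_amount)
data = [('A', date(2009, 6, 15), 100), ('B', date(2009, 6, 15), 200),
        ('C', date(2009, 6, 17), 500), ('D', date(2009, 6, 17), 600)]

# decode-style pivot keyed on real date values, not strings
amounts = defaultdict(dict)
for name, d, amt in data:
    amounts[name][d] = amt

# one column per hard-coded date, None where no amount exists
report = {name: (cols.get(date(2009, 6, 15)), cols.get(date(2009, 6, 17)))
          for name, cols in amounts.items()}
```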
single quotes problem
ram, June 19, 2009 - 9:41 am UTC
Hi tom
Some of my columns contain single quotes and some do not. I will pass a variable to a PL/SQL function which should return a few more columns matching that string. Some input parameters contain single quotes and some do not.
could you please help me
June 19, 2009 - 3:58 pm UTC
simple
using bind variables, bind variables are blind to quotes
and they are the right way to do it.
oerr ora -
Samy, June 23, 2009 - 3:03 am UTC
hi Tom,
I have seen in a lot of places you use this: oerr ora 12514 (OERR ORA ERRCODE)
but when I try the same I get an error
SP2-0734: unknown command beginning "oerr ora 1..." - rest of line ignored.
Can I know the reason?
June 26, 2009 - 8:57 am UTC
because when you see me do it, it is from the command line.
It is not a sqlplus command.
ops$tkyte%ORA10GR2> exec dbms_output.put_line( sqlerrm( -1 ) );
ORA-00001: unique constraint (.) violated
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> <b>HOST</b> oerr ora 1
00001, 00000, "unique constraint (%s.%s) violated"
// *Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
// For Trusted Oracle configured in DBMS MAC mode, you may see
// this message if a duplicate entry exists at a different level.
// *Action: Either remove the unique restriction or do not insert the key.
SQL Queries
Sebastien, February 03, 2010 - 2:51 pm UTC
Dear Tom, Dear All,
I went through various questions and answers (actually I was looking for the differences when querying with "Group by and Order by" from Oracle 9 to 11. There are some! ;-) )
This said, as a matter of fact, I read a few exchanges which I related to a 12-year-old experience... and therefore want to share with all.
1/ Data are data. i.e. they are what was intended to be stored in first place the way things were understood to be useful.
2/ Queries are tools to retrieve sub sets from the original data (or combined sets)
3/ Although the 1. and 2. may look odd to be written here, they are a key statement!
a- Do not expect to retrieve data in an intelligent way if you do not know how they are stored!
b- Do not expect to do a good job if the only way you get your query is to use languages such as PL/SQL. It is a good one, but just try to avoid it!
c- PL/SQL code may be much slower than plain SQL code alone. I am working with Windows batch files and Oracle flat SQL files. My code does not exceed 150 kB in size but is more than a hundred times faster than code developed in PL/SQL doing the same thing while being extremely large. What's the difference? Why did those guys do that? Very easy: they did not learn the data! Just a very simple example: when you enter a supermarket, if you do not get how things are organized, how long would it take you to buy ten items? Same here!
d- So my very small piece of advice: try to grasp the structure and develop your queries thereafter. From experience (I am using Oracle for statistical purposes on large DBs... using plenty of mathematical functions) 99% of the stuff is SQL! Remember, as soon as you are structured yourself, this will work!
e- A small note. The embedded evaluation of the CPU cycle cost may not always work. I have a very funny query showing up with a huge number there... although this large query performs in seconds, replacing a former query that required -- with the same platform -- over an hour!
Enjoy Oracle!
Cheers
Sebastien
PARSING and HINTS
Sirajudeen M I, March 09, 2010 - 12:01 am UTC
Hi Tom,
Clement day
This is about the PARSING which happens every time when we execute the SQL.
Coming to my question: you know very well that in DWH we have Daily/Weekly/Monthly or Ad-hoc runs. Consider the scenario that we are using the same SQL in our ETL for all those runs in different scenarios (including CDC, with the Date part changing). In this case, every time the query is both HARD parsed and SOFT parsed.
Is there any possibility to avoid this and straight away go for execution? In other words, can the execution plan or the query be pinned in the buffer so that the actions/tasks prior to execution can be avoided?
HINTS:
I have read one amongst the threads where you have highlighted HINTS and when to go for them. Also you have given some good HINTS and bad ones. So, if Oracle itself takes care of the execution path, why do we need to explicitly go for it? Please clarify.
Thanks and Regards,
Sirajudeen M I
March 09, 2010 - 11:46 am UTC
.. This is about the PARSING which happens every time when we execute the SQL. ...
only if you write it that way! You can parse once and execute millions of times, you need not - should not - parse for every execute.
.. you know very well that in DWH we have Daily/ Weekly /
Monthly or Adhoc runs. ...
No, I do not, I don't even know what a daily/weekly/monthly and especially ad-hoc runs are. You might, but I don't.
... In this case, every time the query is both
HARD parsed and SOFT parsed.
...
why? that is not my understanding of how things work.
PARSING and HINTS
Sirajudeen M I, March 10, 2010 - 4:14 am UTC
Tom,
Clement day
My concern is: We are executing the same SELECT SQL in the DWH ambience for our DAILY or WEEKLY or MONTHLY ETL data.
As part of execution of the query, it is HARD PARSED every time and then SOFT PARSED. In this case, I don't want the QUERY to be HARD PARSED every time, as we don't make any syntactic changes to that query, and I want that query to be pinned in the cache so that execution can be much faster compared with before. Is this clear? Also let me know where to check whether the query is hard parsed or soft parsed.
Coming to HINTS:
Please let me know why to go for HINTS. Is it to decrease the execution time, or what is the main reason? I have read in one of your threads where you specified some HINTS as bad and some as good, including FIRST_ROWS(n).
Thanks and Regards,
Sirajudeen M I
HINTS and PARSING
Sirajudeen M I, March 11, 2010 - 7:28 am UTC
Tom,
Clement day - it means Peaceful day :)
And thank you for clarifying. A HARD PARSE has to happen at least once, and I do agree. My concern is: I don't want the same query to be hard parsed again the next day once it has been executed successfully the previous day. So, the query has to be there in the cache with the status for hard parse as "YES", and whenever the same query comes for execution, it needs to be soft parsed alone. I believe this is clear. So, what has to be done to achieve this?
Coming to the HINTS;
Yep, do you mean that HINTS are for handling Paginations?
Or can you list out a few scenarios for using them?
In the URL:
https://asktom.oracle.com/Misc/oramag/on-top-n-and-pagination-queries.html i could see the comparison of HINTS with Analytical and ROWNUM features.
Thanks.
March 11, 2010 - 8:35 am UTC
It would have to be that
a) the shared pool is large enough to hold every thing you do forever (not practical)
b) that the sql is used often enough to have it make sense for it to stay in the shared pool (doesn't sound like it)
c) that nothing happens overnight that would invalidate the sql (like statistics)
I would not worry about a sql statement not staying in the shared pool from day to day - the overhead of parsing it ONCE per day is - well - so small as to not warrant our attention.
total number of queries executed
Kishore, June 17, 2010 - 4:17 pm UTC
Hi Tom ,
I ended up in a situation where I need to know how to get the total number of queries executed in the database since the instance came up. Do we have a way to find out? I tried to look it up in the AWR report but couldn't find it.
Can you please help me .
Thanks,
June 22, 2010 - 12:39 pm UTC
select * from v$sysstat where name = 'execute count';
that'll tell you how many sql (including top level plsql) statements have been executed since the instance was started.
How to achieve
Manoj, June 29, 2010 - 4:51 am UTC
Hi Tom,
I have a table contacts(emp_id varchar, contact_date date, option number, result number, status number).
Table can contain multiple rows for an employee.
Now I have to do the following things:
1. Max contact_date for an employee when result = 32
2. Max contact_date for an employee when result = 31 and option = 1
3. Max contact_date for an employee when result = 31 and status = 0
I know these can be defined in three different queries, or maybe put inside a function containing those queries.
But I want to know if this can be handled in a single query.
July 06, 2010 - 12:46 pm UTC
select emp_id,
       max(case when result=32 then contact_date end),
       max(case when result=31 and option = 1 then contact_date end),
       max(case when result=31 and status = 0 then contact_date end)
  from t
 where result in (31,32)
 group by emp_id;
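The technique above is conditional aggregation: each max() sees only the contact_dates for which its case predicate is true. The same idea sketched in Python over made-up sample rows (ISO date strings, which compare chronologically):

```python
# made-up sample rows: (emp_id, contact_date, option_, result, status)
rows = [
    (7, '2010-01-05', 1, 32, 0),
    (7, '2010-02-01', 1, 31, 0),
    (7, '2010-03-09', 0, 31, 0),
]

def cond_max(rows, pred):
    """max(case when pred then contact_date end) per emp_id."""
    out = {}
    for emp, d, opt, result, status in rows:
        if pred(opt, result, status):
            out[emp] = max(out.get(emp, d), d)
    return out

m32      = cond_max(rows, lambda o, r, s: r == 32)
m31_opt1 = cond_max(rows, lambda o, r, s: r == 31 and o == 1)
m31_st0  = cond_max(rows, lambda o, r, s: r == 31 and s == 0)
```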
SQL
Abhisek, June 29, 2010 - 11:52 am UTC
Hi,
Table is dept varchar, dept_date date, status number
Can you suggest me a way to get:
I have to select the count of the status as 1 starting today in dept_date and continue in dept_date till I find a value in status column as 2 or 3.
Please let me know if my question is clear.
July 06, 2010 - 12:52 pm UTC
it is not clear
and it didn't come with a create table and inserts to develop a sample query against either.
it sounds like, it just SOUNDS like you want to
logically partition the data by dept, to get a result by dept - not a single result for all records.
then, within a dept, look for a record with status = 1 and after sorting by dept_date within dept - count how many records there are until you see a status of 2 or 3.
keep doing that for every status=1 record you find, resetting counters etc. when you get to a new dept value.
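That verbal algorithm — partition by dept, order by dept_date, count the status=1 rows until a status of 2 or 3 appears, reset per dept — could be sketched like this (purely hypothetical rows, since the poster never supplied DDL or inserts):

```python
from itertools import groupby

# hypothetical (dept, dept_date, status) rows; ISO dates sort correctly
rows = [('A', '2010-07-01', 1), ('A', '2010-07-02', 1), ('A', '2010-07-03', 2),
        ('B', '2010-07-01', 1), ('B', '2010-07-02', 3)]

counts = {}
for dept, grp in groupby(sorted(rows), key=lambda r: r[0]):
    n = 0
    for _, _, status in grp:   # ordered by dept_date within dept
        if status in (2, 3):
            break              # stop at the first 2 or 3
        if status == 1:
            n += 1
    counts[dept] = n           # counter resets at each new dept
```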
query or calculated column ?
A reader, July 08, 2010 - 9:53 am UTC
I have table like
create table users( user_id number primary key, fname varchar2(100));
Create table TRADE( id number primary key,
price number(10,4),
seller_id number references users,
buyer_id number references users,
dt timestamp);
now, I want to find # of trades done by users (a group by user).
eg.
user_id, count(*)
1 10
2 7
One guru told me to have two rows for each trade! but he hasn't convinced me yet.. any suggestions?
also, you can't do the obvious because the "front-end TOOL" does not allow me to filter on two columns (i.e. OR is not allowed).. like
SELECT u.USER_ID, COUNT (*)
FROM trade tf, users u
WHERE u.ID = TF.BUYER_ID OR u.id = TF.SELLER_ID
GROUP BY u.USER_ID
Thanks
July 08, 2010 - 12:57 pm UTC
ops$tkyte%ORA11GR2> insert into users (user_id, fname ) values ( 100, 'x' );
1 row created.
ops$tkyte%ORA11GR2> insert into users (user_id, fname ) values ( 200, 'y' );
1 row created.
ops$tkyte%ORA11GR2> insert into users (user_id, fname ) values ( 300, 'z' );
1 row created.
ops$tkyte%ORA11GR2> insert into trade(id,seller_id,buyer_id,dt) values ( 1, 100, 200, sysdate );
1 row created.
ops$tkyte%ORA11GR2> insert into trade(id,seller_id,buyer_id,dt) values ( 2, 200, 300, sysdate );
1 row created.
ops$tkyte%ORA11GR2> insert into trade(id,seller_id,buyer_id,dt) values ( 3, 200, 100, sysdate );
1 row created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select decode(r,1,seller_id,2,buyer_id), count(*)
2 from trade, (select 1 r from dual union all select 2 r from dual)
3 group by decode(r,1,seller_id,2,buyer_id)
4 order by 1
5 /
DECODE(R,1,SELLER_ID,2,BUYER_ID) COUNT(*)
-------------------------------- ----------
100 2
200 3
300 1
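Tom's trick is to cross join each trade with a two-row set so every trade appears once as its seller and once as its buyer before grouping. The same row doubling, sketched in Python over the demo data above:

```python
from collections import Counter

# (id, seller_id, buyer_id) matching the demo inserts above
trades = [(1, 100, 200), (2, 200, 300), (3, 200, 100)]

# each trade becomes two logical rows - one per side - then group and count
trade_counts = Counter(uid for _, seller, buyer in trades
                       for uid in (seller, buyer))
```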
A reader, July 08, 2010 - 4:20 pm UTC
Thanks Tom. So, the guru was a real Guru.
This is certainly fine, but what I was looking for is a calculated column.
trade_id trade_type buyer seller calculated_column
--------- ----------- ----- ------ ------------------
1 Basic 100 200 some function (that lets me group by user_id)
is this even possible? because if I cross join, all the other cube measures will be wrong.
July 08, 2010 - 4:25 pm UTC
you cannot invoke magic and say "some function" without telling me what you mean
This makes no sense to me, give example. You already have the user id in buyer and seller, all I did was turn each row into two without storing two rows (which is the right approach in my opinion).
What 'function', what 'magic needs to go here' do you need???
A reader, July 09, 2010 - 11:11 am UTC
Yep. I need magic :)
so, I have a cube. Trade fact table; users is one of the dimensions, together with product and other dimensions.
so, now this cube generates counts by date/quarter/year and by region and by product.. all aggregations and counts.
now my business user wants to know how many trades Deutsche Bank did over the last quarter for this product in North America.
so, I need to count trades on both sides (buy/sell), together with grouping by product/region etc., in some variant of a Crystal Reports tool.
so to support counts on product/region-like dims, I can't have 2 rows (even logical) in the result set.
maybe you can suggest a new star design for the trade fact?
July 09, 2010 - 4:53 pm UTC
why can't you have two rows, of course you can, I just showed you how. now, your job is to show me why it cannot be. I don't see why it cannot.
You told me:
I have a table:
Create table TRADE( id number primary key,
price number(10,4),
seller_id number references users,
buyer_id number references users,
dt timestamp);
I created a query against that table that will in fact do what you ask. Now, you explain why that won't work please. Then maybe we can continue.
your help
lincoln, July 14, 2010 - 6:56 am UTC
I am writing you in full awareness that you will give this a sympathetic and favourable consideration.
please, I need dumps on oracle oca. many thanks
A reader, July 18, 2010 - 11:06 pm UTC
Hi sir,
Let say, there will be a change on disk and I want to measure the dml rate on the disk.
Do you have a script which creates a big table and do millions of inserts,updates, deletes, etc ?
I want to run that script before and after the change.
regards
July 19, 2010 - 2:00 pm UTC
I usually write em as I need em. They are not "hard" to concoct.
A reader, July 19, 2010 - 10:45 pm UTC
Thanks sir,
Do you think the below for loop is a good way to benchmark dml rate, or is there anything else that you would suggest?
create table temp ( a int);
begin
2 for i in 1..2000000 loop
3 insert into temp values (i);
4 commit;
5 update temp set a=a+i where a<10000;
6 commit;
7 delete from temp where rownum<5;
8 commit;
9 end loop;
10 end;
11 /
July 23, 2010 - 6:46 am UTC
not really - think about what it does. Does it simulate ANYTHING similar to what you will be doing in real life?
You insert a single row
then update it (but only the first 10,000 times through a loop of 2,000,000 - the other 2,000,000-10,000 times you update nothing)
and then you delete it
that table will have at most 1 row at any point in time. does this reflect your natural state of being in real life?
setting up a reasonable test is going to take time, programming and almost certainly multiple sessions.
SQL Function to check rows
Abhisek, July 21, 2010 - 6:25 am UTC
Hi Tom,
I have a table.
create table demo(e_no number, e_name varchar2(10));
select * from demo;
insert into demo values(1,'A');
insert into demo values(1,'B');
insert into demo values(2,'A');
insert into demo values(3,'A');
insert into demo values(3,'A');
insert into demo values(4,'B');
Now I want to write a function that checks for the existence of 'A' in the table for the given e_no. If there are rows other than 'A' for the e_no, e.g. e_no=1, it will return FALSE.
If there are one or more rows for the e_no with 'A' only, e.g. e_no = 2 or 3, it will return TRUE.
I am trying this query:
select * from demo x where x.e_name = 'A' and x.e_no=1
and exists(select 1 from demo j where j.e_name != 'A' and j.e_no = x.e_no)
But it is not working as per expectation. Result should be:
RESULT E_NO
FALSE 1,4
TRUE 2,3
Please suggest.
July 23, 2010 - 8:24 am UTC
either of
ops$tkyte%ORA11GR2> select e_no, e_name, decode( count( case when e_name = 'A' then null else 1 end ) over (partition by e_no), 0, 'true', 'false' ) flag
2 from demo
3 /
E_NO E_NAME FLAG
---------- ---------- -----
1 A false
1 B false
2 A true
3 A true
3 A true
4 B false
6 rows selected.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select e_no, decode( count( case when e_name = 'A' then null else 1 end ), 0, 'true', 'false' ) flag
2 from demo
3 group by e_no
4 /
E_NO FLAG
---------- -----
1 false
2 true
4 false
3 true
will work for you
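The count(case when e_name = 'A' then null else 1 end) expression counts only the non-'A' rows per e_no (a case yielding null is not counted); the flag is 'true' exactly when that count is zero. The same logic sketched in Python over the demo rows:

```python
from collections import defaultdict

# (e_no, e_name) rows from the demo inserts above
rows = [(1, 'A'), (1, 'B'), (2, 'A'), (3, 'A'), (3, 'A'), (4, 'B')]

# count rows OTHER than 'A' per e_no, as the case expression does
non_a = defaultdict(int)
for e_no, e_name in rows:
    non_a[e_no] += (e_name != 'A')

flags = {e_no: 'true' if cnt == 0 else 'false' for e_no, cnt in non_a.items()}
```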
I think I solved my question
Abhisek, July 21, 2010 - 6:29 am UTC
Hi Tom,
Please discard my request as I think I have got what I wanted:
select * from demo x where x.e_name = 'A' and x.e_no=3
and not exists(select 1 from demo j where j.e_name != 'A' and j.e_no = x.e_no)
Please advise if any better ideas.
A reader, July 26, 2010 - 5:43 pm UTC
Thanks sir,
....setting up a reasonable test is going to take time, programming and almost certainly multiple sessions
Can I find PL/SQL statements on this site which perform a number of DML statements to benchmark?
July 27, 2010 - 12:12 pm UTC
... Can I find a pl-sql statements on this site which performs number of dml
statements to bechmark?...
after reading this:
setting up a reasonable test is going to take time, programming and almost
certainly multiple sessions
you still believe you can just magically find something that emulates what you will be doing in real life? Huh.
We don't even begin to understand what you are even thinking about benchmarking. We have no clue what your system is being designed to do. When you figure that out - you should be able to set up a sketch of a simulation that would emulate at least a little bit - what YOU are going to be doing.
Rows into columns
Neeha, October 30, 2010 - 8:41 am UTC
Hi, I have a similar problem to original post
It is related to school database. There is a table STUDENT regarding basic information and there is a table ABSENT regarding students which are absent on a day.
ABSENT table holds the absentee information of a student if he/she is absent.
Sample data is as follows....
create table student (sid number primary key, sname varchar2(10));
insert into student values (1, 'ADAMS');
insert into student values (2, 'BLAKE');
insert into student values (3, 'CLARK');
insert into student values (4, 'DOLLY');
insert into student values (5, 'EMMA');
create table absent (sid number, absdate date, reason varchar2(2), primary key (sid,absdate));
insert into absent values (1,'28-OCT-10','A');
insert into absent values (2,'28-OCT-10','L');
insert into absent values (3,'28-OCT-10','ML');
insert into absent values (2,'29-OCT-10','L');
insert into absent values (3,'29-OCT-10','ML');
insert into absent values (4,'29-OCT-10', 'A');
insert into absent values (2,'30-OCT-2010', 'A');
insert into absent values (3,'30-OCT-2010', 'ML');
insert into absent values (4,'30-OCT-2010','L');
commit;
Now I need results (based on current system date) for previous 7 days including sunday.
-- SUPPOSE TODAY IS 01-NOV-10
SNAME 26-OCT-2010 27-OCT-2010 28-OCT-2010 29-OCT-2010 30-OCT-2010 31-OCT-2010 01-NOV-10
ADAMS P P A P P S P
BLAKE P P L L A S P
CLARK P P ML ML ML S P
DOLLY P P P A L S P
EMMA P P P P P S P
-- SUPPOSE TODAY IS 02-NOV-10
SNAME 27-OCT-2010 28-OCT-2010 29-OCT-2010 30-OCT-2010 31-OCT-2010 01-NOV-10 02-NOV-10
ADAMS P A P P S P P
BLAKE P L L A S P P
CLARK P ML ML ML S P P
DOLLY P P A L S P P
EMMA P P P P S P P
Many thanks in advance
November 01, 2010 - 7:17 am UTC
you can fill in P and S yourself - simple extension.... here is the big part of the work:
ops$tkyte%ORA11GR2> select a.sid, a.sname, b.absdate, b.reason
2 from student a, absent b
3 where a.sid = b.sid(+)
4 /
SID SNAME ABSDATE RE
---------- ---------- --------- --
1 ADAMS 28-OCT-10 A
2 BLAKE 28-OCT-10 L
3 CLARK 28-OCT-10 ML
2 BLAKE 29-OCT-10 L
3 CLARK 29-OCT-10 ML
4 DOLLY 29-OCT-10 A
2 BLAKE 30-OCT-10 A
3 CLARK 30-OCT-10 ML
4 DOLLY 30-OCT-10 L
5 EMMA
10 rows selected.
ops$tkyte%ORA11GR2> select a.sid, a.sname,
2 max(decode( b.absdate, trunc(sysdate), b.reason )) today,
3 max(decode( b.absdate, trunc(sysdate-1), b.reason )) yesterday,
4 max(decode( b.absdate, trunc(sysdate-2), b.reason )) two_days_ago,
5 max(decode( b.absdate, trunc(sysdate-3), b.reason )) three_days_ago,
6 max(decode( b.absdate, trunc(sysdate-4), b.reason )) four_days_ago,
7 max(decode( b.absdate, trunc(sysdate-5), b.reason )) five_days_ago,
8 max(decode( b.absdate, trunc(sysdate-6), b.reason )) six_days_ago
9 from student a, absent b
10 where a.sid = b.sid(+)
11 group by a.sid, a.sname
12 order by a.sname
13 /
SID SNAME TO YE TW TH FO FI SI
---------- ---------- -- -- -- -- -- -- --
1 ADAMS A
2 BLAKE A L L
3 CLARK ML ML ML
4 DOLLY L A
5 EMMA
ops$tkyte%ORA11GR2> select 1 oc, cast(null as number) sid, cast(null as varchar2(10)) sname,
2 to_char(trunc(sysdate),'dd-mon-yyyy') today,
3 to_char(trunc(sysdate-1),'dd-mon-yyyy')yesterday,
4 to_char(trunc(sysdate-2),'dd-mon-yyyy')two_days_ago,
5 to_char(trunc(sysdate-3),'dd-mon-yyyy')three_days_ago,
6 to_char(trunc(sysdate-4),'dd-mon-yyyy')four_days_ago,
7 to_char(trunc(sysdate-5),'dd-mon-yyyy')five_days_ago,
8 to_char(trunc(sysdate-6),'dd-mon-yyyy')six_days_ago
9 from dual
10 union all
11 select 2 oc, a.sid, a.sname,
12 max(decode( b.absdate, trunc(sysdate), b.reason )) today,
13 max(decode( b.absdate, trunc(sysdate-1), b.reason )) yesterday,
14 max(decode( b.absdate, trunc(sysdate-2), b.reason )) two_days_ago,
15 max(decode( b.absdate, trunc(sysdate-3), b.reason )) three_days_ago,
16 max(decode( b.absdate, trunc(sysdate-4), b.reason )) four_days_ago,
17 max(decode( b.absdate, trunc(sysdate-5), b.reason )) five_days_ago,
18 max(decode( b.absdate, trunc(sysdate-6), b.reason )) six_days_ago
19 from student a, absent b
20 where a.sid = b.sid(+)
21 group by a.sid, a.sname
22 order by oc, sname
23 /
OC SID SNAME TODAY YESTERDAY TWO_DAYS_AG THREE_DAYS_ FOUR_DAYS_A FIVE_DAYS_A SIX_DAYS_AG
---------- ---------- ---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
1 01-nov-2010 31-oct-2010 30-oct-2010 29-oct-2010 28-oct-2010 27-oct-2010 26-oct-2010
2 1 ADAMS A
2 2 BLAKE A L L
2 3 CLARK ML ML ML
2 4 DOLLY L A
2 5 EMMA
6 rows selected.
How to get Sunday
Neeha, November 01, 2010 - 9:13 am UTC
Hi, thanks. I managed to get 'P' as follows
SELECT 1 oc, CAST (NULL AS NUMBER) SID, CAST (NULL AS VARCHAR2 (10)) sname,
TO_CHAR (TRUNC (SYSDATE), 'dd-mon-yyyy') today,
TO_CHAR (TRUNC (SYSDATE - 1), 'dd-mon-yyyy') yesterday,
TO_CHAR (TRUNC (SYSDATE - 2), 'dd-mon-yyyy') two_days_ago,
TO_CHAR (TRUNC (SYSDATE - 3), 'dd-mon-yyyy') three_days_ago,
TO_CHAR (TRUNC (SYSDATE - 4), 'dd-mon-yyyy') four_days_ago,
TO_CHAR (TRUNC (SYSDATE - 5), 'dd-mon-yyyy') five_days_ago,
TO_CHAR (TRUNC (SYSDATE - 6), 'dd-mon-yyyy') six_days_ago
FROM DUAL
UNION ALL
SELECT 2 oc, a.SID, a.sname,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE), b.reason)),'P') today,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 1), b.reason)),'P') yesterday,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 2), b.reason)),'P') two_days_ago,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 3), b.reason)),'P') three_days_ago,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 4), b.reason)),'P') four_days_ago,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 5), b.reason)),'P') five_days_ago,
nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 6), b.reason)),'P') six_days_ago
FROM student a, absent b
WHERE a.SID = b.SID(+)
GROUP BY a.SID, a.sname
ORDER BY oc, sname
/
OC SID SNAME TODAY YESTERDAY TWO_DAYS_AG THREE_DAYS_ FOUR_DAYS_A FIVE_DAYS_A SIX_DAYS_AG
----- ---------- ---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
1 01-nov-2010 31-oct-2010 30-oct-2010 29-oct-2010 28-oct-2010 27-oct-2010 26-oct-2010
2 1 ADAMS P P P P A P P
2 2 BLAKE P P A L L P P
2 3 CLARK P P ML ML ML P P
2 4 DOLLY P P L A P P P
2 5 EMMY P P P P P P P
but the following, which should also produce 'S' for Sundays, has no success; please help again
SELECT 1 oc, CAST (NULL AS NUMBER) SID, CAST (NULL AS VARCHAR2 (10)) sname,
TO_CHAR (TRUNC (SYSDATE), 'dd-mon-yyyy') today,
TO_CHAR (TRUNC (SYSDATE - 1), 'dd-mon-yyyy') yesterday,
TO_CHAR (TRUNC (SYSDATE - 2), 'dd-mon-yyyy') two_days_ago,
TO_CHAR (TRUNC (SYSDATE - 3), 'dd-mon-yyyy') three_days_ago,
TO_CHAR (TRUNC (SYSDATE - 4), 'dd-mon-yyyy') four_days_ago,
TO_CHAR (TRUNC (SYSDATE - 5), 'dd-mon-yyyy') five_days_ago,
TO_CHAR (TRUNC (SYSDATE - 6), 'dd-mon-yyyy') six_days_ago
FROM DUAL
UNION ALL
SELECT 2 oc, a.SID, a.sname,
decode(to_char(sysdate,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE), b.reason)),'P')) today,
decode(to_char(sysdate-1,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 1), b.reason)),'P')) yesterday,
decode(to_char(sysdate-2,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 2), b.reason)),'P')) two_days_ago,
decode(to_char(sysdate-3,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 3), b.reason)),'P')) three_days_ago,
decode(to_char(sysdate-4,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 4), b.reason)),'P')) four_days_ago,
decode(to_char(sysdate-5,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 5), b.reason)),'P')) five_days_ago,
decode(to_char(sysdate-6,'day'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE - 6), b.reason)),'P')) six_days_ago
FROM student a, absent b
WHERE a.SID = b.SID(+)
GROUP BY a.SID, a.sname
ORDER BY oc, sname
/
OC SID SNAME TODAY YESTERDAY TWO_DAYS_AG THREE_DAYS_ FOUR_DAYS_A FIVE_DAYS_A SIX_DAYS_AG
----- ---------- ---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
1 01-nov-2010 31-oct-2010 30-oct-2010 29-oct-2010 28-oct-2010 27-oct-2010 26-oct-2010
2 1 ADAMS P P P P A P P
2 2 BLAKE P P A L L P P
2 3 CLARK P P ML ML ML P P
2 4 DOLLY P P L A P P P
2 5 EMMY P P P P P P P
Zahir Mohideen, November 01, 2010 - 9:21 am UTC
Assuming you are in 11g , You should be able to use partition outer join and pivot to get the desired result.
( Partition Outer Join was introduced in 10g , Pivot was introduced in 11g)
SELECT sname,
DAY1 ,
DAY2,
DAY3,
DAY4,
DAY5,
DAY6,
DAY7
FROM
(SELECT SName ,
NVL(reason, ' ' ) reason,
row_number() over ( partition BY sname order by listdatetime DESC ) rn
FROM
(SELECT a.sid,
a.sname,
b.absdate,
b.reason
FROM student a,
absent b
WHERE a.sid = b.sid(+)
AND absdate >= Sysdate - 7
) Load PARTITION BY (sName)
RIGHT OUTER JOIN
(SELECT sysdate - level + 1 listdatetime
FROM Dual
CONNECT BY Level <= 7
) Tim
ON ( TRUNC(absdate ) = TRUNC(Listdatetime) )
) Pivot ( MAX(reason) FOR Rn IN ( 1 AS Day1 , 2 AS Day2 , 3 AS Day3 , 4 AS Day4 , 5 AS Day5 , 6 AS Day6 , 7 AS Day7 ) )
ORDER BY 1 ;
SNAME DAY1 DAY2 DAY3 DAY4 DAY5 DAY6 DAY7
---------- ---- ---- ---- ---- ---- ---- ----
ADAMS A
BLAKE A L L
CLARK ML ML ML
DOLLY L A
Sunday
Michel Cadot, November 01, 2010 - 11:48 am UTC
but the following should be 'S' but no success, please help again
Use the 'fmday' format; the 'day' format is padded with blanks up to the length of the longest day name.
Regards
Michel
A reader, November 01, 2010 - 12:15 pm UTC
Yes, I tried, but it does not seem to work.
November 01, 2010 - 3:49 pm UTC
show your work
Sunday
Michel Cadot, November 01, 2010 - 12:53 pm UTC
It works for me
SQL> SELECT 1 oc, CAST (NULL AS NUMBER) SID, CAST (NULL AS VARCHAR2 (10)) sname,
2 TO_CHAR (TRUNC (SYSDATE), 'dd-mon-yyyy') today,
3 TO_CHAR (TRUNC (SYSDATE - 1), 'dd-mon-yyyy') yesterday,
4 TO_CHAR (TRUNC (SYSDATE - 2), 'dd-mon-yyyy') two_days_ago,
5 TO_CHAR (TRUNC (SYSDATE - 3), 'dd-mon-yyyy') three_days_ago,
6 TO_CHAR (TRUNC (SYSDATE - 4), 'dd-mon-yyyy') four_days_ago,
7 TO_CHAR (TRUNC (SYSDATE - 5), 'dd-mon-yyyy') five_days_ago,
8 TO_CHAR (TRUNC (SYSDATE - 6), 'dd-mon-yyyy') six_days_ago
9 FROM DUAL
10 UNION ALL
11 SELECT 2 oc, a.SID, a.sname,
12 decode(to_char(sysdate,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE),
13 b.reason)),'P')) today,
14 decode(to_char(sysdate-1,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE -
15 1), b.reason)),'P')) yesterday,
16 decode(to_char(sysdate-2,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE -
17 2), b.reason)),'P')) two_days_ago,
18 decode(to_char(sysdate-3,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE -
19 3), b.reason)),'P')) three_days_ago,
20 decode(to_char(sysdate-4,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE -
21 4), b.reason)),'P')) four_days_ago,
22 decode(to_char(sysdate-5,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE -
23 5), b.reason)),'P')) five_days_ago,
24 decode(to_char(sysdate-6,'fmday'),'sunday','S', nvl(MAX (DECODE (b.absdate, TRUNC (SYSDATE -
25 6), b.reason)),'P')) six_days_ago
26 FROM student a, absent b
27 WHERE a.SID = b.SID(+)
28 GROUP BY a.SID, a.sname
29 ORDER BY oc, sname
30 /
OC SID SNAME TODAY YESTERDAY TWO_DAYS_AG THREE_DAYS_ FOUR_DAYS_A FIVE_DAYS_A SIX_DAYS_AG
---------- ---------- ---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
1 01-nov-2010 31-oct-2010 30-oct-2010 29-oct-2010 28-oct-2010 27-oct-2010 26-oct-2010
2 1 ADAMS P S P P P P P
2 2 BLAKE P S A P P P P
2 3 CLARK P S ML P P P P
2 4 DOLLY P S L P P P P
Regards
Michel
How to get Sunday
Neeha, November 02, 2010 - 8:20 am UTC
Ahhh...
I got the point.
I know what fill mode is, but I really do not understand why "fmday" gives results and "day" does not.
SQL> select TO_CHAR (SYSDATE, 'fmday') from dual;
TO_CHAR(S
---------
tuesday
SQL> select TO_CHAR (SYSDATE, 'day') from dual;
TO_CHAR(S
---------
tuesday
SQL> select TO_CHAR (SYSDATE-2, 'day') from dual
TO_CHAR(S
---------
sunday
SQL> select TO_CHAR (SYSDATE-2, 'day') from dual;
TO_CHAR(S
---------
sunday
I do not see anything different here.
Any idea?
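The difference is invisible in the displayed output because trailing blanks do not show. A small sketch that makes the padding visible (assuming an English NLS date language):

```sql
-- 'day' pads the name with blanks to the width of the longest day name
-- ('wednesday', 9 characters with English NLS settings), so the
-- comparison 'sunday   ' = 'sunday' fails. 'fmday' (fill mode)
-- suppresses the padding.
select length(to_char(sysdate, 'day'))   len_day,    -- padded width
       length(to_char(sysdate, 'fmday')) len_fmday   -- actual name length
from dual;

-- An equivalent workaround without fm: trim the result yourself
select decode(trim(to_char(sysdate, 'day')), 'sunday', 'S', 'P') from dual;
```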
How to get Sunday
Neeha, November 02, 2010 - 8:23 am UTC
Last line should be read as
select TO_CHAR (SYSDATE-2, 'fmday') from dual
How to get Sunday
Neeha, November 02, 2010 - 8:31 am UTC
Oh, I see - I got your point now, Cadot.
I was not reading "fmday" properly; got that now.
Thank you both, Tom and Michel.
Thanks very much
insertion problem
supriya, November 08, 2010 - 3:04 am UTC
Respected Tom,
Please guide me on how to solve this.
In my organization, I have a table with a column named "code".
I want to restrict some insertions into that particular column.
Suppose the code column values are 12 and 1245;
then I cannot insert the values 12, 1245, 1, 124 and so on, but I can insert 2, 123, 15, 12456 and so on.
That means a new value should not be a left-substring (prefix) of the existing data.
I made that column the primary key and had logic to compare the new value against existing values that are longer, but I do not know how to make it work correctly.
Hopefully you know a simple way to do it.
Waiting for your reply,
regards,
Supriya
November 08, 2010 - 8:23 am UTC
I have a table and in that there is a column named "code".
I want to restrict some insertion to that particular column.
suppose that code column values are 12 and 1245
then i cant insert the value 12,1245, 1 ,124 and so on but i can insert 2
,123,15,12456 and so on.
that makes no sense so far. Everything was indicating that there is A SINGLE COLUMN code - but then we have 12 and 1245 (which must be ROWS in the table - since code is a single column) and the requirement that a row cannot be inserted if it is a left-substring of some existing data.
If you are saying "if you insert 12, then you can insert 1245 - but if you inserted 1245 first and then 12 it should fail" - I seriously question this rule and its true validity in real life.
Please clarify in lots more detail what you really mean here.
insertion problem
supriya, November 08, 2010 - 12:38 pm UTC
Hi Tom,
Thanks a lot for having a look into my issue posted above, and sorry for the confusion.
I have many other columns along with the "code" column.
Actually, a newer employee's code should not be a substring of an existing employee's code.
But anyway, I tried myself, found a solution, and it is working fine.
The code I am currently using is
insert into table_name(id,column2,column3) select 123,'abc','def' from dual where 123 not in (select substr(id,1,3) from table_name);
and it is working fine.
thank you tom,
Supriya
November 08, 2010 - 1:08 pm UTC
think about this for a moment and ask yourself "what happens when two people insert at the same time"
this logic "won't fly" without a lock table command. You'd have to SERIALIZE inserts into your table in order for this to work.
Why is it OK for old data to be a substr of something but not new data?
Meaning, if you insert 12 and then insert 123 - that'll work. But if you insert 123 - then 12 would fail.
You would still end up with the chance of having 12 and 123 in there - your rule doesn't make sense.
Unless there is more information you are withholding from us that would be useful?
In my opinion - either
12
123
is allowed OR it is NOT. Given your rule it is sometimes allowed and sometimes not - depending entirely on the order of insertion - and that doesn't compute.
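If the rule really must be enforced, the insert has to be serialized as Tom says. A sketch, reusing the table and column names from the post above (the bind names are illustrative, and the check is made symmetric to address the ordering objection):

```sql
-- Serialize writers: without this, two sessions could each pass the
-- NOT EXISTS check concurrently and commit conflicting codes.
lock table table_name in exclusive mode;

insert into table_name (id, column2, column3)
select :new_id, :c2, :c3
from dual
where not exists (
        -- reject if the new code is a left-prefix of an existing code,
        -- or an existing code is a left-prefix of the new one
        select null
        from table_name t
        where t.id like :new_id || '%'
           or :new_id like t.id || '%' );

commit; -- releases the table lock
```

Note that the exclusive lock blocks all other DML on the table for the duration, which is exactly the serialization cost Tom warns about.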
Need Help again
Neeha, December 07, 2010 - 11:36 am UTC
Referring to a previous post sent by me
>> How to get Sunday November 2, 2010 - 8am Central time zone
I need some further assistance;
1* select sid,reason,absdate,to_char(absdate,'dd mm yyyy hh mi ss') from absent
SQL> /
SID RE ABSDATE TO_CHAR(ABSDATE,'DD
---------- -- --------- -------------------
1 A 28-OCT-10 28 10 2010 12 00 00
2 L 28-OCT-10 28 10 2010 12 00 00
3 ML 28-OCT-10 28 10 2010 12 00 00
2 L 29-OCT-10 29 10 2010 12 00 00
3 ML 29-OCT-10 29 10 2010 12 00 00
4 A 29-OCT-10 29 10 2010 12 00 00
2 A 30-OCT-10 30 10 2010 12 00 00
3 ML 30-OCT-10 30 10 2010 12 00 00
4 L 30-OCT-10 30 10 2010 12 00 00
1 A 06-DEC-10 06 12 2010 07 18 01
2 A 06-DEC-10 06 12 2010 07 18 20
2 L 07-DEC-10 07 12 2010 12 00 00
3 ML 07-DEC-10 07 12 2010 12 00 00
4 L 07-DEC-10 07 12 2010 12 00 00
5 L 07-DEC-10 07 12 2010 10 15 00
15 rows selected.
and then I run the query suggested by you
SELECT 1 oc, CAST (NULL AS NUMBER) SID, CAST (NULL AS VARCHAR2 (10)) sname,
TO_CHAR (TRUNC (SYSDATE), 'dd-mon-yyyy') today,
TO_CHAR (TRUNC (SYSDATE - 1), 'dd-mon-yyyy') yesterday,
TO_CHAR (TRUNC (SYSDATE - 2), 'dd-mon-yyyy') two_days_ago,
TO_CHAR (TRUNC (SYSDATE - 3), 'dd-mon-yyyy') three_days_ago,
TO_CHAR (TRUNC (SYSDATE - 4), 'dd-mon-yyyy') four_days_ago,
TO_CHAR (TRUNC (SYSDATE - 5), 'dd-mon-yyyy') five_days_ago,
TO_CHAR (TRUNC (SYSDATE - 6), 'dd-mon-yyyy') six_days_ago
FROM DUAL
UNION ALL
SELECT 2 oc, a.SID, a.sname,
DECODE (TO_CHAR (SYSDATE, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE), b.reason)),'P')) today,
DECODE (TO_CHAR (SYSDATE - 1, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE - 1), b.reason)),'P')) yesterday,
DECODE (TO_CHAR (SYSDATE - 2, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE - 2), b.reason)),'P')) two_days_ago,
DECODE (TO_CHAR (SYSDATE - 3, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE - 3), b.reason)),'P')) three_days_ago,
DECODE (TO_CHAR (SYSDATE - 4, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE - 4), b.reason)),'P')) four_days_ago,
DECODE (TO_CHAR (SYSDATE - 5, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE - 5), b.reason)),'P')) five_days_ago,
DECODE (TO_CHAR (SYSDATE - 6, 'fmday'),'sunday', 'S', NVL (MAX (DECODE (b.absdate, TRUNC (SYSDATE - 6), b.reason)),'P')) six_days_ago
FROM student a, absent b
WHERE a.SID = b.SID(+)
GROUP BY a.SID, a.sname
ORDER BY oc, sname
/
OC SID SNAME TODAY YESTERDAY TWO_DAYS_AG THREE_DAYS_ FOUR_DAYS_A FIVE_DAYS_A SIX_DAYS_AG
---------- ---------- ---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
1 07-dec-2010 06-dec-2010 05-dec-2010 04-dec-2010 03-dec-2010 02-dec-2010 01-dec-2010
2 1 ADAMS P P S P P P P
2 2 BLAKE L P S P P P P
2 3 CLARK ML P S P P P P
2 4 DOLLY L P S P P P P
2 5 EMMY P P S P P P P
6 rows selected.
You can see the result does not reflect the following rows from the absent table:
1 A 06-DEC-10 06 12 2010 07 18 01
2 A 06-DEC-10 06 12 2010 07 18 20
5 L 07-DEC-10 07 12 2010 10 15 00
because these were inserted with a time-of-day component. How should I change my query to get
OC SID SNAME TODAY YESTERDAY TWO_DAYS_AG THREE_DAYS_ FOUR_DAYS_A FIVE_DAYS_A SIX_DAYS_AG
---------- ---------- ---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
1 07-dec-2010 06-dec-2010 05-dec-2010 04-dec-2010 03-dec-2010 02-dec-2010 01-dec-2010
2 1 ADAMS P A S P P P P
2 2 BLAKE L A S P P P P
2 3 CLARK ML P S P P P P
2 4 DOLLY L P S P P P P
2 5 EMMY L P S P P P P
6 rows selected.
December 07, 2010 - 11:58 am UTC
what does the timestamp have to do with anything?
if you want me to look at anything, you'll supply create tables and inserts and a clear description of the issue.
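For what it is worth, the symptom described - rows stored with a time-of-day component not matching - is what happens when an un-truncated DATE is compared to TRUNC(SYSDATE - n). Assuming that is the cause here, one likely fix is to truncate the table column inside each DECODE as well:

```sql
-- Original comparison misses rows whose absdate carries a time component:
--   DECODE (b.absdate, TRUNC (SYSDATE - 1), b.reason)
-- Truncating b.absdate makes rows match regardless of time of day:
DECODE (TRUNC(b.absdate), TRUNC(SYSDATE - 1), b.reason)
```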
Rows into column
Neeha, December 07, 2010 - 10:31 pm UTC
The data is on this page itself:
>>Rows into columns October 30, 2010 - 8am Central time zone
But at the moment I am inserting through Forms with a datetime datatype (and a DATE column in the database), down to the second.
Hope you understand my question.
Sorry for any trouble.
December 08, 2010 - 9:34 am UTC
I don't look up or down, make it easy for us, supply everything - including the question - in one concise, detailed, complete post.
Just like I do.
Better way to write this update ?
V.Hari, December 08, 2010 - 12:45 pm UTC
Mr.Oracle,
Is there a better way to write this update query,
create table E1 as select rownum as EMPNO, 'EMPNO'||ROWNUM as EMPNAME, trunc(log(2,rownum)) as DEPTNO,(rownum+1000) as SALARY from dual connect by level < 10;
create table E2 as select rownum as EMPNO, 'EMPNO'||ROWNUM as EMPNAME, trunc(log(2,rownum)) as DEPTNO,(rownum+10000) as SALARY from dual connect by level < 6;
create table E3 as select rownum as EMPNO, 'EMPNO'||ROWNUM as EMPNAME, trunc(log(2,rownum)) as DEPTNO,(rownum+20000) as SALARY from dual connect by level < 4;
select * from E1;
EMPNO EMPNAME DEPTNO SALARY
1 EMPNO1 0 1001
2 EMPNO2 1 1002
3 EMPNO3 1 1003
4 EMPNO4 1 1004
5 EMPNO5 2 1005
6 EMPNO6 2 1006
7 EMPNO7 2 1007
8 EMPNO8 2 1008
9 EMPNO9 3 1009
UPDATE E1 aa
SET salary =
(SELECT MAX (b.salary)
FROM E2 a, E3 b
WHERE A.EMPNO = B.EMPNO
AND AA.EMPNO = B.EMPNO)
WHERE EXISTS
(SELECT 1
FROM E2 a, E3 b
WHERE A.EMPNO = B.EMPNO
AND AA.EMPNO = B.EMPNO);
select * from E1;
EMPNO EMPNAME DEPTNO SALARY
1 EMPNO1 0 20001
2 EMPNO2 1 20002
3 EMPNO3 1 20003
4 EMPNO4 1 1004
5 EMPNO5 2 1005
6 EMPNO6 2 1006
7 EMPNO7 2 1007
8 EMPNO8 2 1008
9 EMPNO9 3 1009
Is there a better way to write this (as a single update statement) that avoids selecting from the tables twice?
Thanks for your time Guru.
Appreciate your services to the oracle community.
Regards
V.Hari
December 08, 2010 - 12:52 pm UTC
ops$ora11gr2%ORA11GR2> merge into e1
2 using (select empno, max(salary) salary
3 from e3
4 where empno in (select empno from e2)
5 group by empno) new_e3
6 on ( e1.empno = new_e3.empno )
7 when matched then update set salary = new_e3.salary;
3 rows merged.
appears to be equivalent and will tend to use nice big full scans and hash joins instead of indexes.
Thanks
V.Hari, December 08, 2010 - 1:13 pm UTC
Mr.Oracle,
Thanks for your time and effort. You just taught me one more query-rewriting option.
I added indexes to the above tables E1, E2, E3:
create index idx_e1 on E1(EMPNO);
create index idx_e2 on E2(EMPNO);
create index idx_e3 on E3(EMPNO);
UPDATE E1 aa
SET salary =
(SELECT MAX (b.salary)
FROM E2 a, E3 b
WHERE A.EMPNO = B.EMPNO
AND AA.EMPNO = B.EMPNO)
WHERE EXISTS
(SELECT 1
FROM E2 a, E3 b
WHERE A.EMPNO = B.EMPNO
AND AA.EMPNO = B.EMPNO);
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | 18 | 7 (29)| 00:00:01 |
| 1 | UPDATE | E1 | | | | |
| 2 | MERGE JOIN SEMI | | 1 | 18 | 4 (25)| 00:00:01 |
| 3 | INDEX FULL SCAN | IDX_E1 | 9 | 45 | 1 (0)| 00:00:01 |
|* 4 | SORT UNIQUE | | 3 | 39 | 2 (50)| 00:00:01 |
| 5 | VIEW | VW_SQ_1 | 3 | 39 | 1 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 3 | 12 | 1 (0)| 00:00:01 |
| 7 | INDEX FULL SCAN | IDX_E3 | 3 | 6 | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | IDX_E2 | 1 | 2 | 0 (0)| 00:00:01 |
| 9 | SORT AGGREGATE | | 1 | 8 | | |
| 10 | NESTED LOOPS | | | | | |
| 11 | NESTED LOOPS | | 1 | 8 | 2 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | IDX_E2 | 1 | 2 | 1 (0)| 00:00:01 |
|* 13 | INDEX RANGE SCAN | IDX_E3 | 1 | | 0 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| E3 | 1 | 6 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("AA"."EMPNO"="ITEM_1")
filter("AA"."EMPNO"="ITEM_1")
8 - access("A"."EMPNO"="B"."EMPNO")
12 - access("A"."EMPNO"=:B1)
13 - access("B"."EMPNO"=:B1)
filter("A"."EMPNO"="B"."EMPNO")
merge into e1
using (select empno, max(salary) salary
from e3
where empno in (select empno from e2)
group by empno) new_e3
on ( e1.empno = new_e3.empno )
when matched then update set salary = new_e3.salary;
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 3 | 48 | 5 (20)| 00:00:01 |
| 1 | MERGE | E1 | | | | |
| 2 | VIEW | | | | | |
| 3 | MERGE JOIN | | 3 | 138 | 5 (20)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID | E1 | 9 | 180 | 2 (0)| 00:00:01 |
| 5 | INDEX FULL SCAN | IDX_E1 | 9 | | 1 (0)| 00:00:01 |
|* 6 | SORT JOIN | | 3 | 78 | 3 (34)| 00:00:01 |
| 7 | VIEW | | 3 | 78 | 2 (0)| 00:00:01 |
| 8 | SORT GROUP BY | | 3 | 24 | 2 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 3 | 24 | 2 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID| E3 | 3 | 18 | 2 (0)| 00:00:01 |
| 11 | INDEX FULL SCAN | IDX_E3 | 3 | | 1 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | IDX_E2 | 1 | 2 | 0 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("E1"."EMPNO"="NEW_E3"."EMPNO")
filter("E1"."EMPNO"="NEW_E3"."EMPNO")
12 - access("EMPNO"="EMPNO")
Not a considerable difference, but yes, I will test it with a large data set.
Thanks again Guru.
Regards
V.Hari
December 08, 2010 - 1:39 pm UTC
you do NOT want any indexes to be used here (unless they were being used with an index fast full scan - that is, the index is replacing the table entirely) - row-by-row index access would be painfully slow
Thanks
V.Hari, December 08, 2010 - 1:48 pm UTC
Mr.Oracle,
Thanks guru.
Regards
V.Hari
Best Wishes from SQL
Rajeshwaran, Jeyabal, December 24, 2010 - 10:58 pm UTC
Tom:
Wish you a Merry Christmas and a happy long weekend. Here is a SQL tree to wish you "Merry Christmas":
scott@10G> select decode
2 (sign (floor (maxwidth/2) - rownum)
3 ,1, lpad (' ', floor (maxwidth/2) - (rownum -1))
4 ||rpad ('*', 2 * (rownum -1), ' *')
5 ,lpad ('* * *', floor (maxwidth/2) + 3)) "Merry Christmas"
6 from dual
7 , (select 40 as maxwidth from dual)
8 connect by level < floor (maxwidth/2) + 5
9 /
Merry Christmas
-----------------------------------------------------------------------
*
* *
* * *
* * * *
* * * * *
* * * * * *
* * * * * * *
* * * * * * * *
* * * * * * * * *
* * * * * * * * * *
* * * * * * * * * * *
* * * * * * * * * * * *
* * * * * * * * * * * * *
* * * * * * * * * * * * * *
* * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * *
* * *
* * *
* * *
* * *
* * *
24 rows selected.
http://rajeshwaranbtech.blogspot.com/2010/02/sql-talking.html
Kumar, January 10, 2011 - 7:46 am UTC
create table inventory (uid number, pnum varchar2(10), condId number);
insert into inventory values (1, 'p001', 1);
insert into inventory values (2, 'p001', 2);
insert into inventory values (3, 'p001', 3);
insert into inventory values (4, 'p002', 2);
insert into inventory values (5, 'p002', 3);
insert into inventory values (6, 'p002', 4);
insert into inventory values (7, 'p003', 4);
insert into inventory values (8, 'p004', 3);
insert into inventory values (9, 'p005', 2);
------Condition is master table for pnum condition
create table condition (condId number, cond varchar2(10));
insert into condition values (1, 'SV');
insert into condition values (2, 'OH');
insert into condition values (3, 'NS');
insert into condition values (4, 'NE');
--assume condition priority is in the ascending order
We have a web application that searches at most 20 pnums, each with an associated condition, at a time.
I need a query to fetch all the specified pnums with their conditions.
Suppose I have searched for 4 parts:
1) p001 with 1
2) p002 with 3
3) p003 with 4
4) p005 with 2
I need the UIDs from the first table satisfying each searched condition, plus every higher condition in the priority order. That means:
if I search for (pnum, condition) = (p001, 1), it should fetch all of p001's uids whose condition is 1, 2, 3 or 4;
if I search for (p001, 2), it should fetch all of p001's uids whose condition is 2, 3 or 4;
if I search for (p001, 3), it should fetch all of p001's uids whose condition is 3 or 4.
January 10, 2011 - 8:25 am UTC
web application is too vague.
Can this web application call stored procedures?
Can it deal with ref cursors?
Can it do collections?
What is it written in?
Kumar, January 10, 2011 - 8:12 am UTC
Just want to complete the last question - I hit enter by mistake.
We have lots of data - approximately 80,000 rows in inventory - so I don't want to use the IN operator or many OR clauses.
Just to make the example clear:
I have searched for three parts in one go:
1) p001 with 1
2) p002 with 3
3) p005 with 2
Need result:
UID pnum
1 p001
2 p001
3 p001
5 p002
9 p005
because p005 has only one matching record in the inventory table.
Please help.
Thanks in advance.
January 10, 2011 - 8:27 am UTC
80,000 is teeny tiny
IN might be the way to do this. OR is not evil. IN is an OR and OR can be brutally efficient. I will not rule anything out.
but again - 80,000 is teeny.
I don't get what the second table is for either - it doesn't seem to come into use. Why is it there - just to confuse?
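To make Tom's point concrete: for up to 20 (pnum, condition) pairs, a plain OR list works fine at this size, and `>=` expresses the "this condition or any lower-priority condid" requirement directly. A sketch against the inventory table above (the three pairs are the poster's example; only three of the twenty branches are shown):

```sql
-- One OR branch per searched pair; condid >= :n returns the searched
-- condition and every later one in the ascending priority order.
select i.uid, i.pnum
from   inventory i
where  (i.pnum = 'p001' and i.condid >= 1)
   or  (i.pnum = 'p002' and i.condid >= 3)
   or  (i.pnum = 'p005' and i.condid >= 2)
order by i.uid;
```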
Kumar
Kumar, January 12, 2011 - 1:16 am UTC
Thanks for the response.
Both tables have many columns; I have just provided the insert script for the necessary ones.
What I had thought of is a WHERE condition like:
WHERE (PNUM='p001' AND condition IN ())
OR (PNUM='p002' AND condition IN ())
OR (PNUM='p005' AND condition IN ())
We have to show some nested levels of tables on the front end, and that might kill performance.
What I mean to say is we have about 100,000 (1 lakh) distinct parts, and each can be repeated for all the conditions.
Please help - can we do it with analytics so that it could be faster? If yes, please show me the query.
Thanks
Kumar
Kumar, January 12, 2011 - 1:37 am UTC
Sorry for the incomplete response.
The web application is in Java; it can call stored procedures and handle cursors also.
Please show me some way of preparing the query so that we could also save time building the nested levels in Java.
Hope I have given you a better understanding of my requirement.
Thanks again in advance.
January 12, 2011 - 10:56 am UTC
easiest way:
create global temporary table foo ( pnum varchar2(10), condid number ) on commit delete rows;
exec dbms_stats.set_table_stats( user, 'FOO', numrows => 10 );
have java application BULK INSERT into foo all of the pnums/condids they are interested in.
then
select * from your_table, foo where your_table.pnum = foo.pnum and your_table.condid >= foo.condid;
Kumar
Kumar, January 13, 2011 - 7:01 am UTC
I am very new to Oracle.
Could you please give some example or steps for the first three lines - creating the global temporary table, bulk inserting from Java, and the set_table_stats call - or provide the asktom URL if it has been discussed already?
I will be very thankful to you.
Thanks in advance.
January 13, 2011 - 10:57 am UTC
well, the create for the global temporary table and set up for it - that has been posted.
Hopefully, a java programmer building a web based database application would already know how to execute a query and get the results and do something with them - so step three is done. (if not, please do take the time out to take a course, a class, get some education on the tools you are using)
so that just leaves the bulk insert into the global temporary table. You should probably already know this (it is not oracle specific necessarily, it is just JDBC) and that is documented here
http://docs.oracle.com/docs/cd/E11882_01/java.112/e16548/oraperf.htm#g1068570
Kumar
Kumar, January 14, 2011 - 8:48 am UTC
Thanks for your quick and valuable response.
We have some more complexity here, introduced by the user:
we have at most 20 pnum + condition pairs per search on the front end, and the user can also do a (wildcard search on pnum) + condition. For example, suppose I search 10 parts with a wildcard plus a condition for each, and the remaining 10 with full pnums and conditions. How can we handle the wildcard pairs with the bulk-insert approach? I guess we have to go for a procedure.
What do you suggest? Or you may have some better idea - could you please show me some way to do this effectively?
Thanks in advance.
January 14, 2011 - 12:11 pm UTC
explain yourself fully and more clearly. I did not follow this at all. And please don't dumb down the question - all that does is waste your time and my time since I give you an answer to a question you didn't really mean to ask.
kumar
kumar, January 14, 2011 - 10:22 pm UTC
First of all, sorry for not making myself clear.
What I mean to say is:
suppose I have searched for four parts in one go -
1) p001 with 1
2) p002 with 3
3) p005 with 2
4) p15* with 1 from the front end, i.e. p15% with condid 1 in Oracle.
Then we cannot do a plain bulk insert for the 4th case, so the above suggestion will fail.
Hope this time I have made it clear.
Thanks
kumar
kumar, January 17, 2011 - 11:10 am UTC
It would be very nice if you could please suggest something on the above problem.
January 23, 2011 - 3:21 pm UTC
I occasionally take time off - as I did last week. It would be really nice if I didn't work on my time off :) so, I did not.
use LIKE if you stick wild cards in there:
select * from your_table, foo where your_table.pnum LIKE foo.pnum and your_table.condid >= foo.condid;
Expect it to probably be "not the fastest thing you've ever done"
eliminate letters which appear more than once
zhang yong hui, January 24, 2011 - 6:56 am UTC
Dear tom,
I have a text string like below:
'a,b,c,a,b,d,b,e'
you can see 'a' and 'b' appear more than once; now I want every letter to appear only once, like below:
'a,b,c,d,e'
how can I do this?
Thank you very much for your help.
February 01, 2011 - 10:10 am UTC
ops$tkyte%ORA11GR2> with data
2 as
3 (
4 select token, row_number() over (order by token) rn
5 from
6 (
7 select distinct
8 trim( substr (txt,
9 instr (txt, ',', 1, level ) + 1,
10 instr (txt, ',', 1, level+1)
11 - instr (txt, ',', 1, level) -1 ) )
12 as token
13 from (select ','||:txt||',' txt
14 from dual)
15 connect by level <=
16 length(:txt)-length(replace(:txt,',',''))+1
17 )
18 )
19 select substr(max(sys_connect_by_path(token,',')),2) scbp
20 from data
21 start with rn = 1
22 connect by prior rn = rn-1
23 /
SCBP
-------------------------------------------------------------------------------
a,b,c,d,e
eliminate words which appear more than once
zhang yong hui, January 24, 2011 - 7:48 am UTC
Dear tom,
Sorry to bother you again. For the above question, my real situation is below.
the real text string is like:
'Republic,the,Republic,abc,Republic,the,ok,the,of,the,of,of,Africa,abc,Africa,ok,ok,ok'
how can I translate it to this:
'Republic,the,abc,of,Africa,ok'
February 01, 2011 - 10:17 am UTC
ops$tkyte%ORA11GR2> variable txt varchar2(300)
ops$tkyte%ORA11GR2> exec :txt := 'Republic,the,Republic,abc,Republic,the,ok,the,of,the,of,of,Africa,abc,Africa,ok,ok,ok'
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> with data
2 as
3 (
4 select token, row_number() over (order by min_l) rn
5 from
6 (
7 select min(l) min_l, token
8 from (
9 select level l,
10 trim( substr (txt,
11 instr (txt, ',', 1, level ) + 1,
12 instr (txt, ',', 1, level+1)
13 - instr (txt, ',', 1, level) -1 ) )
14 as token
15 from (select ','||:txt||',' txt
16 from dual)
17 connect by level <=
18 length(:txt)-length(replace(:txt,',',''))+1
19 )
20 group by token
21 )
22 )
23 select substr(max(sys_connect_by_path(token,',')),2) scbp
24 from data
25 start with rn = 1
26 connect by prior rn = rn-1
27 /
SCBP
-------------------------------------------------------------------------------
Republic,the,abc,ok,of,Africa
that'll preserve the original order if you need - else just use the prior query.
Parent Child records from a single table
RSR, February 24, 2011 - 3:34 am UTC
Hi Tom,
I have a table CASCADES with 2 columns - Root and Dependency - and records can cascade to multiple levels. I want to pull up all records where I specify the Root, with all associated child records down to n levels.
-- Script to create Test Data
SET DEFINE OFF;
CREATE TABLE CASCADES
(ROOT VARCHAR2(20),
DEPENDENCY VARCHAR2(20));
Insert into CASCADES (ROOT) Values ('North America');
Insert into CASCADES (ROOT) Values ('Europe');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Canada', 'North America');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('USA', 'North America');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Mexico', 'North America');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('British Colombia', 'Canada');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Florida', 'USA');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Vancouver', 'British Colombia');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Miami', 'Florida');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Fort Lauderdale', 'Florida');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Germany', 'Europe');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('France', 'Europe');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Paris', 'France');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Hamburg', 'Germany');
Insert into CASCADES (ROOT, DEPENDENCY) Values ('Berlin', 'Germany');
COMMIT;
Desired output if I fire SELECT for ROOT = 'Europe' would be -
ROOT DEPENDENCY
-------- --------------
Europe
Germany Europe
France Europe
Paris France
Hamburg Germany
Berlin Germany
How would I be able to do this? Thanks in advance for your help.
February 24, 2011 - 12:42 pm UTC
ops$tkyte%ORA11GR2> select rpad('*',2*level) || root r, dependency
2 from cascades
3 start with root = 'Europe'
4 connect by prior root = dependency
5 /
R DEPENDENCY
-------------------- --------------------
* Europe
* France Europe
* Paris France
* Germany Europe
* Berlin Germany
* Hamburg Germany
6 rows selected.
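On 11gR2 the same hierarchy can also be written with recursive subquery factoring; a sketch equivalent to the CONNECT BY above (the lvl column plays the role of LEVEL):

```sql
-- Recursive WITH clause: the anchor member selects the root,
-- the recursive member joins children to the rows found so far.
with h (root, dependency, lvl) as (
  select root, dependency, 1
  from   cascades
  where  root = 'Europe'
  union all
  select c.root, c.dependency, h.lvl + 1
  from   cascades c
         join h on c.dependency = h.root
)
select rpad('*', 2 * lvl) || root r, dependency
from   h;
```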
Rollup question
Rajeshwaran, Jeyabal, July 05, 2011 - 12:06 am UTC
create table t (x number,y number);
insert into t values(000100,5000.00);
insert into t values(000101,6000.00);
insert into t values(000102,1000.00);
insert into t values(001001,9000.00);
insert into t values(001002,2000.10);
insert into t values(002001,6520.00);
insert into t values(002002,75600.00);
insert into t values(009001,2222.22);
insert into t values(009009,5555.22);
commit;
I have 6-digit account numbers (column x in table 't') and I need totals by account group as shown below.
Suppose the account numbers are 000789, 000790, 000756;
then I need the total of all account numbers whose first four digits are 0007.
Likewise, for 002011, 002012, 002013 I need the total of account numbers starting with 0020.
Now I need the output of my select query in the following format:
x y sum(y)
------- ------- -----
000100 5000.00
000101 6000.00
000102 1000.00
12000.00
001001 9000.00
001002 2000.10
11000.10
002001 6520.00
002002 75600.00
82120
009001 2222.22
009009 5555.22
7777.44
------------
Grand Total 112897.54
I tried this query, but I cannot get the full value of column 'x'.
rajesh@ORA11GR2> select substr(x,1,4) as x,y,sum(y)
2 from (
3 select to_char(x,'fm000000') as x,y
4 from t
5 ) group by rollup(substr(x,1,4),y)
6 /
X Y SUM(Y)
---- ---------- ----------
0001 1000 1000
0001 5000 5000
0001 6000 6000
0001 12000
0010 9000 9000
0010 2000.1 2000.1
0010 11000.1
0020 6520 6520
0020 75600 75600
0020 82120
0090 2222.22 2222.22
0090 5555.22 5555.22
0090 7777.44
112897.54
14 rows selected.
Elapsed: 00:00:00.03
rajesh@ORA11GR2>
Tom:
Can you please help me to get this result as required?
July 05, 2011 - 12:48 pm UTC
ops$tkyte%ORA11GR2> select case when grouping(new_x) = 0 and grouping(short_x) = 0 then new_x
2 when grouping(new_x) = 1 and grouping(short_x) = 0 then null
3 when grouping(new_x) = 1 and grouping(short_x) = 1 then 'Grand Total'
4 end label, sum(y)
5 from (
6 select to_char(x,'fm000000') as new_x, substr( to_char(x,'fm000000'), 1, 4 ) short_x, y
7 from t
8 ) group by grouping sets( (), (new_x,short_x), (short_x) )
9 order by short_x nulls last, new_x nulls last
10 /
LABEL SUM(Y)
----------- ----------
000100 5000
000101 6000
000102 1000
12000
001001 9000
001002 2000.1
11000.1
002001 6520
002002 75600
82120
009001 2222.22
009009 5555.22
7777.44
Grand Total 112897.54
14 rows selected.
Shayam, July 05, 2011 - 1:44 pm UTC
Hi Tom,
I need help with a SQL query against the employees table to return these columns in one statement:
last_name, count(hire_date), count(hire_date) [where hire_date >= to_date('6/7/2002','mm-dd-yyyy')],
and ((count2/count1)*100) as percentage.
select(select count(hire_date) from employees ) as count1,
(select count(hire_date) from employees where hire_date >= to_date('7/18/2004','mm-dd-yyyy')) as count2
from dual
COUNT1 COUNT2
107 86
i need record like this:
LAST_NAME COUNT1 COUNT2 Percentage
Could you please help me on this.
Thanks for your effort...
July 05, 2011 - 4:25 pm UTC
analytics rock and roll...
count(*) over ()
will give your first count, count(expression) over () will give you your second, for example using EMP:
ops$tkyte%ORA11GR2> select ename, to_char(hiredate,'dd-mon-yyyy'),
2 count(*) over () cnt1,
3 count( case when hiredate > to_date( '08-sep-1981', 'dd-mon-yyyy' ) then hiredate end )
4 over () cnt2
5 from emp
6 /
ENAME TO_CHAR(HIR CNT1 CNT2
---------- ----------- ---------- ----------
SMITH 17-dec-1980 14 7
ALLEN 20-feb-1981 14 7
WARD 22-feb-1981 14 7
JONES 02-apr-1981 14 7
MARTIN 28-sep-1981 14 7
BLAKE 01-may-1981 14 7
CLARK 09-jun-1981 14 7
SCOTT 09-dec-1982 14 7
KING 17-nov-1981 14 7
TURNER 08-sep-1981 14 7
ADAMS 12-jan-1983 14 7
JAMES 03-dec-1981 14 7
FORD 03-dec-1981 14 7
MILLER 23-jan-1982 14 7
14 rows selected.
Shayam, July 06, 2011 - 8:43 am UTC
In this query I am getting the same cnt1, cnt2 and percentage for every job, but I don't think they should be the same.
select distinct job,cnt1,cnt2,trunc(((cnt2/cnt1)*100),2) as percentage from (select a.job,b.dname,max(hiredate),count(*) over () cnt1,
count( case when max(hiredate) > to_date( '08-sep-1981', 'dd-mon-yyyy' ) then max(hiredate) end ) over () cnt2
from emp a ,dept b where a.deptno = b.deptno
and a.job not in ('AD_VP','FI_ACCOUNT') group by a.job,b.dname
order by a.job)
JOB CNT1 CNT2 PERCENTAGE
MANAGER 9 6 66.66
SALESMAN 9 6 66.66
ANALYST 9 6 66.66
CLERK 9 6 66.66
PRESIDENT 9 6 66.66
Thanks for your help..
July 08, 2011 - 11:16 am UTC
of course cnt1 is the same - it is the count of the number of records, which is what you asked for:
select(select count(hire_date) from employees ) as count1,
(select count(hire_date) from employees where hire_date >= to_date('7/18/2004','mm-dd-yyyy')) as
count2
from dual
You ASKED for the count(hiredate) to be added to each record.
Similarly, you ASKED for the count of records with a hiredate greater than some value to be added to each record.
The problem with the query I supplied is that it answered the question you ASKED; it does not, however, answer the question you meant to ask (but did not).
So, ask the question you really mean to ask in the first place. Start over - from scratch, be detailed, be precise.
Shayam, July 06, 2011 - 2:34 pm UTC
Tom, Can you help on above query and issue
Reviewer
Sam, July 07, 2011 - 11:18 am UTC
Hi Tom,
I got stuck in the situation like...
The user supplies data to the database via a Java list, and I need to check whether that data is already present in the database. For example, the user provides 2000 serial numbers at a time and I need to check whether each one is present.
So can I do that with a single query, or do we need a procedure for it?
Please suggest...
find the dummy data below...
Create table dummy as select object_id from all_objects;
Now we need to check the 2000 object ids provided by the user.
Many thanks in advance
July 08, 2011 - 2:25 pm UTC
I would have them
a) bulk (batch) insert that data into a global temporary table.
b) run a query that tells us the answer, for example:
select serial# from gtt where serial# not in (select serial# from other_table);
if that query returns anything, those will be the serial#s that are not in the other table.
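A minimal sketch of that approach, assuming a table named other_table with a serial# column (the table and column names here are illustrative):

```sql
-- One-time DDL: a session-private scratch table for the incoming list.
create global temporary table gtt
( serial#  number )
on commit delete rows;

-- The client batch-inserts its 2000 values (one INSERT statement,
-- executed in batch), then asks which of them are missing:
select serial#
  from gtt
 where serial# not in ( select serial#
                          from other_table
                         where serial# is not null );
```

Note the IS NOT NULL guard: NOT IN returns no rows at all if the subquery can produce a NULL, so either guard against NULLs as shown or use NOT EXISTS instead.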
Follow up
Sam, July 09, 2011 - 12:00 pm UTC
Hi Tom,
Thanks for the reply.
I need to write a procedure, then bulk insert into the global temp table, and I have to return the data set as a ref cursor from the procedure. Will there be any issue if the data set is more than 100,000 rows?
Please suggest.
Or, if we insert the data into the temporary table, should we use that query in JPA to fetch the data?
Please suggest which one is better.
July 12, 2011 - 7:05 am UTC
I don't know what you are trying to ask here.
Please be a bit more clear. "I need to write a procedure then bulk insert in global temp table"?
I have (and will continue) to suggest that the client application (whatever it is written using, be it JPA, ABC, XYZ) do this:
a) bulk insert the "in list" items into a global temporary table you have created for this purpose
b) you open a query that uses that global temporary table as a subquery to filter the results.
that will work for zero to infinity sets of rows.
Follow up
Sam, July 12, 2011 - 10:22 am UTC
Hi Tom,
Sorry for the confusion. What I want to know is how to insert the data into the global temporary table. Do we need to write a stored procedure for that, or is there some other alternative?
In Java we have the serial numbers in a CSV file.
We are converting it to a list and passing 1000 records at a time.
Please suggest how to insert 100,000 records into the global temporary table.
Thanks
Sam
July 13, 2011 - 1:57 pm UTC
same way you insert into any table, using the INSERT statement - in batch.
Your jdbc programmers will know what to do.
if they do not, hire new ones, inserting into a table is beyond basic.
Follow up
Sam, July 14, 2011 - 10:12 am UTC
Thanks for your response Tom.
We know how to insert data into a global temporary table or any other table.
We have 1 million records.
So per your comment we need to loop and insert the data one row at a time, and that will again be a performance issue.
That is why I asked earlier whether we need to write a stored procedure to use the Oracle bulk insert features, OR
do you know of any other alternative for the same?
July 15, 2011 - 9:11 am UTC
I said above, repeatedly - BATCH
same way you insert into any table, using the INSERT statement - in batch.
a) bulk insert the "in list" items into a global temporary table you have created for this purpose
NOT one by one, in bulk, in batch
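If the list does end up passing through PL/SQL rather than being batched directly from JDBC, the bulk equivalent there is FORALL; a sketch, assuming the gtt table from above and a collection of serial numbers (the values shown are illustrative):

```sql
declare
  type num_list is table of number;
  l_serials num_list := num_list( 1001, 1002, 1003 );  -- illustrative values
begin
  -- one round trip to the SQL engine for the entire collection,
  -- not one INSERT per element
  forall i in 1 .. l_serials.count
    insert into gtt ( serial# ) values ( l_serials(i) );
end;
/
```

From JDBC itself, the same effect comes from addBatch/executeBatch on a single prepared INSERT statement.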
@Sam re:batch
Stew Ashton, July 15, 2011 - 11:19 am UTC
Thanks
Sam, July 15, 2011 - 12:50 pm UTC
Thanks Tom and Stew ...
Hi...
Sam, July 19, 2011 - 3:07 am UTC
Hi Tom
I need some help.
I have an emp table with an empno column. What I want as output is two columns, where the second column of each row is the empno value from the next row.
E.g. the output will be like:
Col1 Col2
1000 1007
1007 1010
1010 1100
1100
July 19, 2011 - 8:00 am UTC
ops$tkyte%ORA11GR2> select empno, lead(empno) over (order by empno) from scott.emp order by empno;
EMPNO LEAD(EMPNO)OVER(ORDERBYEMPNO)
---------- -----------------------------
7369 7499
7499 7521
7521 7566
7566 7654
7654 7698
7698 7782
7782 7788
7788 7839
7839 7844
7844 7876
7876 7900
7900 7902
7902 7934
7934
14 rows selected.
Thanks
Sam, July 20, 2011 - 11:53 am UTC
Thanks Tom for your help
Getting ORA-00935
Sam, September 30, 2011 - 10:07 am UTC
Dear Tom,
We have a report requirement where we have to find the age of each item with respect to its received date.
For that purpose I have created the query below, in which we first calculate which time period each item falls into.
After that we find the period count for each item, and using an inline view we try to get the sum over all items.
But we are getting ORA-00935: group function is nested too deeply.
The query is:
SELECT a.circle_name,a.company_name,a.location_name,a.item_group_name,a.item_name,a.item_desc,sum(nvl(a.days_0,0),sum(nvl(a.days_30,0),
sum(nvl(a.days_60,0),sum(nvl(a.days_90,0),sum(nvl(a.days_120,0),sum(nvl(a.days_150,0),sum(nvl(a.days_180,0),sum(nvl(a.days_210,0),
sum(nvl(a.days_240,0),sum(nvl(a.days_270,0),sum(nvl(a.days_300,0),sum(nvl(a.days_330,0),sum(nvl(a.days_360,0),sum(nvl(a.days_720,0),
sum(nvl(a.days_1080,0) from
(SELECT stk.circle_name,stk.company_name,stk.location_name,stk.item_group_name,stk.item_name,stk.item_desc,
--sum(period_count)
(CASE
WHEN period_range = '<= 30 Days'
THEN period_count
END
) AS days_0,
(CASE
WHEN period_range = '> 30 and <= 60 days'
THEN period_count
END
) AS days_30,
(CASE
WHEN period_range = '> 60 and <= 90 days'
THEN period_count
END
) AS days_60,
(CASE
WHEN period_range = '> 90 and <= 120 days'
THEN period_count
END
) AS days_90,
(CASE
WHEN period_range = '> 120 and <= 150 days'
THEN period_count
END
) AS days_120,
(CASE
WHEN period_range = '> 150 and <= 180 days'
THEN period_count
END
) AS days_150,
(CASE
WHEN period_range = '> 180 and <= 210 days'
THEN period_count
END
) AS days_180,
(CASE
WHEN period_range = '> 210 and <= 240 days'
THEN period_count
END
) AS days_210,
(CASE
WHEN period_range = '> 240 and <= 270 days'
THEN period_count
END
) AS days_240,
(CASE
WHEN period_range = '> 270 and <= 300 days'
THEN period_count
END
) AS days_270,
(CASE
WHEN period_range = '> 300 and <= 330 days'
THEN period_count
END
) AS days_300,
(CASE
WHEN period_range = '> 330 and <= 360 days'
THEN period_count
END
) AS days_330,
(CASE
WHEN period_range = '> 360 and <= 720 days'
THEN period_count
END
) AS days_360,
(CASE
WHEN period_range = '> 720 and <= 1080 days'
THEN period_count
END
) AS days_720,
(CASE
WHEN period_range = '> 1080 days'
THEN period_count
END
) AS days_1080 from (
SELECT circle_name,company_name,location_name,item_group_name,item_name,item_desc,
period_range,count(period_range) as period_count from (
SELECT circle.circle_name, company.company_name,
warehouse.location_name, itemgrp.item_group_name,
item.item_name, item.item_desc, SYSDATE,
NVL ((SELECT SUM (stock_count)
FROM cpos_invt_trn_item_tst_stock stk
WHERE item_location_id = asso.item_location_id),0 ) AS Stock_count,
CASE
WHEN rec.material_receipt_date >
SYSDATE - 30
AND rec.material_receipt_date <= SYSDATE - 1
THEN '<= 30 Days'
WHEN rec.material_receipt_date > SYSDATE - 60
AND rec.material_receipt_date <= SYSDATE - 31
THEN '> 30 and <= 60 days'
WHEN rec.material_receipt_date > SYSDATE - 90
AND rec.material_receipt_date <= SYSDATE - 61
THEN '> 60 and <= 90 days'
WHEN rec.material_receipt_date > SYSDATE - 120
AND rec.material_receipt_date <= SYSDATE - 91
THEN '> 90 and <= 120 days'
WHEN rec.material_receipt_date > SYSDATE - 150
AND rec.material_receipt_date <= SYSDATE - 121
THEN '> 120 and <= 150 days'
WHEN rec.material_receipt_date > SYSDATE - 180
AND rec.material_receipt_date <= SYSDATE - 151
THEN '> 150 and <= 180 days'
WHEN rec.material_receipt_date > SYSDATE - 210
AND rec.material_receipt_date <= SYSDATE - 181
THEN '> 180 and <= 210 days'
WHEN rec.material_receipt_date > SYSDATE - 240
AND rec.material_receipt_date <= SYSDATE - 211
THEN '> 210 and <= 240 days'
WHEN rec.material_receipt_date > SYSDATE - 270
AND rec.material_receipt_date <= SYSDATE - 241
THEN '> 240 and <= 270 days'
WHEN rec.material_receipt_date > SYSDATE - 300
AND rec.material_receipt_date <= SYSDATE - 271
THEN '> 270 and <= 300 days'
WHEN rec.material_receipt_date > SYSDATE - 330
AND rec.material_receipt_date <= SYSDATE - 301
THEN '> 300 and <= 330 days'
WHEN rec.material_receipt_date > SYSDATE - 360
AND rec.material_receipt_date <= SYSDATE - 331
THEN '> 330 and <= 360 days'
WHEN rec.material_receipt_date > SYSDATE - 720
AND rec.material_receipt_date <= SYSDATE - 360
THEN '> 360 and <= 720 days'
WHEN rec.material_receipt_date > SYSDATE - 1080
AND rec.material_receipt_date <= SYSDATE - 720
THEN '> 720 and <= 1080 days'
WHEN rec.material_receipt_date < SYSDATE - 1080
THEN '> 1080 days'
ELSE '0'
END period_range
FROM cpos_cmn_mst_circle circle,
cpos_cmn_mst_region_state state,
cpos_cmn_mst_location warehouse,
cpos_cmn_mst_company company,
cpos_invt_mst_item item,
cpos_invt_mst_item_grp itemgrp,
cpos_invt_trn_material_rec_srn srn,
cpos_invt_trn_material_rec_dtl dtl,
cpos_invt_trn_material_receipt rec,
CPOS_INVT_ITEM_LOCATION_ASSO asso
WHERE rec.circle_id = circle.circle_id
AND circle.circle_id = state.circle_id
AND state.region_state_id = warehouse.region_state_id
AND rec.location_id = warehouse.location_id
AND rec.company_id = company.company_id
AND itemgrp.item_group_id = item.item_group_id
AND srn.circle_id = circle.circle_id
AND rec.material_receipt_id = dtl.material_receipt_id
AND dtl.item_id = item.item_id
AND dtl.material_rec_dtl_id = srn.material_rec_dtl_id
AND asso.ITEM_ID = item.item_id
AND asso.CIRCLE_ID = circle.circle_id
AND asso.LOCATION_ID = warehouse.location_id
AND circle.circle_id = 4 )
GROUP BY circle_name,
company_name,
location_name,
item_group_name,
item_name,
item_desc,period_range)stk) a
GROUP BY a.circle_name, a.company_name,
a.location_name,
a.item_group_name,
a.item_name,
a.item_desc,period_range.
Please help me sort out this error.
Many thanks in advance.
Sam
September 30, 2011 - 7:07 pm UTC
a) whittle the query down - removing anything not relevant to the problem. that is, make it reproduce with a LOT LESS SQL.
b) supply the schema so we can actually run the query too.
I am not a sql compiler - believe it or not. I too rely on the database engine to parse and compile the sql for me.
Hi...
Sam, September 30, 2011 - 11:41 pm UTC
I have mentioned the ORA error.
If you have any idea then suggest it; otherwise leave it.
Don't give explanations like that.
October 01, 2011 - 5:19 am UTC
don't ask a question like that
there, now we are even.
not really though, I wasn't trying to get help - I was trying to give it.
You reap what you sow.
Tell you what, when you can tell me why this query gets this error:
select * from t;
I'm getting an ora-1400 on that.
I'll tell you what your problem is. We both have the same amount of context - that is - NOTHING.
(you would need to have a test case from me that reproduces the issue to answer the question... otherwise you'd have to guess at every possible cause and you'd miss quite a few)
why do people expect just to get immediate answers these days? what happened to critical thinking, explaining yourself, setting things up, developing test cases?? It is after all what people in Information Technology, software engineering, are supposed to excel at.
Man up and do some work here. Make it easy for people to address your issues.
how to show data in hourly basis in 24 hours
Chetan, October 04, 2011 - 3:04 am UTC
Hi, Tom
Actually, in my project there is SCADA software that dumps data into the database every second. What I have to do is show the data on an hourly basis: when data is dumped into the database at 1 o'clock in the morning, it must show the 1 o'clock data; if data is dumped at 2 o'clock, it must show the data for both 1 o'clock and 2 o'clock; and so on, up to showing all 24 hours of data.
This is the query I am working on, but it is not working as I want:
string qry = "SELECT * FROM " + item1 + " with (NOLOCK) WHERE (CURDATE BETWEEN DATEDIFF(day, - 0, GETDATE()) AND GETDATE()) ORDER BY CONVERT(VARCHAR(MAX),CURTIME,108) ASC";
October 04, 2011 - 11:31 am UTC
why would you be asking me to solve a sqlserver problem you are having??!?!?!?
Alexander, October 04, 2011 - 12:02 pm UTC
Whatever happened to that delete button you have, Tom... :)
Rounding issue
Rahul, October 18, 2011 - 12:43 pm UTC
I need to round so that 1.36 becomes 1.5, 1.52 becomes 2, and 1.5 stays 1.5. That is just an example; the numbers can be different.
October 18, 2011 - 5:03 pm UTC
why not write it as a specification then? why give an example that could lead people to confusion?
so, what would 1.15 round to? what about 1.25? do we always round up to the nearest half?
write this as a specification - not as a query by example.
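For what it's worth, if the specification turns out to be "always round up to the nearest half", one way to express that is to double the number, take CEIL, and halve the result; a sketch:

```sql
select n,
       ceil( n * 2 ) / 2 as rounded_up_to_half
  from ( select 1.36 n from dual union all
         select 1.52   from dual union all
         select 1.5    from dual );

-- 1.36 -> 1.5,  1.52 -> 2,  1.5 -> 1.5
```

Whether a value such as 1.25 should become 1.5 under this rule (it would) is exactly the sort of case the specification needs to pin down.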
I need Table structure,query & PL/SQL Code for below stmt
Pal, February 09, 2012 - 4:37 am UTC
Annual Leave Plan is prepared by supervisor with employee's inputs. When an employee requests for leave, it should be validated against this plan.
February 09, 2012 - 5:47 am UTC
thank you for making me laugh out loud. I really appreciate that.
Send me $5,000 and I'll send you a prototype - ok :)
seriously, what are you thinking????
Alexander, February 09, 2012 - 8:55 am UTC
That's just downright scary.
Insert INTO Select SQL query
PAGAR, February 09, 2012 - 12:25 pm UTC
Hi Tom,
Hope you are doing great !
I am working on mapping scripts to migrate data from staging tables into new tables in Oracle 11g. My mapping scripts mostly contain INSERT INTO SELECT statements. Could you please let me know how I can commit after every 1 million records? I am asking because we can have a scenario where the INSERT INTO SELECT fails after 1,000,001 records and all of the inserted data gets rolled back, so we have to start again from the beginning.
Insert INTO Select SQL query
Rajeshwaran, Jeyabal, March 07, 2012 - 1:57 am UTC
Please could you let me know how can I do a commit after every 1 million records
Don't ever do that - that would be a slow-by-slow approach.
Insert into select fails after 1 million 1 record in some case and all inserted data gets rollback . So again we have to start from beginning
- What error are you getting? Show us the error message along with the script you have. (Are you running out of tablespace? If yes, then increase its size.) That would be helpful for anyone trying to address your issue.
Sequence
A reader, March 19, 2012 - 9:19 pm UTC
Hi Tom,
What would be the way to generate an alphanumeric sequence.
I want my Sequence to look like
FA000
FA001
...
FA999
FB000
FB001
...
FB999
...
FY999
So my first character 'F' is fixed. The second character is 'A' while the last three characters run from 000 to 999, then it rolls over to 'B' in the same way, and so on up to FY999.
The sequence starts at FA000 and ends at FY999.
Please suggest the possible way.
March 20, 2012 - 9:09 am UTC
ops$tkyte%ORA11GR2> create sequence s start with 0 minvalue 0 maxvalue 25000;
Sequence created.
ops$tkyte%ORA11GR2> select 'F' || chr( ascii('A')+trunc(s.nextval/1000) ) || to_char( mod(s.nextval,1000), 'fm000' )
2 from all_objects;
'F'||C
------
FA000
FA001
FA002
...
FA998
FA999
FB000
FB001
...
FY998
FY999
ERROR:
ORA-08004: sequence S.NEXTVAL exceeds MAXVALUE and cannot be instantiated
25000 rows selected.
Just please remember, there WILL BE GAPS - that is unavoidable with sequences, there will be some gaps in real life. Things roll back, things age out of the shared pool. there will be gaps (unless you want to *serialize* of course)
Solution to above question
A reader, March 20, 2012 - 3:42 am UTC
Hi Tom,
Forgot to add the solution I had used:
CREATE SEQUENCE myseq MINVALUE 0 START WITH 0 INCREMENT BY 1 MAXVALUE 24999 NOCYCLE;
SELECT 'F' || CONCAT (CHR (ASCII ('A') + TRUNC (myseq.NEXTVAL / 1000)),
LPAD(TO_CHAR (MOD (myseq.CURRVAL, 1000)),3,0))
COMPOSITE_KEY
FROM DUAL
CONNECT BY LEVEL <= 25000;
select myseq.NEXTVAL from dual; --Throws Error
drop sequence myseq;
Is this going to be fine with multiple users accessing the application? We only have up to 25,000 rows, so I am putting that constraint on the sequence.
Please suggest.
SQL Query for dates
A reader, March 22, 2012 - 7:43 pm UTC
Hi Tom,
I have a requirement for comparing differences in dates.
I have to calculate the missing dates between consecutive rows for a given col1 value. For the first row I have to take the difference between parameter1 and the value of the first record.
If my last row's col2 value is greater than parameter2, I consider that as NULL. Please run the attached query to understand the results.
I would need the same results, but if you have a more effective way, please do let me know.
create table t ( col1 varchar2(30),col2 date, col3 date );
CASE 1:
insert into t values ( 'ABC', to_date( '01-JAN-2005'), to_date( '31-DEC-2012') );
CASE 2:
insert into t values ( 'DEF', to_date( '01-JAN-2005'), to_date( '31-DEC-2012') );
insert into t values ( 'DEF', to_date( '01-JAN-2013'), to_date( '31-DEC-2020') );
insert into t values ( 'DEF', to_date( '01-MAR-2021'), to_date( '31-MAR-2021') );
insert into t values ( 'DEF', to_date( '01-JUN-2021'), to_date( '31-JUL-2021') );
I will pass the start date and end date and the col1 value.
Case 1: col1 value='ABC' , parameter1 = to_date('20060202','YYYYMMDD') and parameter2 = to_date('20021201','YYYYMMDD')
RESULT:
COL1 COL2 COl3
ABC 01-DEC-2002 31-DEC-2004
Case 2: col1 value='DEF' parameter1 = to_date('20031201','YYYYMMDD') and parameter2 = to_date('20220303','YYYYMMDD')
Result
COL1 COL2 COl3
DEF 01-DEC-2003 31-DEC-2004
DEF 01-JAN-2021 28-FEB-2021
DEF 01-APR-2021 31-MAY-2021
DEF 01-AUG-2021 03-MAR-2022
I want the results like:
with tbl1 as (
--Get all the rows with the max and min of the record count
select col1,col2, col3, MAX(rownum) OVER(partition by col1 ) max_RowCount,
MIN(rownum) OVER(partition by col1) min_RowCount
FROM t
where col1='DEF'
and col2 <= to_date('20220303','YYYYMMDD')
and col3 >= to_date('20031201','YYYYMMDD')
)
SELECT col1,
CASE WHEN rownum = max_RowCount and max_RowCount != 1 THEN
--Check if the last row date is smaller than the second parameter. If yes, then we set it as NULL as it is not expired
CASE WHEN LAST_VALUE(col3) OVER (partition by col1 ORDER BY col1) < to_date('20220303','YYYYMMDD') THEN
to_date('20220303','YYYYMMDD') - DECODE( to_date('20220303','YYYYMMDD') - LAST_VALUE(col3) OVER (partition by col1 ORDER BY col1),1,NULL,
( to_date('20220303','YYYYMMDD') - LAST_VALUE(col3) OVER (partition by col1 ORDER BY col1)) - 1)
ELSE
NULL
END
--Check if there are more than one rows for the id, then set the value for the first row
WHEN rownum = min_RowCount and max_RowCount != 1 then
to_date('20031201','YYYYMMDD')
--Check if there are only one row for the id, then set the value for the first row
WHEN rownum = min_RowCount and max_RowCount = 1 then
CASE WHEN FIRST_VALUE(col2) OVER (partition by col1 ORDER BY col1) = to_date('20031201','YYYYMMDD') THEN
NULL
ELSE
to_date('20031201','YYYYMMDD')
END
ELSE
--Get the difference of the leading row date
LEAD(col2, 1) OVER (partition by col1 ORDER BY col1) - DECODE(
-- if the leading date difference is 1 then set is NULL as that was the next day
(LEAD(col2, 1) OVER (partition by col1 ORDER BY col1) -col3)
,1,NULL,
(LEAD(col2, 1) OVER (partition by col1 ORDER BY col1) -col3)-1)
END start_dates_gap,
CASE WHEN rownum = max_RowCount and max_RowCount != 1 THEN
CASE WHEN LAST_VALUE(col3) OVER (partition by col1 ORDER BY col1) < to_date('20220303','YYYYMMDD') THEN
to_date('20220303','YYYYMMDD')
ELSE
NULL
END
WHEN rownum = min_RowCount and max_RowCount != 1 THEN
to_date('20031201','YYYYMMDD') + (FIRST_VALUE(col2) OVER (partition by col1 ORDER BY col1) - to_date('20031201','YYYYMMDD')) - 1
WHEN rownum = min_RowCount and max_RowCount = 1 THEN
CASE WHEN LAST_VALUE(col2) OVER (partition by col1 ORDER BY col1) = to_date('20031201','YYYYMMDD') THEN
NULL
ELSE
to_date('20031201','YYYYMMDD') + (FIRST_VALUE(col2) OVER (partition by col1 ORDER BY col1) - to_date('20031201','YYYYMMDD')) - 1
END
ELSE
LEAD(col2, 1) OVER (partition by col1 ORDER BY col1) - DECODE(
(LEAD(col2, 1) OVER (partition by col1 ORDER BY col1) -col3)
,1,NULL,1)
END end_dates_gap
FROM tbl1
where col1='DEF'
and col2 <= to_date('20220303','YYYYMMDD')
and col3 >= to_date('20031201','YYYYMMDD')
;
March 22, 2012 - 7:57 pm UTC
I don't get this:
CASE 2:
insert into t values ( 'DEF', to_date( '01-JAN-2005'), to_date( '31-DEC-2012')
);
insert into t values ( 'DEF', to_date( '01-JAN-2013'), to_date( '31-DEC-2020')
);
insert into t values ( 'DEF', to_date( '01-MAR-2021'), to_date( '31-MAR-2021')
);
insert into t values ( 'DEF', to_date( '01-JUN-2021'), to_date( '31-JUL-2021')
);
I will pass the start date and end date and the col1 value.
Case 2: col1 value='DEF' parameter1 = to_date('20031201','YYYYMMDD') and parameter2 = to_date('20220303','YYYYMMDD')
Result
COL1 COL2 COl3
DEF 01-DEC-2003 31-DEC-2004
DEF 01-JAN-2021 28-FEB-2021
DEF 01-APR-2021 31-MAY-2021
DEF 01-AUG-2021 03-MAR-2022
what happened to
insert into t values ( 'DEF', to_date( '01-JAN-2005'), to_date( '31-DEC-2012')
);
insert into t values ( 'DEF', to_date( '01-JAN-2013'), to_date( '31-DEC-2020')
);
and where did 31-dec-2004 magically appear from? In fact, where did any of those dates come from in the output?
explain it in text, not by code, be very very clear - be very very detailed, use psuedo code, use whatever - but explain it.
Follow up
A reader, March 22, 2012 - 8:52 pm UTC
Sorry for the confusion.
I will explain here. There was a typo in the dates; it is fixed now.
CASE 1:
1. insert into t values ( 'ABC', to_date( '01-JAN-2005'), to_date( '31-DEC-2012') );
Here col1 value='ABC' , parameter1 = to_date('20021201','YYYYMMDD') and parameter2 = to_date('20060202','YYYYMMDD')
Explanation
1. Here I have passed parameter1 = to_date('20021201','YYYYMMDD'). The first row has col2 = to_date('01-JAN-2005'), so there is a gap of days between the two, and I calculate the start and end of that gap:
COL1 start_date_gap end_date_gap
ABC 01-DEC-2002 31-DEC-2004
Since the second parameter passed is to_date('20060202','YYYYMMDD'), which is smaller than the row's col3 value of to_date('31-DEC-2012'), we don't consider it.
CASE 2:
1. insert into t values ( 'DEF', to_date( '01-JAN-2005'), to_date( '31-DEC-2012') );
2. insert into t values ( 'DEF', to_date( '01-JAN-2013'), to_date( '31-DEC-2020') );
3. insert into t values ( 'DEF', to_date( '01-MAR-2021'), to_date( '31-MAR-2021') );
4. insert into t values ( 'DEF', to_date( '01-JUN-2021'), to_date( '31-JUL-2021') );
Here col1 value='DEF' parameter1 = to_date('20031201','YYYYMMDD') and parameter2 = to_date('20220303','YYYYMMDD')
Result
COL1 COL2 COl3
DEF 01-DEC-2003 31-DEC-2004
DEF 01-JAN-2021 28-FEB-2021
DEF 01-APR-2021 31-MAY-2021
DEF 01-AUG-2021 03-MAR-2022
Explanation:
1. My requirement is to see whether any dates are missing between consecutive rows. As you can see, my first row has col2 = '01-JAN-2005' and I pass parameter1 = to_date('20031201','YYYYMMDD'),
which means there is a gap between these dates.
col1 start_date_gap end_date_gap
DEF 01-DEC-2003 31-DEC-2004 --Min and Max Dates between Difference between the first row data and the first parameter (in this case it is: 01-DEC-2003 to 31-DEC-2004)
2. Now if you look, the col3 of the first row is '31-DEC-2012' and the next row's col2 value is '01-JAN-2013' - effectively the next day - so we don't consider it a gap and it is not reported.
3. Now if you look, the col3 of the second row is '31-DEC-2020' and the third row's col2 value is '01-MAR-2021', so there is a gap of days between these two dates and we consider it.
col1 start_date_gap end_date_gap
DEF 01-DEC-2003 31-DEC-2004 --Difference between the first row data and the first parameter
DEF 01-JAN-2021 28-FEB-2021 --Difference between the second row's col3 value and third row col2
4. Now if you look, the col3 of the third row is '31-MAR-2021' and the fourth row's col2 value is '01-JUN-2021', so there is a gap of days between these two dates and we consider it.
col1 start_date_gap end_date_gap
DEF 01-DEC-2003 31-DEC-2004 --Difference between the first row data and the first parameter
DEF 01-JAN-2021 28-FEB-2021 --Difference between the second row's col3 value and third row col2
DEF 01-APR-2021 31-MAY-2021 --Difference between the third row's col3 value and fourth row col2
5. Now, I passed parameter2 = to_date('20220303','YYYYMMDD'), which is greater than the last (fourth) row's col3 value of to_date('31-JUL-2021'), so I consider that gap as well:
col1 start_date_gap end_date_gap
DEF 01-DEC-2003 31-DEC-2004 --Difference between the first row data and the first parameter
DEF 01-JAN-2021 28-FEB-2021 --Difference between the second row's col3 value and third row col2
DEF 01-APR-2021 31-MAY-2021 --Difference between the third row's col3 value and fourth row col2
DEF 01-AUG-2021 03-MAR-2022 --Difference between the final row's col3 value and parameter2
Hope it is now clear.
On "SQL Query for dates"
Stew Ashton, March 23, 2012 - 6:28 am UTC
Exactly the same question was asked three years ago:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:529176000346581356#1522726800346557329
SQL> variable col1 varchar2(32)
SQL> variable datestring1 varchar2(32)
SQL> variable datestring2 varchar2(32)
SQL> exec :col1 := 'ABC';
anonymous block completed
SQL> exec :datestring1 := '20021201';
anonymous block completed
SQL> exec :datestring2 := '20060202';
anonymous block completed
SQL> with data as (
select col2, col3 from t where col1 = :col1
union all
select to_date(:datestring1,'YYYYMMDD') - 1, to_date(:datestring1,'YYYYMMDD') - 1
from dual
union all
select to_date(:datestring2, 'YYYYMMDD') + 1, to_date(:datestring2, 'YYYYMMDD') + 1
from dual
)
SELECT * FROM (
SELECT MAX(col3) OVER (ORDER BY col2) + 1 D, LEAD(col2) OVER (ORDER BY col2) - 1 E
FROM data
)
WHERE D < E;
D E
--------- ---------
01-DEC-02 31-DEC-04
SQL> exec :col1 := 'DEF';
anonymous block completed
SQL> exec :datestring1 := '20031201';
anonymous block completed
SQL> exec :datestring2 := '20220303';
anonymous block completed
SQL> with data as (
select col2, col3 from t where col1 = :col1
union all
select to_date(:datestring1,'YYYYMMDD') - 1, to_date(:datestring1,'YYYYMMDD') - 1 from dual
union all
select to_date(:datestring2, 'YYYYMMDD') + 1, to_date(:datestring2, 'YYYYMMDD') + 1 from dual
)
SELECT * FROM (
SELECT MAX(col3) OVER (ORDER BY col2) + 1 D, LEAD(col2) OVER (ORDER BY col2) - 1 E
FROM data
)
WHERE D < E;
D E
--------- ---------
01-DEC-03 31-DEC-04
01-JAN-21 28-FEB-21
01-APR-21 31-MAY-21
01-AUG-21 03-MAR-22
Three year old bug :(
Stew Ashton, March 23, 2012 - 7:37 am UTC
I think there has been a bug in my solution all this time: I should include only those date ranges that fall within, or overlap, the boundaries of start date and finish date.
with data as (
select col2, col3 from t where col1 = :col1
and col3 >= to_date(:datestring1,'YYYYMMDD')
and col2 <= to_date(:datestring2, 'YYYYMMDD')
union all
select to_date(:datestring1,'YYYYMMDD') - 1, to_date(:datestring1,'YYYYMMDD') - 1
from dual
union all
select to_date(:datestring2, 'YYYYMMDD') + 1, to_date(:datestring2, 'YYYYMMDD') + 1
from dual
)
SELECT * FROM (
SELECT MAX(col3) OVER (ORDER BY col2) + 1 D, LEAD(col2) OVER (ORDER BY col2) - 1 E FROM data
)
WHERE D < E
/
March 23, 2012 - 8:39 am UTC
thanks!
Thanks
A reader, March 23, 2012 - 12:18 pm UTC
Thanks Stew and Tom!! You guys are wonderful..
How to be called wonderful
Stew Ashton, March 23, 2012 - 12:26 pm UTC
1) Learn everything from Tom;
2) Copy Antony Boucher's solution and put a bug in it;
3) Be called "wonderful".
...but I'll take it anyway :) Thanks!
March 24, 2012 - 10:16 am UTC
I laughed out loud...
Your solutions are both elegant and efficient - I always appreciate it when you come along with a neat analytic query or something.
datatype mismatch
Anand, July 03, 2012 - 1:50 am UTC
Hi Tom,
Is there any way to find out whether a query executing online or in batch has a datatype mismatch between a column and a bind variable?
We have found a few places in our code where this type of issue exists, and because of it we have bad performance.
For example, we have a column xyz of type CHAR in table t, and a function uses a VARCHAR variable (var_l_xyz):
select * from t where xyz = var_l_xyz;
July 03, 2012 - 8:33 am UTC
Just enable warnings
ops$tkyte%ORA11GR2> drop table t;
Table dropped.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter session set Plsql_Warnings = 'enable:all';
Session altered.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create table t
2 ( x varchar2(20) constraint t_pk primary key,
3 y varchar2(30)
4 );
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t
2 select user_id, username
3 from all_users;
49 rows created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace procedure p authid definer
2 as
3 l_rec t%rowtype;
4 l_key number := 5;
5 begin
6 select * into l_rec from t where x = l_key;
7 for x in (select plan_table_output
8 from TABLE( dbms_xplan.display_cursor() ) )
9 loop
10 dbms_output.put_line( '.'||x.plan_table_output );
11 end loop;
12 end;
13 /
SP2-0804: Procedure created with compilation warnings
ops$tkyte%ORA11GR2> show errors
Errors for PROCEDURE P:
LINE/COL ERROR
-------- -----------------------------------------------------------------
6/42 PLW-07204: conversion away from column type may result in
sub-optimal query plan
<b>shows we are comparing a string to a number...</b>
ops$tkyte%ORA11GR2> exec p
.SQL_ID 18796jgha0hwz, child number 0
.-------------------------------------
.SELECT * FROM T WHERE X = :B1
.
.Plan hash value: 1601196873
.
.--------------------------------------------------------------------------
.| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
.--------------------------------------------------------------------------
.| 0 | SELECT STATEMENT | | | | 3 (100)| |
.|* 1 | TABLE ACCESS FULL| T | 1 | 29 | 3 (0)| 00:00:01 |
.--------------------------------------------------------------------------
.
.Predicate Information (identified by operation id):
.---------------------------------------------------
.
. 1 - filter(TO_NUMBER("X")=:B1)
.
.Note
.-----
. - dynamic sampling used for this statement (level=2)
.
PL/SQL procedure successfully completed.
<b>and that shows the dreadful outcome of that - an implicit conversion... removes the ability to use that index... causes general performance nightmares period.</b>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace procedure p authid definer
2 as
3 l_rec t%rowtype;
4 l_key varchar2(5) := '5';
5 begin
6 select * into l_rec from t where x = l_key;
7 for x in (select plan_table_output
8 from TABLE( dbms_xplan.display_cursor() ) )
9 loop
10 dbms_output.put_line( '.'||x.plan_table_output );
11 end loop;
12 end;
13 /
Procedure created.
ops$tkyte%ORA11GR2> show errors
No errors.
ops$tkyte%ORA11GR2> exec p
.SQL_ID 18796jgha0hwz, child number 1
.-------------------------------------
.SELECT * FROM T WHERE X = :B1
.
.Plan hash value: 1303508680
.
.------------------------------------------------------------------------------------
.| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
.------------------------------------------------------------------------------------
.| 0 | SELECT STATEMENT | | | | 1 (100)| |
.| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 29 | 1 (0)| 00:00:01 |
.|* 2 | INDEX UNIQUE SCAN | T_PK | 1 | | 1 (0)| 00:00:01 |
.------------------------------------------------------------------------------------
.
.Predicate Information (identified by operation id):
.---------------------------------------------------
.
. 2 - access("X"=:B1)
.
PL/SQL procedure successfully completed.
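A related defensive habit, not shown above and offered only as a sketch: anchor the local variable to the column with %TYPE, so the declaration automatically tracks the table definition and the mismatch can never creep back in:

```sql
create or replace procedure p authid definer
as
    l_rec t%rowtype;
    l_key t.x%type := '5';   -- anchored: always the same datatype as column T.X
begin
    select * into l_rec from t where x = l_key;
end;
/
```

If someone later changes T.X to a NUMBER, the variable follows along and the procedure simply recompiles, instead of silently reintroducing an implicit conversion.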
datatype mismatch
Anand, July 03, 2012 - 11:55 pm UTC
Hi Tom,
I have tried to check all the objects present in our dev environment for datatype mismatches, including the example you have shown, but no errors were reported. I think I have not done it correctly. Please suggest.
Below is the script i have used :
SQL> declare
2 var_l_alter VARCHAR2(60);
3 var_l_validate_command VARCHAR2(500);
4 begin
5 var_l_alter := 'alter session set Plsql_Warnings = ' || '''enable:all''';
6 EXECUTE IMMEDIATE (var_l_alter);
7 for rec_data in (select object_name, object_type, object_id
8 from user_objects
9 ) loop
10 IF (rec_data.object_type = 'PACKAGE BODY') THEN
11 var_l_validate_command := 'ALTER ' || 'PACKAGE ' ||
12 rec_data.object_name || ' COMPILE' ||
13 ' BODY';
14 elsif (rec_data.object_type = 'TYPE BODY') THEN
15 var_l_validate_command := 'ALTER ' || 'TYPE ' || rec_data.object_name ||
16 ' COMPILE ' || ' BODY';
17 else
18 var_l_validate_command := 'ALTER ' || rec_data.object_type || ' ' ||
19 rec_data.object_name || ' COMPILE';
20 end if;
21 end loop;
22 end;
23 /
PL/SQL procedure successfully completed
SQL> show errors
No errors for PACKAGE BODY DEV3HOST.THEN
SQL>
July 05, 2012 - 7:00 am UTC
well, it would appear that you are not actually executing any of the commands you've built in the loop for starters....
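For illustration only, a sketch of the missing step: the loop builds each ALTER ... COMPILE command but never runs it, so nothing is ever recompiled with warnings enabled. Adding an EXECUTE IMMEDIATE, then querying USER_ERRORS for warnings, is one way to do it (untested; the object-type filter is an assumption to skip tables and the like):

```sql
declare
    var_l_validate_command varchar2(500);
begin
    execute immediate 'alter session set plsql_warnings = ''enable:all''';
    for rec_data in (select object_name, object_type
                       from user_objects
                      where object_type in ('PROCEDURE','FUNCTION','PACKAGE',
                                            'PACKAGE BODY','TRIGGER','TYPE BODY'))
    loop
        if rec_data.object_type = 'PACKAGE BODY' then
            var_l_validate_command := 'alter package ' || rec_data.object_name || ' compile body';
        elsif rec_data.object_type = 'TYPE BODY' then
            var_l_validate_command := 'alter type ' || rec_data.object_name || ' compile body';
        else
            var_l_validate_command := 'alter ' || rec_data.object_type || ' ' ||
                                      rec_data.object_name || ' compile';
        end if;
        -- the step missing from the original: actually run the DDL
        execute immediate var_l_validate_command;
    end loop;
end;
/

-- then inspect the recorded warnings:
select name, type, line, position, text
  from user_errors
 where attribute = 'WARNING';
```

Note that ALTER ... COMPILE does not raise an exception on compilation failure; errors and warnings land in USER_ERRORS, which is why the final query is needed.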
SQL
Kumar, July 05, 2012 - 4:38 am UTC
Hi Tom,
Need your help....
CREATE TABLE EVENT
(
EVNT_SEQ_ID NUMBER PRIMARY KEY,
EVNT_ID NUMBER,
EVNT_STRT_DT DATE,
EVNT_CMNT VARCHAR2(20),
EVNT_STTS VARCHAR2(25)
);
INSERT INTO EVENT VALUES (1, 101, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (2, 101, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (3, 101, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (4, 101, SYSDATE-45, 'New Upload', 'D');
INSERT INTO EVENT VALUES (5, 101, SYSDATE-40, 'XXXX', 'A');
INSERT INTO EVENT VALUES (6, 101, SYSDATE-35, 'New Upload', 'D');
INSERT INTO EVENT VALUES (7, 102, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (8, 102, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (9, 102, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (10, 102, SYSDATE-45, 'New Upload', 'D');
INSERT INTO EVENT VALUES (11, 102, SYSDATE-40, 'XXXX', 'A');
INSERT INTO EVENT VALUES (12, 102, SYSDATE-35, NULL, 'D');
INSERT INTO EVENT VALUES (13, 103, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (14, 103, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (15, 103, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (16, 103, SYSDATE-45, 'New Upload', 'D');
INSERT INTO EVENT VALUES (17, 103, SYSDATE-40, 'XXXX', 'A');
COMMIT;
1) I need to find the events (evnt_id) whose max(evnt_seq_id) row has evnt_stts = 'D'. We are not filtering on the comment column here. This can be done by the following query...<b>
SELECT * FROM EVENT
WHERE EVNT_SEQ_ID IN
(SELECT MAX(EVNT_SEQ_ID) FROM EVENT GROUP BY EVNT_ID)
AND (EVNT_STTS = 'D')
ORDER BY EVNT_ID asc;</b>
I need your help for the second requirement which is as follows :
The latest record (based on evnt_strt_dt), where the id should be less than the id from the above query, with stts = 'D' and comment = 'New Upload' (we need to consider the comment column here). E.g., I should get the result:
EVNT_SEQ_ID EVNT_ID EVNT_STRT_DT EVNT_CMNT EVNT_STTS
6 101 31-MAY-12 New Upload D
10 102 21-MAY-12 New Upload D
Event id 101 has two records satisfying the stts and comment criteria, but we need to take the latest record ordered by date.
Event 103 won't come up in the result set because the first query itself doesn't return a record for it; that means we need to incorporate the first query into this requirement as well.
Please help me in doing this....
Thanks a lot..
July 05, 2012 - 8:58 am UTC
How can evnt_seq_id = 6 be in both answers? 6 is not less than 6
SQL
Kumar, July 05, 2012 - 4:45 am UTC
By mistake, the output I want was pasted wrong above.
I want the output to be:
EVNT_SEQ_ID EVNT_ID EVNT_STRT_DT EVNT_CMNT EVNT_STTS
4 101 31-MAY-12 New Upload D
10 102 21-MAY-12 New Upload D
July 05, 2012 - 9:04 am UTC
I am making the reasonable assumption that evnt_seq_id is unique within an evnt_id
ops$tkyte%ORA11GR2> select *
2 from (
3 select e.*, row_number() over (partition by evnt_id order by evnt_seq_id DESC) rn
4 from event e
5 )
6 where evnt_stts = 'D'
7 and rn = 1
8 order by evnt_id
9 /
EVNT_SEQ_ID EVNT_ID EVNT_STRT EVNT_CMNT EVNT_ RN
----------- ---------- --------- -------------------- ----- ---
6 101 31-MAY-12 New Upload D 1
12 102 31-MAY-12 D 1
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select *
2 from (
3 select x.*,
4 min(rn) over (partition by evnt_id) min_rn,
5 dense_rank() over (partition by evnt_id order by evnt_seq_id) dr
6 from (
7 select e.*, row_number() over (partition by evnt_id order by evnt_seq_id DESC) rn
8 from event e
9 ) x
10 where evnt_stts = 'D'
11 )
12 where min_rn = 1
13 and dr = 2
14 order by evnt_id
15 /
EVNT_SEQ_ID EVNT_ID EVNT_STRT EVNT_CMNT EVNT_ RN MIN_RN DR
----------- ---------- --------- -------------------- ----- --- ------ ---
4 101 21-MAY-12 New Upload D 3 1 2
10 102 21-MAY-12 New Upload D 3 1 2
SQL
kumar, July 06, 2012 - 2:14 am UTC
Thanks for the quick response.....
In continuation with the query... actually, I have to consider the comment column too for the requirement, as it should be 'New Upload'.
I tried to modify the query to suit my requirement but was not able to figure it out.
I am getting wrong data for evnt_id 104... it should be the row with seq_id 18.
ops$tkyte%ORA11GR2> select *
2 from (
3 select x.*,
4 min(rn) over (partition by evnt_id) min_rn,
5 dense_rank() over (partition by evnt_id order by evnt_seq_id) dr
6 from (
7 select e.*, row_number() over (partition by evnt_id order by evnt_seq_id DESC) rn
8 from event e
9 ) x
10 where evnt_stts = 'D'
AND evnt_cmnt = 'New Upload'
11 )
12 where min_rn = 1
13 and dr = 2
14 order by evnt_id
15 /
Just including some more sample data :
CREATE TABLE EVENT
(
EVNT_SEQ_ID NUMBER PRIMARY KEY,
EVNT_ID NUMBER,
EVNT_STRT_DT DATE,
EVNT_CMNT VARCHAR2(20),
EVNT_STTS VARCHAR2(25)
);
INSERT INTO EVENT VALUES (1, 101, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (2, 101, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (3, 101, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (4, 101, SYSDATE-45, 'New Upload', 'D');
INSERT INTO EVENT VALUES (5, 101, SYSDATE-40, 'XXXX', 'A');
INSERT INTO EVENT VALUES (6, 101, SYSDATE-35, 'New Upload', 'D');
INSERT INTO EVENT VALUES (7, 102, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (8, 102, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (9, 102, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (10, 102, SYSDATE-45, 'New Upload', 'D');
INSERT INTO EVENT VALUES (11, 102, SYSDATE-40, 'XXXX', 'A');
INSERT INTO EVENT VALUES (12, 102, SYSDATE-35, NULL, 'D');
INSERT INTO EVENT VALUES (13, 103, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (14, 103, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (15, 103, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (16, 103, SYSDATE-45, 'New Upload', 'D');
INSERT INTO EVENT VALUES (17, 103, SYSDATE-40, 'XXXX', 'A');
INSERT INTO EVENT VALUES (18, 104, SYSDATE-60, 'New Upload', 'D');
INSERT INTO EVENT VALUES (19, 104, SYSDATE-55, 'XXXX', 'C');
INSERT INTO EVENT VALUES (20, 104, SYSDATE-50, 'YYYY', 'A');
INSERT INTO EVENT VALUES (21, 104, SYSDATE-45, 'XXXX', 'D');
INSERT INTO EVENT VALUES (22, 104, SYSDATE-40, 'XXXX', 'A');
INSERT INTO EVENT VALUES (23, 104, SYSDATE-35, 'New Upload', 'D');
INSERT INTO EVENT VALUES (24, 105, SYSDATE-60, 'New Upload', 'D');
COMMIT;
The latest record (based on evnt_strt_dt), where the id should be less than the id from the first query you suggested, with stts = 'D' and comment = 'New Upload' (we need to consider the comment column here). E.g.:
EVNT_SEQ_ID EVNT_ID EVNT_STRT EVNT_CMNT EVNT_
----------- ---------- --------- -------------------- ----- --- ------ ---
4 101 21-MAY-12 New Upload D
10 102 21-MAY-12 New Upload D
18 104 07-MAY-12 New Upload D
Thanks a lot....
SQL
kumar, July 06, 2012 - 5:06 am UTC
I am getting the desired result by playing with your query, and wanted to confirm with you whether my modification would hamper performance:
SELECT *
FROM
(SELECT Y.*, dense_rank() over (partition BY evnt_id order by evnt_seq_id DESC) dr1
FROM (SELECT x.*,
MIN(rn) over (partition BY evnt_id) min_rn,
dense_rank() over (partition BY evnt_id order by evnt_seq_id DESC) dr
FROM
(SELECT e.*,
row_number() over (partition BY evnt_id order by evnt_seq_id DESC) rn
FROM event e
) x
WHERE evnt_stts = 'D'
) Y
WHERE (evnt_cmnt = 'New Upload' AND dr > 1) OR dr = 1
)WHERE min_rn = 1
AND dr1 = 2
ORDER BY evnt_id;
And one more clarification: I can find the events whose maximum seq_id (grouped by evnt_id) has status 'A', as per your query. But again, I need to find the latest record (based on evnt_strt_dt) with stts = 'D' and comment = 'New Upload', where the id is less than the id from the max-seq_id/status-'A' query.
Basically, I want the status to be a variable in the inner part of the query.
Thanks in advance
July 11, 2012 - 8:19 am UTC
benchmark it
You have to sort of do what you need to do in order to get the stuff you need.
even if it ran slower, so what? You need to use a query that gets you the right answer, don't you.
SQL
Kumar, July 10, 2012 - 1:35 am UTC
Hi Tom,
Could you please help me in writing query for the above
requirement....
Thanks in advance
July 11, 2012 - 4:55 pm UTC
you already did I thought? you said you did???
parag, July 11, 2012 - 7:53 am UTC
Hi Tom,
I have a SERVER_UID_RPT table with the following details.
The table consists of entry and exit records for employees.
I want to query those users who have only an 'Entry granted' record with no corresponding 'Exit granted' record in the table, and vice versa: users with an 'Exit granted' record but no 'Entry granted' record for the same user.
How do I find such entries in the table?
Also, how do I find the total time for a user in a day or in a month?
Please help me in writing a query.
CREATE TABLE "ITIP"."SERVER_UID_RPT"
( "CARD_HOLDER_NAME" VARCHAR2(100),
"EVENT" VARCHAR2(100),
"VDATE" VARCHAR2(19),
"NETWORK" VARCHAR2(100),
"DOOR" VARCHAR2(100),
"SITE" VARCHAR2(100),
"CARD_NO" NUMBER(10,0),
"PRIORITY" VARCHAR2(100),
"ISALARM" VARCHAR2(5) NOT NULL ENABLE,
"CATEGORY" VARCHAR2(100),
"LOG_DATE" VARCHAR2(19),
"EMPLOYEE_CODE" VARCHAR2(100),
"FIRST_NAME" VARCHAR2(100),
"LAST_NAME" VARCHAR2(100),
"ADDRESS1" VARCHAR2(100),
"ADDRESS2" VARCHAR2(100),
"CITY" VARCHAR2(100),
"STATE" VARCHAR2(100),
"COUNTRY" VARCHAR2(100),
"PHONE" VARCHAR2(100),
"FAX" VARCHAR2(100),
"EMAIL" VARCHAR2(100),
"DOB" VARCHAR2(19),
"DOJ" VARCHAR2(100),
"GENDER" VARCHAR2(100),
"HEIGHT" VARCHAR2(100),
"BLOODGROUP" VARCHAR2(100),
"DRIVINGLICENSE" VARCHAR2(100),
"EMERGENCY_ADDRESS1" VARCHAR2(100),
"EMERGENCY_ADDRESS2" VARCHAR2(100),
"EMERGENCY_CITY" VARCHAR2(100),
"EMERGENCY_STATE" VARCHAR2(100),
"EMERGENCY_COUNTRY" VARCHAR2(100),
"EMERGENCY_CONTACTPERSON" VARCHAR2(100),
"EMERGENCY_PHONE" VARCHAR2(100),
"DEPARTMENT_UDF" VARCHAR2(100),
"DESIGNATION_UDF" VARCHAR2(100),
"COMPANY_UDF" VARCHAR2(100),
"EMPLOYEEUID_UDF" VARCHAR2(100)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "ITIPINVENTORY" ;
ALTER TABLE "ITIP"."SERVER_UID_RPT" MODIFY ("ISALARM" NOT NULL ENABLE);
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Entry granted,06/01/2012 01:26:35,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 01:21:26,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Exit granted,06/01/2012 01:26:45,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 01:21:36,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(CHAUHAN AMIT,Entry granted,06/01/2012 05:58:43,NET 23,5TH FLOOR SERVER ROOM,Root Site,511,,0,Verinet E2/4 Cards,2012-06-01 05:53:34,S8(BMS),SANJAY,YADAV (FACILITY),,,,,,,,,2010-05-07 00:00:00,2010-05-08 00:00:00,Male,AA,O +VE,,,,,,,,,FACILITY,BMS,BNPP ISPL)
insert into table SERVER_UID_RPT values(CHAUHAN AMIT,Exit granted,06/01/2012 05:58:56,NET 23,5TH FLOOR SERVER ROOM,Root Site,511,,0,Verinet E2/4 Cards,2012-06-01 05:53:47,S8(BMS),SANJAY,YADAV (FACILITY),,,,,,,,,2010-05-07 00:00:00,2010-05-08 00:00:00,Male,AA,O +VE,,,,,,,,,FACILITY,BMS,BNPP ISPL)
insert into table SERVER_UID_RPT values(MAYEKAR PRAMOD,Entry granted,06/01/2012 08:55:07,NET 23,5TH FLOOR SERVER ROOM,Root Site,1068,,0,Verinet E2/4 Cards,2012-06-01 08:49:58,CO20088093,PRAMOD,MAYEKAR,,,,,,,,,2009-10-06 00:00:00,2010-10-06 00:00:00,Male,,,,,,,,,,,SECURITY,SECURITY SUPERVISOR,BNPP ISPL)
insert into table SERVER_UID_RPT values(MAYEKAR PRAMOD,Exit granted,06/01/2012 08:55:59,NET 23,5TH FLOOR SERVER ROOM,Root Site,1068,,0,Verinet E2/4 Cards,2012-06-01 08:50:52,CO20088093,PRAMOD,MAYEKAR,,,,,,,,,2009-10-06 00:00:00,2010-10-06 00:00:00,Male,,,,,,,,,,,SECURITY,SECURITY SUPERVISOR,BNPP ISPL)
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Entry granted,06/01/2012 09:07:09,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 09:02:02,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Exit granted,06/01/2012 09:07:26,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 09:02:19,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Entry granted,06/01/2012 11:34:47,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 11:29:39,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Exit granted,06/01/2012 11:35:03,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 11:29:57,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(MAYEKAR PRAMOD,Entry granted,06/01/2012 11:35:35,NET 23,5TH FLOOR SERVER ROOM,Root Site,1068,,0,Verinet E2/4 Cards,2012-06-01 11:30:28,CO20088093,PRAMOD,MAYEKAR,,,,,,,,,2009-10-06 00:00:00,2010-10-06 00:00:00,Male,,,,,,,,,,,SECURITY,SECURITY SUPERVISOR,BNPP ISPL)
insert into table SERVER_UID_RPT values(MAYEKAR PRAMOD,Exit granted,06/01/2012 11:36:12,NET 23,5TH FLOOR SERVER ROOM,Root Site,1068,,0,Verinet E2/4 Cards,2012-06-01 11:31:06,CO20088093,PRAMOD,MAYEKAR,,,,,,,,,2009-10-06 00:00:00,2010-10-06 00:00:00,Male,,,,,,,,,,,SECURITY,SECURITY SUPERVISOR,BNPP ISPL)
insert into table SERVER_UID_RPT values(M .C BIDESH,Entry granted,06/01/2012 11:49:43,NET 23,5TH FLOOR SERVER ROOM,Root Site,1020,,0,Verinet E2/4 Cards,2012-06-01 11:44:34,CO20088045,BIDESH,M .C (FACILITY ),,,,,,,,,2010-11-02 00:00:00,2011-11-02 00:00:00,Male,,B +VE,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(M .C BIDESH,Exit granted,06/01/2012 11:50:10,NET 23,5TH FLOOR SERVER ROOM,Root Site,1020,,0,Verinet E2/4 Cards,2012-06-01 11:45:02,CO20088045,BIDESH,M .C (FACILITY ),,,,,,,,,2010-11-02 00:00:00,2011-11-02 00:00:00,Male,,B +VE,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
insert into table SERVER_UID_RPT values(SONAWANE HARSHAD,Entry granted,06/01/2012 12:45:12,NET 23,5TH FLOOR SERVER ROOM,Root Site,675,,0,Verinet E2/4 Cards,2012-06-01 12:40:05,20090900(ELE),HARSHAD,SONAWANE (FACILITY ),,,,,,,,,2009-10-11 00:00:00,2010-10-11 00:00:00,Male,,,,,,,,,,,FACILITY,FACILITY,BNPP ISPL)
July 12, 2012 - 5:11 pm UTC
funny, I bet your example could have used a table with - i don't know - like 4 or 5 columns at most.
and I bet you could have made inserts that actually - well, i don't know - INSERT???
how about it - make a smaller table, column-wise, just enough to demonstrate your issue. And supply inserts (remember, I use a different nls date format than you - you had BETTER to_date those strings and use an explicit format!!!)
And also, explain how you actually find entry/exit record pairs. For example, what if someone was
a) granted entry and recorded
b) left, but the exit is missing
c) came back, but the entry is missing
d) granted exit and recorded
how would you deal with that?
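For what it's worth, once a cut-down test case exists, one common way to flag the broken pairs described above is to compare each row to its neighbor for the same card with LAG/LEAD. A sketch only, assuming a simplified table (card_no, event, vdate stored as a real DATE) and a policy decision that an entry must be immediately followed by an exit:

```sql
select card_no, event, vdate,
       case
         -- an entry whose next event for this card is not an exit
         when event = 'Entry granted'
          and nvl(lead(event) over (partition by card_no order by vdate), 'x')
              <> 'Exit granted'
         then 'missing exit'
         -- an exit whose previous event for this card is not an entry
         when event = 'Exit granted'
          and nvl(lag(event) over (partition by card_no order by vdate), 'x')
              <> 'Entry granted'
         then 'missing entry'
       end as anomaly
  from server_uid_rpt
 order by card_no, vdate;
```

This deliberately sidesteps the a/b/c/d ambiguity above - two consecutive entries (or exits) are simply flagged for a human to resolve, which is usually the only safe answer when both a missing exit and a missing entry can occur back to back.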
Difference between time in HH:MM
parag, July 17, 2012 - 6:42 am UTC
Hi,
I have the following table. I want the difference between the times in HH:MM:SS, and another output as HH:MM with seconds rounded.
CREATE TABLE "ITIP"."TOTO"
( "N1" VARCHAR2(100),
"DT1" DATE,
"DT2" DATE
)
insert into TOTO (n1, dt1, dt2)
values ('TT', to_date('25-06-2012 10:46:56', 'dd-mm-yyyy hh24:mi:ss'), to_date('25-06-2012 10:51:53', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('TT', to_date('25-06-2012 10:51:53', 'dd-mm-yyyy hh24:mi:ss'), to_date('25-06-2012 10:51:57', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('TT', to_date('25-06-2012 10:32:06', 'dd-mm-yyyy hh24:mi:ss'), to_date('25-06-2012 10:43:09', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('TT', to_date('25-06-2012 10:43:09', 'dd-mm-yyyy hh24:mi:ss'), to_date('25-06-2012 10:53:09', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('UU', to_date('25-06-2012 10:46:56', 'dd-mm-yyyy hh24:mi:ss'), to_date('25-06-2012 21:44:39', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('UU', to_date('27-06-2012 10:51:53', 'dd-mm-yyyy hh24:mi:ss'), to_date('27-06-2012 11:46:10', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('UU', to_date('28-06-2012 10:32:06', 'dd-mm-yyyy hh24:mi:ss'), to_date('28-06-2012 18:45:32', 'dd-mm-yyyy hh24:mi:ss'));
insert into TOTO (n1, dt1, dt2)
values ('UU', to_date('29-06-2012 14:22:13', 'dd-mm-yyyy hh24:mi:ss'), to_date('29-06-2012 17:24:20', 'dd-mm-yyyy hh24:mi:ss'));
commit;
What is the output when we subtract two dates? Also, I want the difference between the times in HH:MM:SS, and another output as HH:MM with seconds rounded.
Please suggest a query for the same.
Thanks
July 17, 2012 - 9:09 am UTC
"I want the difference between the time in HH:MM:SS & another output as HH:MM with seconds rounded."

careful with that format - MM = month, MI = minute.
If I had a dollar for every time I've seen or made that mistake :)

"What is the output when we subtract two dates?"

it is the number of days between the two - 1.5 would be one and a half days apart...
ops$tkyte%ORA11GR2> select n1, dt1, dt2, date_diff,
2 to_char(extract( hour from diff), 'fm00') || ':' ||
3 to_char(extract( minute from diff), 'fm00') || ':' ||
4 to_char(extract( second from diff), 'fm00') diff_fmt1,
5 to_char(extract( hour from diff), 'fm00') || ':' ||
6 to_char(extract( minute from diff)+case when extract(second from diff)>= 30 then 1 else 0 end, 'fm00') diff_fmt2
7 from (
8 select t.*, dt2-dt1 date_diff,
9 cast( cast(dt2 as timestamp)-cast(dt1 as timestamp) as interval day(0) to second(0) ) diff
10 from t
11 )
12 /
N1 DT1 DT2 DATE_DIFF DIFF_FMT1 DIFF_FM
---------- -------------------- -------------------- ---------- ----------- -------
TT 25-jun-2012 10:46:56 25-jun-2012 10:51:53 .0034375 00:04:57 00:05
TT 25-jun-2012 10:51:53 25-jun-2012 10:51:57 .000046296 00:00:04 00:00
TT 25-jun-2012 10:32:06 25-jun-2012 10:43:09 .007673611 00:11:03 00:11
TT 25-jun-2012 10:43:09 25-jun-2012 10:53:09 .006944444 00:10:00 00:10
UU 25-jun-2012 10:46:56 25-jun-2012 21:44:39 .456747685 10:57:43 10:58
UU 27-jun-2012 10:51:53 27-jun-2012 11:46:10 .037696759 00:54:17 00:54
UU 28-jun-2012 10:32:06 28-jun-2012 18:45:32 .342662037 08:13:26 08:13
UU 29-jun-2012 14:22:13 29-jun-2012 17:24:20 .126469907 03:02:07 03:02
8 rows selected.
Dependent
Anand, July 17, 2012 - 9:34 am UTC
Hi Tom,
I have a table, let's say test, with 2 columns: process and reqd_process
process reqd_process
B A
C B
D C
E D
Now I want to find everything related to 'A' - that is, I should get all the rows which are directly or indirectly dependent on 'A'. Can you please help me?
Currently we use lots of GTT tables to find this, and it is taking a lot of time.
July 17, 2012 - 4:24 pm UTC
no create
no inserts
no look
also, no real explanation of the logic here
it looks like your answer would be "the entire table" which is sort of a boring, bad example. Make sure there would be data in there that doesn't apply.
but explain yourself, what does directly or indirectly mean to you.
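For reference, the usual tool for "directly or indirectly dependent" is a hierarchical query. A sketch only, with hypothetical create/insert scripts matching the small table shown in the question:

```sql
-- hypothetical setup matching the description above
create table test ( process varchar2(1), reqd_process varchar2(1) );
insert into test values ('B','A');
insert into test values ('C','B');
insert into test values ('D','C');
insert into test values ('E','D');

-- everything that depends on 'A', directly or transitively:
-- start at rows requiring 'A', then walk to rows requiring those processes
select process, level as depth
  from test
 start with reqd_process = 'A'
connect by prior process = reqd_process;
```

Against this sample data that walks B, C, D, E in a single pass - no intermediate GTTs needed. Whether it beats the GTT approach on real data is exactly the kind of thing Tom would say to benchmark.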
Need Help in writing Query.....
Kumar, July 20, 2012 - 4:40 am UTC
Hi Tom,
Need your help : -
create table t (evnt int, status varchar2(5), approver NUMBER);
insert into t values ( 1, 'OP', NULL);
insert into t values ( 1, 'PL', NULL);
insert into t values ( 1, 'CL', 501);
insert into t values ( 1, 'IE', NULL);
insert into t values ( 2, 'OP', 502);
insert into t values ( 2, 'PL', NULL);
insert into t values ( 2, 'CL', NULL);
insert into t values ( 2, 'IE', NULL);
insert into t values ( 3, 'OP', NULL);
insert into t values ( 3, 'PL', NULL);
insert into t values ( 3, 'CL', NULL);
insert into t values ( 3, 'IE', NULL);
I need to fetch all the records with two extra columns IS_APPRVD and APRVD_BY.
Need output as :
EVNT STATUS IS_APPRVD APRVD_BY
1 OP APRVD 501
1 PL APRVD 501
1 CL APRVD 501
1 IE APRVD 501
2 OP APRVD 502
2 PL APRVD 502
2 CL APRVD 502
2 IE APRVD 502
3 OP NOT_APRVD NULL
3 PL NOT_APRVD NULL
3 CL NOT_APRVD NULL
3 IE NOT_APRVD NULL
The logic for the value of the IS_APPRVD column is as follows:
If ANY record for a particular evnt (within the group for that evnt) has a non-null value in the approver column, then it is APRVD.
If all the records for a particular evnt have a NULL approver, then NOT_APRVD.
Can we do this in a query, or do we have to write a procedure that fetches all the records of the table and loops through them to generate the required result set?
Thanks in Advance.......
July 30, 2012 - 7:45 am UTC
ops$tkyte%ORA11GR2> select evnt, status, approver,
2 nvl2( max(approver) over (partition by evnt), 'aprvd', 'not_aprvd' ) is_apprvd,
3 case when count(distinct approver) over (partition by evnt) > 1
4 then '*error*'
5 else to_char(max(approver) over (partition by evnt))
6 end aprvd_by
7 from t
8 /
EVNT STATU APPROVER IS_APPRVD APRVD_BY
---------- ----- ---------- --------- ----------------------------------------
1 CL 501 aprvd 501
1 OP aprvd 501
1 PL aprvd 501
1 IE aprvd 501
2 OP 502 aprvd 502
2 PL aprvd 502
2 IE aprvd 502
2 CL aprvd 502
3 OP not_aprvd
3 CL not_aprvd
3 PL not_aprvd
3 IE not_aprvd
12 rows selected.
you didn't say what happens when there is more than one approver, so I did what I wanted to for that case...
Need Help in writing Query.....
Rajeshwaran, Jeyabal, July 24, 2012 - 4:54 am UTC
insert into t values ( 4, 'OP', 510);
insert into t values ( 4, 'PL', 511);
insert into t values ( 4, 'CL', 512);
insert into t values ( 4, 'IE', 513);
Given this data as input, explain what should be displayed in the APRVD_BY column.
rajesh@ORA10GR2> select evnt,status,
2 decode( count(approver)
3 over(partition by evnt),0,
4 'NOT_APRVD','APRVD') as is_apprvd,
5 last_value(approver ignore nulls)
6 over(partition by evnt
7 order by status
8 rows between unbounded preceding
9 and unbounded following) as val
10 from t;
EVNT STATU IS_APPRVD VAL
---------- ----- --------- ----------
1 CL APRVD 501
1 IE APRVD 501
1 OP APRVD 501
1 PL APRVD 501
2 CL APRVD 502
2 IE APRVD 502
2 OP APRVD 502
2 PL APRVD 502
3 CL NOT_APRVD
3 IE NOT_APRVD
3 OP NOT_APRVD
3 PL NOT_APRVD
12 rows selected.
Elapsed: 00:00:00.03
rajesh@ORA10GR2>
July 30, 2012 - 9:10 am UTC
ops$tkyte%ORA11GR2> select evnt, status, approver,
2 nvl2( max(approver) over (partition by evnt), 'aprvd', 'not_aprvd' ) is_apprvd,
3 case when count(distinct approver) over (partition by evnt) > 1
4 then '*error*'
5 else to_char(max(approver) over (partition by evnt))
6 end aprvd_by
7 from t
8 /
EVNT STATU APPROVER IS_APPRVD APRVD_BY
---------- ----- ---------- --------- ----------------------------------------
1 CL 501 aprvd 501
1 PL aprvd 501
1 OP aprvd 501
1 IE aprvd 501
2 OP 502 aprvd 502
2 CL aprvd 502
2 PL aprvd 502
2 IE aprvd 502
3 OP not_aprvd
3 PL not_aprvd
3 CL not_aprvd
3 IE not_aprvd
4 OP 510 aprvd *error*
4 PL 511 aprvd *error*
4 CL 512 aprvd *error*
4 IE 513 aprvd *error*
16 rows selected.
ops$tkyte%ORA11GR2> select evnt,status,
2 decode( count(approver) over(partition by evnt),0, 'NOT_APRVD','APRVD') as is_apprvd,
3 last_value(approver ignore nulls) over(partition by evnt order by status rows between unbounded preceding and unbounded following) as val
4 from t;
EVNT STATU IS_APPRVD VAL
---------- ----- --------- ----------
1 CL APRVD 501
1 IE APRVD 501
1 OP APRVD 501
1 PL APRVD 501
2 CL APRVD 502
2 IE APRVD 502
2 OP APRVD 502
2 PL APRVD 502
3 CL NOT_APRVD
3 IE NOT_APRVD
3 OP NOT_APRVD
3 PL NOT_APRVD
4 CL APRVD 511
4 IE APRVD 511
4 OP APRVD 511
4 PL APRVD 511
16 rows selected.
SQL Query for copying objects in schema
Viknesh S, July 24, 2012 - 11:19 pm UTC
Hi Tom,
Greetings.
I want to copy all the objects in one schema to another schema, and I want to do it using only SQL queries, not PL/SQL, because I have to do this on a neovie database! If this can be done in a different way, please let me know.
Regards
Viknesh S
July 30, 2012 - 9:25 am UTC
no clue what neovie is or does.
join and then filter
kumar, August 03, 2012 - 2:00 pm UTC
Hi Tom,
Could you please let me know how we can have the optimizer perform the join first and then apply the filter conditions? Let's say I have 3 tables (A, B, C), each with 10M rows. I am using a WITH clause to filter the data between A and B, and then joining the result to C in the main SELECT. Now, C has a filter which is not selective, so the optimizer is doing a full table scan on C and then joining it to the result set of A and B. Instead, I would like it to join C to the result set of A and B, which produces around 200K rows, and have the filter applied to that final result set.
Please let me know if it's ever possible. If not what else could be done here.
dbversion: 11gr2
Thank you very much for providing valuable information.
August 17, 2012 - 9:54 am UTC
why???? that would take a lot longer!!!!!
what logic are you applying that makes you think you want to join 200k rows to 10,000,000 and then filter the resulting set - as opposed to joining 200k rows to (something less than 10,000,000)???
please explain?
Execution Steps of SQL Query
Hemanshu Sheth, August 28, 2012 - 9:15 am UTC
Hi Tom,
Please consider the following query. You will not have to create the table; from the output you will be able to understand my problem, which may be very simple for you to answer. I have exported the outputs in delimited text format, which you may have to format with TABs in Notepad or Excel.
select mri.N_PATIENTMR_ID Mr_Id, mri.V_MRNO, lower(mri.V_FIRSTNAME) First_Name, lower(mri.V_MIDDLENAME) Middle_Name, lower(mri.V_LASTNAME) Last_Name,
mri.V_PHONERES, mri.V_PHONEMOBILE
from T_M_RG_MRREGISTRTNHD mri,
(select lower(mr.V_FIRSTNAME) Fst, lower(mr.V_LASTNAME) Lst, count(mr.N_PATIENTMR_ID) from T_M_RG_MRREGISTRTNHD mr group by lower(V_FIRSTNAME),lower(V_LASTNAME) having count(mr.N_PATIENTMR_ID)>1) mrm
where lower(mri.V_FIRSTNAME)||lower(mri.V_LASTNAME) = mrm.fst||mrm.lst and mrm.fst||mrm.lst = 'gautamguha'
Gives following output:
MR_ID V_MRNO FIRST_NAME MIDDLE_NAME LAST_NAME V_PHONERES V_PHONEMOBILE
--------------------------------------------------------------------------------------
2502194 MR/07/4836 gautam prodyot guha 9833361897 9323712684
7340591 MR/08/4065 gautam guha 32589393
4345424 MR/07/18731 gautam kumar guha 32589393
23957998 MR/10/29725 gautam guha
24775067 MR/10/36154 gautam guha 09433735135
--------------------------------------------------------------------------------------
Explanation:
T_M_RG_MRREGISTRTNHD is the Client master with MR_Id as the unique numeric id & MR_No as the varchar display id.
The purpose is to find duplicate registrations of the same person. Unfortunately, the checking mechanism at the time of entry was not strong, and hence the system allowed duplicates. The above query shows one example: client Mr. Gautam Guha, who has been registered 5 times.
Now, please consider the following query to identify duplicates based on the residence and mobile phone numbers. I am checking the phone number in both fields to catch one form of duplication.
select mro.N_PATIENTMR_ID Mr_Id, mro.V_MRNO, lower(mro.V_FIRSTNAME) First_Name, lower(mro.V_MIDDLENAME) Middle_Name, lower(mro.V_LASTNAME) Last_Name,
mro.V_PHONERES, mro.V_PHONEMOBILE
from T_M_RG_MRREGISTRTNHD mro
where mro.N_PATIENTMR_ID in
(select mri.N_PATIENTMR_ID from T_M_RG_MRREGISTRTNHD mri,
(select lower(mr.V_FIRSTNAME) Fst, lower(mr.V_LASTNAME) Lst, count(mr.N_PATIENTMR_ID) from T_M_RG_MRREGISTRTNHD mr group by lower(V_FIRSTNAME),lower(V_LASTNAME) having count(mr.N_PATIENTMR_ID)>1) mrm
where lower(mri.V_FIRSTNAME)||lower(mri.V_LASTNAME) = mrm.fst||mrm.lst and mrm.fst||mrm.lst = 'gautamguha'
and lower(mri.V_FIRSTNAME)=lower(mro.V_FIRSTNAME) --and mri.N_PATIENTMR_ID <> mro.N_PATIENTMR_ID
and (mri.V_PHONERES = mro.V_PHONERES or mri.V_PHONERES = mro.V_PHONEMOBILE))
which gives the following output:
--------------------------------------------------------------------------------------
MR_ID V_MRNO FIRST_NAME MIDDLE_NAME LAST_NAME V_PHONERES V_PHONEMOBILE
--------------------------------------------------------------------------------------
2502194 MR/07/4836 gautam prodyot guha 9833361897 9323712684
7340591 MR/08/4065 gautam guha 32589393
--------------------------------------------------------------------------------------
My questions are:
1. Why is the entry with id=4345424 not returned? Its v_phonemobile number (32589393) is the same as the v_phoneres of id 7340591, which I have checked with the OR condition in the above query.
2. For such sub-queries, what are the steps of execution? Does it run the inner query first - scanning all records in the inner query for each record of the outer query - or vice versa?
I have used Explain Plan in TOAD 9.5, but it does not give the statement-wise order of execution.
Kindly revert at the earliest.
Regards
Hemanshu
August 29, 2012 - 1:51 pm UTC
i cannot read any of this, why didn't you use the CODE button?
Execution Steps of SQL Query
Hemanshu, August 30, 2012 - 12:55 am UTC
Sir,
I have included the scripts and text outputs using the Code button.
The following query filters out those records where the same name appears multiple times.
select mri.N_PATIENTMR_ID Mr_Id, mri.V_MRNO, lower(mri.V_FIRSTNAME) First_Name, lower(mri.V_MIDDLENAME) Middle_Name, lower(mri.V_LASTNAME) Last_Name,
mri.V_PHONERES, mri.V_PHONEMOBILE
from T_M_RG_MRREGISTRTNHD mri,
(select lower(mr.V_FIRSTNAME) Fst, lower(mr.V_LASTNAME) Lst, count(mr.N_PATIENTMR_ID) from T_M_RG_MRREGISTRTNHD mr group by lower(V_FIRSTNAME),lower(V_LASTNAME) having count(mr.N_PATIENTMR_ID)>1) mrm
where lower(mri.V_FIRSTNAME)||lower(mri.V_LASTNAME) = mrm.fst||mrm.lst and mrm.fst||mrm.lst = 'gautamguha'
Below is the output of above query.
--------------------------------------------------------------------------------------
MR_ID V_MRNO FIRST_NAME MIDDLE_NAME LAST_NAME V_PHONERES V_PHONEMOBILE
--------------------------------------------------------------------------------------
2502194 MR/07/4836 gautam prodyot guha 9833361897 9323712684
7340591 MR/08/4065 gautam guha 32589393
4345424 MR/07/18731 gautam kumar guha 32589393
23957998 MR/10/29725 gautam guha
24775067 MR/10/36154 gautam guha 09433735135
--------------------------------------------------------------------------------------
I am using the above query block as an IN subquery against the original table, so that the comparison between the outer and inner tables fetches the duplicate records. Please see the below query.
select mro.N_PATIENTMR_ID Mr_Id, mro.V_MRNO, lower(mro.V_FIRSTNAME) First_Name, lower(mro.V_MIDDLENAME) Middle_Name, lower(mro.V_LASTNAME) Last_Name,
mro.V_PHONERES, mro.V_PHONEMOBILE
from T_M_RG_MRREGISTRTNHD mro
where mro.N_PATIENTMR_ID in
(select mri.N_PATIENTMR_ID from T_M_RG_MRREGISTRTNHD mri,
(select lower(mr.V_FIRSTNAME) Fst, lower(mr.V_LASTNAME) Lst, count(mr.N_PATIENTMR_ID) from T_M_RG_MRREGISTRTNHD mr group by lower(V_FIRSTNAME),lower(V_LASTNAME) having count(mr.N_PATIENTMR_ID)>1) mrm
where lower(mri.V_FIRSTNAME)||lower(mri.V_LASTNAME) = mrm.fst||mrm.lst and mrm.fst||mrm.lst = 'gautamguha'
and lower(mri.V_FIRSTNAME)=lower(mro.V_FIRSTNAME) --and mri.N_PATIENTMR_ID <> mro.N_PATIENTMR_ID
and (mri.V_PHONERES = mro.V_PHONERES <b>or mri.V_PHONERES = mro.V_PHONEMOBILE</b>))
below is the output.
--------------------------------------------------------------------------------------
MR_ID V_MRNO FIRST_NAME MIDDLE_NAME LAST_NAME V_PHONERES V_PHONEMOBILE
--------------------------------------------------------------------------------------
2502194 MR/07/4836 gautam prodyot guha 9833361897 9323712684
7340591 MR/08/4065 gautam guha 32589393
--------------------------------------------------------------------------------------
My questions are,
1. Why is the entry with id=4345424 not returned? Its v_phonemobile number (32589393) is the same as the v_phoneres of id 7340591, which I have checked with the OR condition (in bold in the above query).
2. For such subqueries, what is the order of execution? Does it run the inner query first, i.e. scan all records of the inner query for each record of the outer query, then move to the next outer record against a full execution of the inner query, or vice versa?
3. Should I use a join instead of the IN subquery? A join would put the matched records in the same row of the result, but I wanted them as separate rows, so I used the IN subquery.
Thanks & Regards
Hemanshu
September 10, 2012 - 5:58 pm UTC
I have included the scripts & text outputs using the Code button.
why doesn't anything line up then?
and I sure don't see any creates/inserts here either?
Need help with a query
Kkaran, September 10, 2012 - 3:58 am UTC
TABLE STRUCTURE
CREATE TABLE warehouse_objects
(
object_name VARCHAR2(20 CHAR)
, object_type VARCHAR2(20 CHAR)
);
SAMPLE DATA
Insert into WAREHOUSE_OBJECTS
(OBJECT_NAME,
OBJECT_TYPE)
Values
('Object1', '10');
Insert into WAREHOUSE_OBJECTS
(OBJECT_NAME,
OBJECT_TYPE)
Values
('Object2', '10');
Insert into WAREHOUSE_OBJECTS
(OBJECT_NAME,
OBJECT_TYPE)
Values
('Object3', '10');
Insert into WAREHOUSE_OBJECTS
(OBJECT_NAME,
OBJECT_TYPE)
Values
('Object4', '20');
Insert into WAREHOUSE_OBJECTS
(OBJECT_NAME,
OBJECT_TYPE)
Values
('Object5', '20');
COMMIT;
PROBLEM
I need to find the type with the maximum number of objects in this table and show a flag in the output with two possible values, yes or no. The logic is: for each object, if its type is the type with the maximum number of objects in this table, then the flag is yes; otherwise no.
So, for the sample data given above the expected output is like -
OBJECT_NAME OBJECT_TYPE FLAG
Object1 10 Yes
Object2 10 Yes
Object3 10 Yes
Object4 20 No
Object5 20 No
I have already written a query that does this (partly). The issue is that I am not able to handle the situation where two or more types have the same number of objects.
Also, I need your comments on the performance of this query when the number of records in the base table grows very large. Here is the query -
SELECT abcd.* , CASE when object_type = most_repeated_type THEN 'Yes' ELSE 'No' END as flag
FROM (
SELECT a.*, most_repeated_type
FROM warehouse_objects a
, (SELECT object_type most_repeated_type
FROM (SELECT object_type, COUNT(*) counter
FROM warehouse_objects GROUP BY object_type
)
WHERE counter = (SELECT MAX(COUNT(1)) FROM warehouse_objects GROUP BY object_type)
) b
) abcd
September 14, 2012 - 2:39 pm UTC
"... am not able to handle the situation when there are 2 or more types with same number of objects ..." - I am absolutely baffled therefore that your example doesn't contain this case!!!!! Seriously??? - think about it.
and what is the "right way to handle it"??? I'll handle it my way and you can modify it if it isn't what your right way is.
ops$tkyte%ORA11GR2> insert into warehouse_objects (object_name,object_type) select 'object' ||(rownum*10), 100 from dual connect by level <= 3;
3 rows created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select object_name, object_type, case when max(cnt) over () = cnt then 'Yes' else 'No' end flag
2 from (
3 select object_name, object_type, count(*) over (partition by object_type) cnt
4 from warehouse_objects
5 )
6 /
OBJECT_NAME OBJECT_TYPE FLA
------------------------------ -------------------- ---
Object2 10 Yes
Object3 10 Yes
Object1 10 Yes
object20 100 Yes
object30 100 Yes
object10 100 Yes
Object5 20 No
Object4 20 No
8 rows selected.
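As an aside, the same count-then-compare analytic pattern is portable to any engine with window functions. Below is a minimal sketch using Python's stdlib sqlite3 with the sample data above (SQLite stands in for Oracle here, so this is an illustration of the technique, not Oracle-specific syntax):

```python
import sqlite3

# Count rows per object_type with a window function, then flag the rows
# whose per-type count equals the overall maximum count.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table warehouse_objects (object_name text, object_type text);
insert into warehouse_objects values
  ('Object1','10'), ('Object2','10'), ('Object3','10'),
  ('Object4','20'), ('Object5','20');
""")
rows = conn.execute("""
select object_name, object_type,
       case when cnt = max(cnt) over () then 'Yes' else 'No' end as flag
from (select object_name, object_type,
             count(*) over (partition by object_type) as cnt
      from warehouse_objects)
order by object_name
""").fetchall()
for r in rows:
    print(r)
```

Note how ties for the maximum fall out naturally: every type whose count equals the maximum gets 'Yes', exactly as in the Oracle output above.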
Different Solution to the problem which I just posted
Karan, September 10, 2012 - 6:42 am UTC
Hi Tom,
This is in Continuation to the problem posted a while ago by me.
I have written another query to do the same job for me. In this one I have handled (or rather, tried to handle) the situation where more than one type is my most used OBJECT_TYPE.
I tried to run both the queries on a large table with no indexes and the first one performed much better than this one.
SELECT d.*, CASE WHEN rk = 1 AND no_of_common = 1 THEN 'Yes'
WHEN rk=1 AND no_of_common <> 1 THEN 'Equal'
WHEN rk > 1 THEN 'No'
END FLAG
FROM (SELECT c.*, COUNT(DISTINCT object_type) OVER(partition by rk) no_of_common
FROM (SELECT b.*, RANK() OVER (ORDER BY counter DESC) rk
FROM (SELECT a.*, COUNT(*) OVER (PARTITION BY object_type) as counter FROM warehouse_objects a) b
) c
) d
Please help me with this one.
Yours,
Karan
September 14, 2012 - 2:51 pm UTC
see above.
query
biju george, September 10, 2012 - 7:38 am UTC
Hi Karan,
Are you intending this--
select object_name,object_type,decode(dense_rank() over (partition by obj_cnt order by obj_cnt),1,'YES','NO') flag from
(select object_name,object_type,count(*) over (partition by object_type) obj_cnt from warehouse_objects);
Haven't tried executing it though...
NEED YOUR HELP FOR THIS REPORTING QUERY REQ
kumar, December 12, 2012 - 5:24 am UTC
Hi Tom,
I have to generate a count report for the product ids that were uploaded in the given time frame and their current status.
I am very much confused and not able to frame the whole requirement as a single query, so I chose to
query the whole record set and process the logic in a procedure block.
I just want to know whether this can be achieved in a single query.
CREATE TABLE PRODUCT
(
SEQID NUMBER,
PRDID NUMBER,
PRDCHNGDTTM DATE,
PRDCHNGDESC VARCHAR2(255),
PRODSTTS VARCHAR2(255),
MODULEOWNR NUMBER,
PRODUCTOWNR NUMBER
);
INSERT INTO PRODUCT VALUES(1,1,TO_DATE('01-Apr-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(2,1,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Need more details','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(3,2,TO_DATE('04-Apr-2012', 'DD-Mon-YYYY'),'EXLUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(4,2,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Approved','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(5,3,TO_DATE('04-Apr-2012', 'DD-Mon-YYYY'),'EXLUPLD','ERROR',null,null);
INSERT INTO PRODUCT VALUES(6,3,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'changed to draft cos document provide','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(7,3,TO_DATE('06-May-2012', 'DD-Mon-YYYY'),'Approved','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(8,3,TO_DATE('07-May-2012', 'DD-Mon-YYYY'),'Auditor not agreed','AUDIT NOT PASSED',null,null);
INSERT INTO PRODUCT VALUES(9,4,TO_DATE('03-Apr-2012', 'DD-Mon-YYYY'),'SYSUPLD','ERROR',null,null);
INSERT INTO PRODUCT VALUES(10,4,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(11,4,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',101,null);
INSERT INTO PRODUCT VALUES(12,5,TO_DATE('03-Apr-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(13,5,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(14,5,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',null,501);
INSERT INTO PRODUCT VALUES(15,6,TO_DATE('03-Apr-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(16,6,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(17,6,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',101,501);
INSERT INTO PRODUCT VALUES(18,7,TO_DATE('03-Apr-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(19,7,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(20,7,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',101,501);
INSERT INTO PRODUCT VALUES(21,7,TO_DATE('09-Jun-2012', 'DD-Mon-YYYY'),'Agreed','CLOSED',null,null);
INSERT INTO PRODUCT VALUES(22,8,TO_DATE('01-May-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(23,8,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Need more details','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(24,9,TO_DATE('02-May-2012', 'DD-Mon-YYYY'),'EXLUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(25,9,TO_DATE('06-Jun-2012', 'DD-Mon-YYYY'),'Approved','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(26,10,TO_DATE('08-May-2012', 'DD-Mon-YYYY'),'EXLUPLD','ERROR',null,null);
INSERT INTO PRODUCT VALUES(27,10,TO_DATE('06-Jun-2012', 'DD-Mon-YYYY'),'changed to draft cos document provide','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(28,10,TO_DATE('07-Jun-2012', 'DD-Mon-YYYY'),'Approved','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(29,10,TO_DATE('08-Jun-2012', 'DD-Mon-YYYY'),'Auditor not agreed','AUDIT NOT PASSED',null,null);
INSERT INTO PRODUCT VALUES(30,11,TO_DATE('03-May-2012', 'DD-Mon-YYYY'),'SYSUPLD','ERROR',null,null);
INSERT INTO PRODUCT VALUES(31,11,TO_DATE('05-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(32,11,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',101,null);
INSERT INTO PRODUCT VALUES(33,12,TO_DATE('10-May-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(34,12,TO_DATE('15-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(35,12,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',null,501);
INSERT INTO PRODUCT VALUES(36,13,TO_DATE('13-May-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(37,13,TO_DATE('25-May-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(38,13,TO_DATE('05-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',101,501);
INSERT INTO PRODUCT VALUES(39,14,TO_DATE('23-May-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
INSERT INTO PRODUCT VALUES(40,14,TO_DATE('06-Jun-2012', 'DD-Mon-YYYY'),'Approved cos details provided','PRELIMINARY',null,null);
INSERT INTO PRODUCT VALUES(41,14,TO_DATE('18-Jun-2012', 'DD-Mon-YYYY'),'Agreed','AUDITED',101,501);
INSERT INTO PRODUCT VALUES(42,14,TO_DATE('19-Jun-2012', 'DD-Mon-YYYY'),'Agreed','CLOSED',null,null);
INSERT INTO PRODUCT VALUES(43,15,TO_DATE('01-Apr-2012', 'DD-Mon-YYYY'),'SYSUPLD','DRAFT',null,null);
COMMIT;
I have to generate a monthly report with counts in the following columns; the logic for each count is given below:
Time sysupldnewrecord excelupldnewrecord stillindraft stillinerror stillinprelims audtnotpasd adtd closed
Apr-2012 6 2 2 0 1 1 3 1
so on for May-2012
May-2012
I need to retrieve data for a date range, say 01 Apr 2012 to 01 Jun 2012, meaning records uploaded during that period and a count of their current status.
Clarification on each column:
Time - a monthly report; since the date range is 01 Apr 2012 to 01 Jun 2012, this will be divided into two months of data.
sysupldnewrecord - record who have PRDCHNGDESC as SYSUPLD and PRODSTTS as DRAFT/ERROR during that period(suppose for the record Apr-2012)
excelupldnewrecord - record who have PRDCHNGDESC as EXLUPLD and PRODSTTS as DRAFT/ERROR during that period(suppose for the record Apr-2012)
stillindraft - the max status or current status for product is still DRAFT.
stillinerror - the max status or current status for product is still ERROR.
stillinprelims - the max status or current status for product is still PRELIMINARY.
audtnotpasd - the max status or current status for product is still AUDIT NOT PASSED.
adtd - the max status or current status for product is still AUDITED.
closed - the max status or current status for product is CLOSED.
I hope I have made my requirement clear.
I think we would have to use some kind of pivoting too.
My question is: can we get the records in the desired format with a single query, given the many where clauses and group-bys needed to retrieve counts based on different criteria?
Thanks in advance
December 17, 2012 - 3:25 pm UTC
... stillindraft - the max status or current status for product is still DRAFT.
...
define what that means exactly. it is not clear to me what you mean by that - does it mean that for all records in the window of time you are reporting on, for all records in the table, for something else??? be very precise.
Clarification on req.
kumar, December 18, 2012 - 12:58 am UTC
Sorry for not making myself clear in one go...
In continuation with my above post
suppose current date or sysdate is 18th december and i want to generate a report for the date range filter criteria
From date : 01st April
To date : 5th May.
I want to generate a count of all the records that were uploaded in our window time frame (i.e., uploaded between 1st April and 5th May) but are still in draft status even now in December (i.e., across all records). There could be two reasons for a record landing in the stillindraft column:
1) it has not been reviewed since it was uploaded
2) it has been reviewed (say on 6th June), but because more documents were needed or some logical check failed, it was kept in draft status (the max(seq) for that prodid has status DRAFT in the whole record).
It's a very big requirement. Please help me formulate this as a query.
December 18, 2012 - 1:11 pm UTC
I *think* this is what you mean, you are still wording things very very very vaguely (tell me, could a new hire on your team write the code you wrote given your "spec"? I doubt it...)
ops$tkyte%ORA11GR2> variable sdt varchar2(30)
ops$tkyte%ORA11GR2> variable edt varchar2(30)
ops$tkyte%ORA11GR2> exec :sdt := '01-apr-2012'; :edt := '01-may-2012';
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select trunc( PRDCHNGDTTM, 'mm' ) dt,
2 count( case when PRDCHNGDESC = 'SYSUPLD'
3 and PRODSTTS in ( 'DRAFT', 'ERROR' )
4 then 1
5 end ) sysupldnewrecord,
6 count( case when PRDCHNGDESC = 'EXLUPLD'
7 and PRODSTTS in ( 'DRAFT', 'ERROR' )
8 then 1
9 end ) exlupldnewrecord,
10 count( case when last_status = 'DRAFT' then 1 end ) stillindraft,
11 count( case when last_status = 'ERROR' then 1 end ) stillinERROR,
12 count( case when last_status = 'PRELIMINARY' then 1 end ) stillinprelims,
13 count( case when last_status = 'AUDIT NOT PASSED' then 1 end ) audtnotpasd,
14 count( case when last_status = 'AUDITED' then 1 end ) adtd,
15 count( case when last_status = 'CLOSED' then 1 end ) CLOSED
16 from (
17 select PRDID, PRDCHNGDTTM, PRDCHNGDESC, PRODSTTS,
18 first_value(PRODSTTS) over
19 (partition by prdid order by PRDCHNGDTTM DESC) last_status
20 from product
21 )
22 where PRDCHNGDTTM between to_date( :sdt, 'dd-mon-yyyy' )
23 and to_date( :edt, 'dd-mon-yyyy' )
24 group by trunc( PRDCHNGDTTM, 'mm' )
25 having count( case when PRDCHNGDESC = 'SYSUPLD'
26 and PRODSTTS in ( 'DRAFT', 'ERROR' )
27 then 1
28 end ) >0
29 or count( case when PRDCHNGDESC = 'EXLUPLD'
30 and PRODSTTS in ( 'DRAFT', 'ERROR' )
31 then 1
32 end ) >0
33 order by trunc( PRDCHNGDTTM, 'mm' )
34 /
DT SYSUPLDNEWRECORD EXLUPLDNEWRECORD STILLINDRAFT STILLINERROR STILLINPRELIMS AUDTNOTPASD ADTD CLOSED
--------- ---------------- ---------------- ------------ ------------ -------------- ----------- ---------- ----------
01-APR-12 6 2 2 0 1 1 3 1
01-MAY-12 1 0 1 0 0 0 0 0
Thanks a lot!!!!!!
kumar, December 19, 2012 - 12:40 am UTC
Hats off to you sir.....
Thats the query I was looking for.... Thanks a lot Sir.
Need Help to make query
Shayam, January 07, 2013 - 6:38 am UTC
Hi Tom,
I need help writing a SQL query to extract the text between specific positions/characters.
Create table test(flist VARCHAR2(200) not null);
The FLIST column contains records like the below:
FC028CONNE_IMPORT_WRONG_COMP_LENGAPXXXXPPPP
FC024CALL_FUNCTION_OPEN_ERRORAPXXXXPP
FC025OPEN_DATASET_NO_AUTHORITYAPXXXXPPPPPPPPPPPPPP
FC015RAISE_EXCEPTIONAPAXEPPPPPPPPPPPPPPPPPPPP
Could you please help me filter the above records from the FLIST column through a SQL script, as below:
FC028< CONNE_IMPORT_WRONG_COMP_LENG> APXXXXPPPP
FC024< CALL_FUNCTION_OPEN_ERROR> APXXXXPP
FC025< OPEN_DATASET_NO_AUTHORITY> APXXXXPPPPPPPPPPPPPP
FC015< RAISE_EXCEPTION> APAXEPPPPPPPPPPPPPPPPPPPP
That is, remove the first 5 characters and everything from the 'APXXXX...' portion onwards.
Output of SQL query should come like below:
CONNE_IMPORT_WRONG_COMP_LENG
CALL_FUNCTION_OPEN_ERROR
OPEN_DATASET_NO_AUTHORITY
RAISE_EXCEPTION
Please let me know if you need any more information on this.
Thanks a lot in advance...
January 14, 2013 - 11:25 am UTC
no creates
no inserts
no look
but this is pretty trivial as long as <> are "special"
use instr to find the "<"
use instr to find the ">"
now you know where to start your substr (at instr(C,'<')+1)
and how many characters to substr off (instr(C,'>')-instr(C,'<')-1)
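For illustration only, here is that instr/substr recipe sketched in Python (0-based find stands in for Oracle's 1-based instr; the sample string with the assumed '<' and '>' delimiters is hypothetical):

```python
# Equivalent of substr(c, instr(c,'<')+1, instr(c,'>')-instr(c,'<')-1):
# take the characters strictly between the first '<' and the first '>'.
def extract_between(c: str) -> str:
    start = c.find('<') + 1   # position just past '<'
    end = c.find('>')         # position of '>'
    return c[start:end]

print(extract_between('FC028<CONNE_IMPORT_WRONG_COMP_LENG>APXXXXPPPP'))
```

The same arithmetic transfers directly back to Oracle's SUBSTR/INSTR, the only difference being the 1-based positions.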
Help in writing SQL Query - Open for all.
Vikas Sangar, January 16, 2013 - 1:37 pm UTC
Dear Mr. Kyte,
How are you doing? Wish a very Happy New Year.
Sir, first of all, Sorry if this is not the right thread to post a question related to SQL Queries.
I need a small bit of help writing a SQL query / procedure that would parse the SQL statements in the V$SQL view, for all statements that use any sequence for inserts ...
"where lower(SQL_FULLTEXT) like '%nexval%';"
That would return me a short report with output as -
Sequence "sequence_owner.Sequence_name" is being used in "SQL_ID" for inserts in "TABLE_OWNER.TABLE_NAME" in Column "COLUMN_NAME" which is (or which is not) a part of "PRIMARY / UNIQUE KEY".
I tried a lot using instr & substr functions, but couldn't get it right at all. Therefore, I am here for help.
I would appreciate your kind and valuable help in writing a query/procedure to get this information.
Thanks & regards.
January 17, 2013 - 8:16 am UTC
don't think you'll be doing this in SQL...
so in plsql, you'd be parsing a string sort of like:
insert into t ( c1, c2, c3, c4, c5 )
values
( 'x,y', 'b', 'c,v', substr( x, 1, 5 ), seq.nextval )
or just
insert into t
values
( 'x,y', 'b', 'c,v', substr( x, 1, 5 ), seq.nextval )
it would be a non-trivial parse operation - functions with commas, literals, and so on.
But something a programmer would be able to do pretty readily.
so you would be doing something like
for x in (select sql_fulltext from v$sql where ...)
loop
parse and print it
end loop
in a procedure. I don't see doing this in a single sql statement - it'll be a sizable chunk of code you'll have to generate.
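A full parse is indeed non-trivial, but for a rough first cut one can often get away with pattern matching rather than parsing. The sketch below (the regex and sample statement are my own illustrative assumptions, not a robust solution) pulls the identifier directly in front of ".nextval" out of a statement string:

```python
import re

# Crude sketch, not a SQL parser (Tom's caveat stands): capture the
# dotted identifier immediately preceding ".nextval".
seq_pattern = re.compile(r'([a-zA-Z0-9_$#.]+)\.nextval', re.IGNORECASE)

stmt = "insert into t (c1, c2, c3) values ('x,y', substr(x,1,5), seq.nextval)"
print(seq_pattern.findall(stmt))   # prints ['seq']
```

This breaks on quoted identifiers, comments, and literals containing ".nextval", which is exactly why a real solution needs the programmatic parse described above.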
Help in writing SQL Query - Open for all.
Vikas Sangar, January 17, 2013 - 10:52 am UTC
Dear Mr. Kyte,
Thank you so much for your prompt response.
You are right, it's not possible using just a SQL query; it will require PL/SQL. So far I have been able to parse just the table name, but all my efforts fall apart when it comes to parsing the name of the sequence, for the obvious reasons already stated by you (my rough code below).
select substr(SQL_TEXT,
instr(SQL_TEXT,' ',1,2),
(instr(SQL_TEXT,' ',1,3))-(instr(SQL_TEXT,' ',1,2)))
from V$SQL where lower(SQL_TEXT) like 'insert%' and lower(SQL_TEXT) like '%.nextval%';
Sir, by any chance do you have any such code handy and available with you and if you would like to share it here?
As always, your valuable help in this regards will be highly appreciated.
Thanks & regards.
January 17, 2013 - 12:13 pm UTC
no i don't have any plsql sitting around to fully parse an insert/values statement.
sql_text on V$SQL
Misha Rosalie, January 18, 2013 - 2:19 am UTC
Hello Vikas,
I would use SQL_FULLTEXT (which is a CLOB) instead of SQL_TEXT.
For Shayam from india
Artieboy, January 18, 2013 - 1:53 pm UTC
If we assume that the end portion of the string which will be removed begins with 'AP' and there is no subsequent 'AP' combination then can't we just do something like this:
select flist, substr(c1, 1, c2-1) flist_new
from
(--2
select flist, c1, instr(c1,'AP',-1,1)c2 from
(--1
select flist, substr(flist,6)c1 from test
)--1
)--2
Is that OK Oracle?
Cheers!
sql_text on V$SQL @ Misha Rosalie
Vikas Sangar, January 22, 2013 - 3:39 pm UTC
Thanks for the advice Misha.
Yes, SQL_FULLTEXT would serve the purpose much better in case we have some really long queries that don't fit into the SQL_TEXT column.
But in my case I knew it would be no more than 1000-2000 characters, as I knew the queries that I created to test this out.
Thanks & Regards
Select query
Sanjay, February 05, 2013 - 6:31 am UTC
Create Table FY_CAL
(
SRNO Number(5),
Dsgn Varchar2(4),
Xcal NUmber(10),
YCal Number(10),
Ftr Number(10)
);
Insert into FY_CAL Values(1,'A1',10,20,3);
Insert into FY_CAL Values(2,'A1',NULL,NULL,5);
Insert into FY_CAL Values(3,'A1',NULL,NULL,2);
Insert into FY_CAL Values(4,'A1',NULL,NULL,7);
Insert into FY_CAL Values(5,'A1',NULL,NULL,4);
Insert into FY_CAL Values(6,'A2',10,20,3);
Insert into FY_CAL Values(7,'A2',NULL,NULL,2);
Insert into FY_CAL Values(8,'A2',NULL,NULL,6);
Commit;
Select * from FY_CAL;
SRNO DSGN XCAL YCAL FTR
1 A1 10 20 3
2 A1 5
3 A1 2
4 A1 7
5 A1 4
6 A2 10 20 3
7 A2 2
8 A2 6
The first row is given; each subsequent row is calculated from the previous row within the same Dsgn.
So for SRNO = 2 and Dsgn = 'A1':
the value of Xcal will be the Ycal of Srno 1, i.e. 20, multiplied by 2, which is 40;
the value of Ycal will be that Xcal + Ftr, i.e. 40 + 5, which is 45.
Similarly, for SRNO = 3 and Dsgn = 'A1':
the value of Xcal will be the Ycal of Srno 2, i.e. 45, multiplied by 2, which is 90;
the value of Ycal will be that Xcal + Ftr, i.e. 90 + 2, which is 92.
My question is: can this be done with a SELECT statement?
Select * from FY_CAL where DSGN = 'A1';
So output should be
SRNO DSGN XCAL YCAL FTR
1 A1 10 20 3
2 A1 40 45 5
3 A1 90 92 2
4 A1 184 191 7
5 A1 382 386 4
Can you suggest how this can be achieved using a SQL query.
Need your help
kumar, February 07, 2013 - 8:09 am UTC
Hi Tom,
I need your help for this req.
create table table1
(
id number,
col1 varchar2(30),
col2 varchar2(30),
col3 varchar2(30),
updt date
);
insert into table1 values(101,'c1','c2','c3',sysdate);
insert into table1 values(102,'c11','c22','c33',sysdate);
insert into table1 values(103,'c111','c222','c333',sysdate);
insert into table1 values(105,'F1','F2','F3',sysdate);
create table table2
(
id number,
col1 varchar2(30),
col2 varchar2(30),
col3 varchar2(30),
updt date
);
insert into table2 values(101,null,'c2','c3',sysdate);
insert into table2 values(102,'c11','c22','c33',sysdate);
insert into table2 values(104,'c111','c222','c333',sysdate);
insert into table2 values(105,'F11','F22','F33',sysdate);
I need to find the mismatching records based on all columns except the updt date column, from both sides (I mean rows which are in table1 but not in table2, and vice versa).
My output should look like:
id col1 col2 col3
101 c1 c2 c3
101 null c2 c3
103 c111 c222 c333
104 c111 c222 c333
105 F1 F2 F3
105 F11 F22 F33
February 11, 2013 - 8:34 am UTC
Need your help
kumar, February 07, 2013 - 8:14 am UTC
In the above requirement, is it possible to also have some kind of indicator, so that going through the output I can recognize which table each record comes from? E.g., can we have an extra column to show whether it is from table1 or table2:
1 meaning the record is from table1
2 meaning the record is from table2
February 11, 2013 - 8:35 am UTC
see above...
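One common way to do this is a two-sided set difference with a literal source column. A minimal sketch of the idea follows, using Python's stdlib sqlite3 with the poster's tables (SQLite's EXCEPT plays the role of Oracle's MINUS; the updt column is simply left out of the compare):

```python
import sqlite3

# Rows in table1 not in table2 get src=1; rows in table2 not in table1
# get src=2; identical rows (id 102) drop out of both differences.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table table1 (id int, col1 text, col2 text, col3 text);
create table table2 (id int, col1 text, col2 text, col3 text);
insert into table1 values (101,'c1','c2','c3'),(102,'c11','c22','c33'),
                          (103,'c111','c222','c333'),(105,'F1','F2','F3');
insert into table2 values (101,null,'c2','c3'),(102,'c11','c22','c33'),
                          (104,'c111','c222','c333'),(105,'F11','F22','F33');
""")
rows = conn.execute("""
select 1 as src, * from (select id,col1,col2,col3 from table1
                         except
                         select id,col1,col2,col3 from table2)
union all
select 2 as src, * from (select id,col1,col2,col3 from table2
                         except
                         select id,col1,col2,col3 from table1)
order by id, src
""").fetchall()
for r in rows:
    print(r)
```

The src column answers the followup directly: 1 means the row came from table1, 2 from table2.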
to Sanjay
igor, February 12, 2013 - 1:27 pm UTC
try something like this
SQL> SELECT srno, dsgn, xcal, ycal, ftr
2 FROM fy_cal
3 MODEL
4 PARTITION BY (dsgn)
5 DIMENSION BY (ROW_NUMBER() OVER (PARTITION BY dsgn ORDER BY srno) rn)
6 MEASURES (xcal, ycal, ftr, srno)
7 RULES ITERATE (100)
8 (
9 xcal[rn>1] = ycal[CV()-1]*2,
10 ycal[rn>1] = xcal[CV()] + ftr[CV()]
11 )
12 ORDER BY 1;
SRNO DSGN XCAL YCAL FTR
---------- ---- ---------- ---------- ----------
1 A1 10 20 3
2 A1 40 45 5
3 A1 90 92 2
4 A1 184 191 7
5 A1 382 386 4
6 A2 10 20 3
7 A2 40 42 2
8 A2 84 90 6
8 rows selected.
SQL>
Thanks a lot ...
Sanjay, February 18, 2013 - 8:27 am UTC
to Sanjay - a better method
igor, February 20, 2013 - 2:26 pm UTC
Sanjay, you are welcome. Actually, I have realized that iteration is not needed here ..
This method is much better (RULES AUTOMATIC ORDER does the trick!):
SQL> SELECT srno, dsgn, xcal, ycal, ftr
2 FROM fy_cal
3 MODEL
4 PARTITION BY (dsgn)
5 DIMENSION BY (ROW_NUMBER() OVER (PARTITION BY dsgn ORDER BY srno) rn)
6 MEASURES (srno, xcal, ycal, ftr)
7 RULES AUTOMATIC ORDER
8 ( xcal[rn>1] ORDER BY rn = ycal[CV()-1] * 2,
9 ycal[rn>1] ORDER BY rn = xcal[CV()] + ftr[CV()]
10 )
11 ORDER BY 1;
SRNO DSGN XCAL YCAL FTR
---------- ---- ---------- ---------- ----------
1 A1 10 20 3
2 A1 40 45 5
3 A1 90 92 2
4 A1 184 191 7
5 A1 382 386 4
6 A2 10 20 3
7 A2 40 42 2
8 A2 84 90 6
8 rows selected.
SQL>
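The MODEL rules above can be sanity-checked with a plain loop: within each dsgn, xcal[n] = ycal[n-1] * 2 and ycal[n] = xcal[n] + ftr[n]. A small Python sketch of that recurrence, seeded with the FY_CAL sample data:

```python
# (srno, dsgn, xcal, ycal, ftr); None marks the cells to be derived.
rows = [
    (1, 'A1', 10, 20, 3), (2, 'A1', None, None, 5), (3, 'A1', None, None, 2),
    (4, 'A1', None, None, 7), (5, 'A1', None, None, 4),
    (6, 'A2', 10, 20, 3), (7, 'A2', None, None, 2), (8, 'A2', None, None, 6),
]
result, prev_ycal = [], {}
for srno, dsgn, xcal, ycal, ftr in rows:
    if xcal is None:                  # derived row
        xcal = prev_ycal[dsgn] * 2    # xcal[rn>1] = ycal[cv()-1] * 2
        ycal = xcal + ftr             # ycal[rn>1] = xcal[cv()] + ftr[cv()]
    prev_ycal[dsgn] = ycal            # carry per-dsgn state forward
    result.append((srno, dsgn, xcal, ycal, ftr))
for row in result:
    print(row)
```

This reproduces the MODEL output row for row, which is a handy way to convince yourself the AUTOMATIC ORDER rules are doing what you intend.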
sql query help
Rajeshwaran, Jeyabal, February 21, 2013 - 12:04 am UTC
Tom,
How can I achieve the below requirement with this sample data?
1) for each id, output only those records where hcc='CMS', if any exist
2) if no record exists with hcc='CMS' for that id, get all the records matching hcc='EX'
I have the below two approaches; can you think of a better way than this?
drop table t purge;
create table t
( id number,
hcc varchar2(3)
constraint hcc_chk
check ( hcc in ('CMS','EX') ) )
/
insert into t values (1,'CMS');
insert into t values (1,'EX');
insert into t values (2,'EX');
insert into t values (2,'EX');
commit;
SQL#1
rajesh@ORA10G> variable x number;
rajesh@ORA10G> exec :x :=1;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
rajesh@ORA10G> select id,hcc
2 from (
3 select id,hcc,
4 count(decode(hcc,'CMS',1)) over() cms_cnt,
5 count(decode(hcc,'EX',1)) over() ex_cnt
6 from t
7 where id = :x
8 )
9 where ( hcc='CMS' and cms_cnt >0)
10 or ( hcc='EX' and ex_cnt >0 and cms_cnt =0)
11 /
ID HCC
---------- ---
1 CMS
Elapsed: 00:00:00.00
rajesh@ORA10G> exec :x :=2;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
rajesh@ORA10G> /
ID HCC
---------- ---
2 EX
2 EX
Elapsed: 00:00:00.00
rajesh@ORA10G>
SQL#2
rajesh@ORA10G> exec :x :=1;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
rajesh@ORA10G> select id,hcc
2 from t
3 where id = :x
4 and hcc ='CMS'
5 union all
6 select id,hcc
7 from t
8 where id =:x
9 and hcc != 'CMS'
10 and not exists (select null from t
11 where id =:x
12 and hcc='CMS')
13 /
ID HCC
---------- ---
1 CMS
Elapsed: 00:00:00.02
rajesh@ORA10G> exec :x :=2;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
rajesh@ORA10G> /
ID HCC
---------- ---
2 EX
2 EX
Elapsed: 00:00:00.00
rajesh@ORA10G>
February 25, 2013 - 10:11 am UTC
I would have approached this with analytics, similar to what you did in the first query..
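A sketch of that analytic approach, partitioned by id rather than filtered to a single bind variable (so all ids come back in one pass), shown here against SQLite via Python's stdlib sqlite3 as a stand-in for the Oracle version:

```python
import sqlite3

# Count the CMS rows per id with a window function, then keep the CMS
# rows, or every row for an id that has no CMS row at all.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table t (id int, hcc text check (hcc in ('CMS','EX')));
insert into t values (1,'CMS'),(1,'EX'),(2,'EX'),(2,'EX');
""")
rows = conn.execute("""
select id, hcc
from (select id, hcc,
             count(case when hcc = 'CMS' then 1 end)
               over (partition by id) as cms_cnt
      from t)
where hcc = 'CMS' or cms_cnt = 0
order by id, hcc
""").fetchall()
for r in rows:
    print(r)
```

Compared with SQL#2, this reads the table once instead of up to three times, which is the usual argument for the analytic form.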
Better way to write this?
Raghu, March 07, 2013 - 4:01 pm UTC
Hi Tom,
create table aabb
(
ab varchar2(10),
bc varchar2(10),
cd varchar2(10)
);
Insert into AABB Values ('ABC', 'DEF', 'ABC*');
Insert into AABB Values ('ABC', 'DEF', 'ABC');
Insert into AABB Values ('DEF', 'DEF', 'ABC');
Insert into AABB Values ('DEF*', 'DEF', 'ABC');
Insert into AABB Values ('ABC,ABC*', 'ABC', 'DEF');
COMMIT;
AB BC CD
------------------------
ABC DEF ABC*
ABC DEF ABC
DEF DEF ABC
DEF* DEF ABC
ABC,ABC* ABC DEF
What I'm trying to achieve is,
select
case
when instr(ab,'ABC') = 1 and (instr(ab,'ABC*') > 0 or instr(bc,'ABC*') > 0 or instr(cd,'ABC*') > 0) then 'warning'
when instr(ab,'DEF') = 1 and (instr(ab,'DEF*') > 0 or instr(bc,'DEF*') > 0 or instr(cd,'DEF*') > 0) then 'warning'
when instr(ab,'ABC*') = 1 and (instr(ab,'ABC') > 0 or instr(bc,'ABC') > 0 or instr(cd,'ABC') > 0) then 'warning'
when instr(ab,'DEF*') = 1 and (instr(ab,'DEF') > 0 or instr(bc,'DEF') > 0 or instr(cd,'DEF') > 0) then 'warning'
else 'no warning'
end abc
from aabb
The output of the query should say, in this case, 'Warning' if the row contains 'ABC' and 'ABC*' or 'DEF' and 'DEF*' in any order in any of the columns.
Is there a better way to compose the query? I feel silly using instr() on all the said columns.
Thanks
March 11, 2013 - 8:04 am UTC
your query doesn't match your text.
you wrote:
The output of the query should say, in this case, 'Warning' if the row contains
'ABC' and 'ABC*' or 'DEF' and 'DEF*' in any order in any of the columns.
I added a row:
ABC* ABC* ABC* warning
it gets a warning, but you said "abc AND abc*" in any order. this doesn't have ABC for example.
If what you are saying is - you want a warning on any row such that ABC and ABC* appear OR DEF and DEF* appear, then here is a suggestion:
ops$tkyte%ORA11GR2> create table aabb
2 (
3 ab varchar2(10),
4 bc varchar2(10),
5 cd varchar2(10)
6 );
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into aabb
2 with data
3 as
4 (select 'ABC' x from dual union all
5 select 'ABC*' from dual union all
6 select 'ABC,ABC*' from dual union all
7 select 'DEF*' from dual union all
8 select 'DEF' from dual
9 )
10 select d1.x, d2.x, d3.x
11 from data d1, data d2, data d3
12 /
125 rows created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select ab, bc, cd, my_flag, your_flag, decode( my_flag, your_flag, '', '<-----' )
2 from (
3 select x.*,
4 case when ( abc1_cnt > 0 and abc2_cnt > 0 )
5 then 'warning'
6 when ( def1_cnt > 0 and def2_cnt > 0 )
7 then 'warning'
8 else 'no warning'
9 end my_flag,
10 case
11 when instr(ab,'ABC') = 1 and (instr(ab,'ABC*') > 0 or instr(bc,'ABC*') > 0 or instr(cd,'ABC*') > 0) then 'warning'
12 when instr(ab,'DEF') = 1 and (instr(ab,'DEF*') > 0 or instr(bc,'DEF*') > 0 or instr(cd,'DEF*') > 0) then 'warning'
13 when instr(ab,'ABC*') = 1 and (instr(ab,'ABC') > 0 or instr(bc,'ABC') > 0 or instr(cd,'ABC') > 0) then 'warning'
14 when instr(ab,'DEF*') = 1 and (instr(ab,'DEF') > 0 or instr(bc,'DEF') > 0 or instr(cd,'DEF') > 0) then 'warning'
15 else 'no warning'
16 end your_flag
17 from (
18 select aabb.*,
19 case when ','||ab||','||bc||','||cd||',' like '%,ABC*,%' then 1 else 0 end abc1_cnt,
20 case when ','||ab||','||bc||','||cd||',' like '%,ABC,%' then 1 else 0 end abc2_cnt,
21 case when ','||ab||','||bc||','||cd||',' like '%,DEF*,%' then 1 else 0 end DEF1_cnt,
22 case when ','||ab||','||bc||','||cd||',' like '%,DEF,%' then 1 else 0 end DEF2_cnt
23 from aabb
24 ) x
25 ) y
26 /
AB BC CD MY_FLAG YOUR_FLAG DECODE
---------- ---------- ---------- ---------- ---------- ------
ABC ABC ABC no warning no warning
ABC ABC ABC* warning warning
ABC ABC ABC,ABC* warning warning
ABC ABC DEF* no warning no warning
ABC ABC DEF no warning no warning
ABC ABC* ABC warning warning
ABC ABC* ABC* warning warning
ABC ABC* ABC,ABC* warning warning
ABC ABC* DEF* warning warning
ABC ABC* DEF warning warning
ABC ABC,ABC* ABC warning warning
ABC ABC,ABC* ABC* warning warning
ABC ABC,ABC* ABC,ABC* warning warning
ABC ABC,ABC* DEF* warning warning
ABC ABC,ABC* DEF warning warning
ABC DEF* ABC no warning no warning
ABC DEF* ABC* warning warning
ABC DEF* ABC,ABC* warning warning
ABC DEF* DEF* no warning no warning
ABC DEF* DEF warning no warning <-----
ABC DEF ABC no warning no warning
ABC DEF ABC* warning warning
ABC DEF ABC,ABC* warning warning
ABC DEF DEF* warning no warning <-----
ABC DEF DEF no warning no warning
ABC* ABC ABC warning warning
ABC* ABC ABC* warning warning
ABC* ABC ABC,ABC* warning warning
ABC* ABC DEF* warning warning
ABC* ABC DEF warning warning
ABC* ABC* ABC warning warning
ABC* ABC* ABC* no warning warning <-----
ABC* ABC* ABC,ABC* warning warning
ABC* ABC* DEF* no warning warning <-----
ABC* ABC* DEF no warning warning <-----
ABC* ABC,ABC* ABC warning warning
ABC* ABC,ABC* ABC* warning warning
ABC* ABC,ABC* ABC,ABC* warning warning
ABC* ABC,ABC* DEF* warning warning
ABC* ABC,ABC* DEF warning warning
ABC* DEF* ABC warning warning
ABC* DEF* ABC* no warning warning <-----
ABC* DEF* ABC,ABC* warning warning
ABC* DEF* DEF* no warning warning <-----
ABC* DEF* DEF warning warning
ABC* DEF ABC warning warning
ABC* DEF ABC* no warning warning <-----
ABC* DEF ABC,ABC* warning warning
ABC* DEF DEF* warning warning
ABC* DEF DEF no warning warning <-----
ABC,ABC* ABC ABC warning warning
ABC,ABC* ABC ABC* warning warning
ABC,ABC* ABC ABC,ABC* warning warning
ABC,ABC* ABC DEF* warning warning
ABC,ABC* ABC DEF warning warning
ABC,ABC* ABC* ABC warning warning
ABC,ABC* ABC* ABC* warning warning
ABC,ABC* ABC* ABC,ABC* warning warning
ABC,ABC* ABC* DEF* warning warning
ABC,ABC* ABC* DEF warning warning
ABC,ABC* ABC,ABC* ABC warning warning
ABC,ABC* ABC,ABC* ABC* warning warning
ABC,ABC* ABC,ABC* ABC,ABC* warning warning
ABC,ABC* ABC,ABC* DEF* warning warning
ABC,ABC* ABC,ABC* DEF warning warning
ABC,ABC* DEF* ABC warning warning
ABC,ABC* DEF* ABC* warning warning
ABC,ABC* DEF* ABC,ABC* warning warning
ABC,ABC* DEF* DEF* warning warning
ABC,ABC* DEF* DEF warning warning
ABC,ABC* DEF ABC warning warning
ABC,ABC* DEF ABC* warning warning
ABC,ABC* DEF ABC,ABC* warning warning
ABC,ABC* DEF DEF* warning warning
ABC,ABC* DEF DEF warning warning
DEF* ABC ABC no warning warning <-----
DEF* ABC ABC* warning warning
DEF* ABC ABC,ABC* warning warning
DEF* ABC DEF* no warning warning <-----
DEF* ABC DEF warning warning
DEF* ABC* ABC warning warning
DEF* ABC* ABC* no warning warning <-----
DEF* ABC* ABC,ABC* warning warning
DEF* ABC* DEF* no warning warning <-----
DEF* ABC* DEF warning warning
DEF* ABC,ABC* ABC warning warning
DEF* ABC,ABC* ABC* warning warning
DEF* ABC,ABC* ABC,ABC* warning warning
DEF* ABC,ABC* DEF* warning warning
DEF* ABC,ABC* DEF warning warning
DEF* DEF* ABC no warning warning <-----
DEF* DEF* ABC* no warning warning <-----
DEF* DEF* ABC,ABC* warning warning
DEF* DEF* DEF* no warning warning <-----
DEF* DEF* DEF warning warning
DEF* DEF ABC warning warning
DEF* DEF ABC* warning warning
DEF* DEF ABC,ABC* warning warning
DEF* DEF DEF* warning warning
DEF* DEF DEF warning warning
DEF ABC ABC no warning no warning
DEF ABC ABC* warning no warning <-----
DEF ABC ABC,ABC* warning no warning <-----
DEF ABC DEF* warning warning
DEF ABC DEF no warning no warning
DEF ABC* ABC warning no warning <-----
DEF ABC* ABC* no warning no warning
DEF ABC* ABC,ABC* warning no warning <-----
DEF ABC* DEF* warning warning
DEF ABC* DEF no warning no warning
DEF ABC,ABC* ABC warning no warning <-----
DEF ABC,ABC* ABC* warning no warning <-----
DEF ABC,ABC* ABC,ABC* warning no warning <-----
DEF ABC,ABC* DEF* warning warning
DEF ABC,ABC* DEF warning no warning <-----
DEF DEF* ABC warning warning
DEF DEF* ABC* warning warning
DEF DEF* ABC,ABC* warning warning
DEF DEF* DEF* warning warning
DEF DEF* DEF warning warning
DEF DEF ABC no warning no warning
DEF DEF ABC* no warning no warning
DEF DEF ABC,ABC* warning no warning <-----
DEF DEF DEF* warning warning
DEF DEF DEF no warning no warning
125 rows selected.
you can see where we differ. I warned on every line where (abc and abc*) OR (def and def*) appeared in any order.
raj dubey, October 24, 2013 - 10:48 am UTC
In SQL/PL-SQL, my question is: I have a table with six columns (id, name, mark1, mark2, total, percentage). The id is auto-generated and I fill in only the name and mark columns; I want total and percentage to be computed automatically,
with the help of a stored procedure.
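No answer was posted here, but one possible sketch (an assumption about what is being asked: total and percentage derived from mark1 and mark2, each out of 100) is to use virtual columns rather than a stored procedure:

```sql
-- A sketch, not an answer from the thread. Table/column names come from
-- the question; the identity clause needs 12c (use a sequence + trigger
-- on older versions), virtual columns need 11g.
create table student_marks
( id         number generated always as identity,          -- auto-generated
  name       varchar2(50),
  mark1      number,
  mark2      number,
  total      number generated always as (mark1 + mark2) virtual,
  percentage number generated always as ((mark1 + mark2) / 2) virtual
);

insert into student_marks (name, mark1, mark2) values ('Raj', 70, 80);
-- querying the row returns total and percentage computed automatically
```

With virtual columns the values can never drift out of sync with the marks, which a procedure-based approach cannot guarantee.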
sql
ramya, July 29, 2014 - 1:53 pm UTC
hi sir,
suppose there are 1000 employees and I only want to get 100 empnos. What is the
query, sir?
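No answer was posted; assuming the question means "return only 100 of the 1000 empnos", a sketch:

```sql
-- 12c and later: the row limiting clause
select empno
from   emp
order  by empno
fetch  first 100 rows only;

-- older versions: ROWNUM against an ordered inline view
select empno
from  ( select empno from emp order by empno )
where  rownum <= 100;
```

Note the ORDER BY inside the inline view in the second form; applying ROWNUM before sorting would return an arbitrary 100 rows.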
I want to generate the rank
naresh, December 04, 2014 - 7:35 am UTC
This is my studentmarks table:
st_id,subcode,intmarks,pasfai,externalmarks,pasfail
{1 EC101 15 P 40 P 55 P
1 EC102 10 P 20 F 30 F
1 EC103 8 F 50 P 58 F
1 EC104 5 F 25 F 30 F
1 EC105 10 P 40 P 50 P
1 EC106 8 F 25 F 33 F
2 EC101 20 P 50 P 70 P
2 EC102 15 P 60 P 75 P
2 EC103 20 P 80 P 100 P
2 EC104 14 P 46 P 60 P
2 EC105 18 P 74 P 92 P
2 EC106 16 P 68 P 84 P
3 EC101 10 P 40 P 50 P
3 EC102 20 P 60 P 80 P
3 EC103 12 P 40 P 52 P
3 EC104 20 P 50 P 70 P
3 EC105 12 P 40 P 52 P
3 EC106 14 P 60 P 74 P
4 EC101 10 P 30 P 40 P
4 EC102 14 P 40 P 54 P
4 EC103 12 P 50 P 62 P
4 EC104 8 F 46 P 54 F
4 EC105 4 F 25 F 29 F
4 EC106 16 P 68 P 84 P}
This is my subjectdetails table: {EC101 Electronic devices and circuits ECE 20 10 80 30 1
EC102 Analog Communications ECE 20 10 80 30 1
EC103 Signals and systems ECE 20 10 80 30 1
EC104 Digital Electronics ECE 20 10 80 30 1
EC105 Micro processors ECE 20 10 80 30 1
EC106 Basic Electrical Engineering ECE 20 10 80 30 1
EC201 Digital Communications ECE 20 10 80 30 2
EC202 Micro controllers ECE 20 10 80 30 2
EC203 Advanced communication systems ECE 20 10 80 30 2
EC204 Puls and Digital Circuits ECE 20 10 80 30 2
EC205 Industrial Electronics ECE 20 10 80 30 2
EC206 Circuit Theory ECE 20 10 80 30 2}
From these two tables I want to generate studentid, semesternumber, semestertotalmarks, and rank (rank only for the students who passed both the internal and external exams), and I want to create a procedure for it. Please help me.
December 04, 2014 - 4:26 pm UTC
well, since you are the only one with the create table statement and the inserts needed to populate them - you should get working on the SQL.
You don't expect me or anyone else to turn your raw data into a useful test case do you? You would need to specify primary/foreign keys so people know how the schema works and so on... You would even have to do things like tell us the structure of the second table (details like column names for example...)
You'd also have to specify what "rank" means in this context
and probably convince us somehow that this is not homework for school...
where clause based on CASE result
BV, February 10, 2015 - 10:15 am UTC
Hi Tom
Your site has always been the dictionary for Oracle questions. I am just looking to see if there is a way to build a static SQL statement based on a CASE result,
for ex,
select (case when (select count(*) from table1 where id=:a1) >= 1 then
(select * from table2)
else
(select * from table3)
end
from dual;
Is it possible?
Thanks,
BV
sql query
sandip, February 16, 2015 - 6:37 am UTC
Looks like it won't work.
"inconsistent datatypes: expected CHAR got NUMBER" would come up,
as "select * from table2" returns a result set, whereas your condition is based on count(*), which is a number.
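A working alternative (a sketch, assuming table2 and table3 have identical column lists, which any single-result-set approach requires):

```sql
-- Return table2's rows when table1 has a match for :a1, else table3's rows.
select t2.*
from   table2 t2
where  exists (select null from table1 where id = :a1)
union all
select t3.*
from   table3 t3
where  not exists (select null from table1 where id = :a1);
```

Only one branch of the UNION ALL produces rows for a given bind value, so this stays a single static statement.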
Assignment 6-8: Identifying Sale Products
ELHOUSSINE, April 04, 2015 - 12:17 am UTC
When a product is placed on sale, Brewbean's records the sale's start and end dates in columns of the BB_PRODUCT table. A function is needed to provide sales information when a shopper selects an item. If a product is on sale, the function should return the value ON SALE!. However, if it isn't on sale, the function should return the value Great Deal!. These values are used on the product display page. Create a function named CK_SALE_SF that accepts a date and product ID as arguments, checks whether the date falls within the product's sale period, and returns the corresponding string value. Test the function with the product ID 6 and two dates: 10-JUN-12 and 19-JUN-12. Verify your results by reviewing the product sales information.
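This assignment was never answered in the thread; a possible sketch follows. The BB_PRODUCT column names used here (idproduct, salestart, saleend) are assumptions and may differ from the actual schema:

```sql
-- A sketch only; verify the column names against the real BB_PRODUCT table.
create or replace function ck_sale_sf
  ( p_date date,
    p_id   number )
  return varchar2
is
  v_cnt number;
begin
  select count(*)
    into v_cnt
    from bb_product
   where idproduct = p_id
     and p_date between salestart and saleend;   -- assumed column names

  return case when v_cnt > 0 then 'ON SALE!' else 'Great Deal!' end;
end;
/
-- e.g. select ck_sale_sf(to_date('10-JUN-12','DD-MON-RR'), 6) from dual;
```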
oracle queries
Devshree Zeple, September 20, 2015 - 10:26 am UTC
How to create eid,fname,lname, joining_date,'DD-MM-YY,salary,city,designation from employee in oracle ?
September 21, 2015 - 3:31 am UTC
Please create a new question, and provide a lot more detail please.
Different SQL query
Praveen Ray, April 20, 2016 - 8:08 am UTC
Hi - please assume this is a very long query, with each expression composed of some computation, sub-queries, etc. I am interested only in the K column, which is a comma-separated list. I want the query output to stay the same except that the K column explodes into multiple rows, as explained:
select 1 a, 2 b, 3 c, 4 d, 5 e, 6 f, 7 g, 8 i, 9 j,
'a,b' k,
11 l, 12 m, 13 n, 14 o, 15 p, 16 q, 17 r, 18 s from dual
union all
select 5 a, 6 b, 7 c, 4 d, 1 e, 9 f, 7 g, 8 i, 2 j,
'x,y,z' k,
10 l, 16 m, 13 n, 14 o, 35 p, 16 q, 27 r, 28 s from dual;
a b c ... k ... r s
---------------------------------
1 2 3 ... a ... 17 18
1 2 3 ... b ... 17 18
5 6 7 ... x ... 27 28
5 6 7 ... y ... 27 28
5 6 7 ... z ... 27 28
April 20, 2016 - 10:23 am UTC
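The answer body is missing here. One common idiom for exploding a comma-separated column into rows (a sketch, using the question's column names, not taken from the original answer) is REGEXP_SUBSTR with CONNECT BY:

```sql
with t as (
  select 1 a, 'a,b'   k from dual union all
  select 5 a, 'x,y,z' k from dual
)
select a,
       regexp_substr(k, '[^,]+', 1, level) k
from   t
-- the PRIOR sys_guid() trick stops CONNECT BY from cross-joining rows
connect by level <= regexp_count(k, ',') + 1
       and prior a = a
       and prior sys_guid() is not null;
```

All the other (non-K) columns simply repeat on each generated row, which matches the requested output.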
Choose column as per argument
Sandy, September 21, 2016 - 4:25 pm UTC
create table EMPLOYEES
(
employee_id NUMBER(6) not null,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
job_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
department_id NUMBER(4),
phone_number2 VARCHAR2(20)
);
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (183, 'Girard', 'Geoni', 'GGEONI', '650.507.9879', to_date('03-02-2008', 'dd-mm-yyyy'), 'SH_CLERK', 2800.00, null, 120, 50, '650.507.9879');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (189, 'Jennifer', 'Dilly', 'JDILLY', '650.505.2876', to_date('13-08-2005', 'dd-mm-yyyy'), 'SH_CLERK', 3600.00, null, 122, 50, '650.505.2876');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (195, 'Vance', 'Jones', 'VJONES', '650.501.4876', to_date('17-03-2007', 'dd-mm-yyyy'), 'SH_CLERK', 2800.00, null, 123, 50, '650.501.4876');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (100, 'Steven', 'King', 'SKING', '515.123.4567', to_date('17-06-2003', 'dd-mm-yyyy'), 'AD_PRES', 24000.00, null, null, 90, null);
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (101, 'Neena', 'Kochhar', 'NKOCHHAR', '515.123.4568', to_date('21-09-2005', 'dd-mm-yyyy'), 'AD_VP', 17000.00, null, 100, 90, '515.123.4568');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (102, 'Lex', 'De Haan', 'LDEHAAN', '515.123.4569', to_date('13-01-2001', 'dd-mm-yyyy'), 'AD_VP', 17000.00, null, 100, 90, '515.123.4569');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (103, 'Alexander', 'Hunold', 'AHUNOLD', '590.423.4567', to_date('03-01-2006', 'dd-mm-yyyy'), 'IT_PROG', 9000.00, null, 102, 60, '590.423.4567');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (104, 'Bruce', 'Ernst', 'BERNST', '590.423.4568', to_date('21-05-2007', 'dd-mm-yyyy'), 'IT_PROG', 6000.00, null, 103, 60, '590.423.4568');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (105, 'David', 'Austin', 'DAUSTIN', '590.423.4569', to_date('25-06-2005', 'dd-mm-yyyy'), 'IT_PROG', 4800.00, null, 103, 60, '590.423.4569');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (106, 'Valli', 'Pataballa', 'VPATABAL', '590.423.4560', to_date('05-02-2006', 'dd-mm-yyyy'), 'IT_PROG', 4800.00, null, 103, 60, '590.423.4560');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (107, 'Diana', 'Lorentz', 'DLORENTZ', '590.423.5567', to_date('07-02-2007', 'dd-mm-yyyy'), 'IT_PROG', 4200.00, null, 103, 60, '590.423.5567');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (108, 'Nancy', 'Greenberg', 'NGREENBE', '515.124.4569', to_date('17-08-2002', 'dd-mm-yyyy'), 'FI_MGR', 12008.00, null, 101, 100, '515.124.4569');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (109, 'Daniel', 'Faviet', 'DFAVIET', '515.124.4169', to_date('16-08-2002', 'dd-mm-yyyy'), 'FI_ACCOUNT', 9000.00, null, 108, 100, '515.124.4169');
insert into EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID, PHONE_NUMBER2)
values (110, 'John', 'Chen', 'JCHEN', '515.124.4269', to_date('28-09-2005', 'dd-mm-yyyy'), 'FI_ACCOUNT', 8200.00, null, 108, 100, '515.124.4269');
Now procedure :-
create or replace procedure get_empInfo(vPhone_number varchar2,
vPhone_number2 varchar2,
cur out sys_refcursor) as
begin
open cur for
with a as
(
select b.*,
( case when vPhone_number = 'Y' then b.phone_number
       when vPhone_number2 = 'Y' then b.phone_number2
  -- when vPhone_number = 'Y' and vPhone_number2 = 'Y' then b.phone_number, b.phone_number2 -- point a
  end ) phone  -- an alias is required inside the WITH subquery
from employees b
)
select * from a;
end;
"Point a" means: display both columns when both arguments are 'Y'.
Now my question is: the client may pass vPhone_number = 'Y', vPhone_number2 = 'Y', or both.
I want to return only the column(s) requested by the arguments, e.g. if vPhone_number = 'Y' and vPhone_number2 = 'N' then display only phone_number, and vice versa.
Can we achieve this using a WHERE clause and a CASE statement, for performance reasons?
Happy to help
September 21, 2016 - 10:19 pm UTC
Since you are just returning a ref cursor, just use two SQL's
create or replace
procedure get_empInfo(vPhone_number varchar2,
                      vPhone_number2 varchar2,
                      cur out sys_refcursor) as
begin
  if vPhone_number = 'Y' and vPhone_number2 = 'Y' then
    open cur for
      select b.*,
             b.phone_number,
             b.phone_number2
        from employees b;
  else
    open cur for
      select b.*,
             case when vPhone_number = 'Y' then b.phone_number
                  when vPhone_number2 = 'Y' then b.phone_number2
             end phone
        from employees b;
  end if;
end;
UPDATE with Dynamic Columns
A reader, July 11, 2017 - 9:46 am UTC
Hi,
In a table we have some data with trailing spaces (e.g. 'Santhosh ') in a few columns, so I have to trim those columns with a procedure.
So I created a procedure like this:
CREATE OR REPLACE PROCEDURE TRIM_CUST_STG_TAB_PRC(P_COLUMN_NAME VARCHAR2) IS
X VARCHAR2(4000);
Y VARCHAR2(2000);
-- Z varchar2(12) := 'Sale';
-- xx Varchar2(100);
v_1 varchar2(1000) := p_column_Name;
BEGIN
y :=P_COLUMN_NAME;
X := 'update cust_conv_stg set '||P_COLUMN_NAME|| '='||TRIM(P_COLUMN_NAME);
dbms_output.put_line('x: '||X);
-- dbms_output.put_line('v_1: '||v_1);
EXECUTE IMMEDIATE X;
DBMS_OUTPUT.PUT_LINE('success');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error while updating table'||sqlerrm);
END;
Here I pass the column name via a parameter, and I want the procedure to trim the column I passed. But it did not work. Can you tell me where I made a mistake?
Thanks in advance
July 11, 2017 - 2:36 pm UTC
What does "it did not work" mean? What precisely is the problem?
A reader, July 12, 2017 - 3:57 am UTC
It did not trim the data.
Latest question
GJ, July 12, 2017 - 10:33 am UTC
I think it's due to this part of the code. You are trimming the column name passed as a parameter:
X := 'update cust_conv_stg set '||P_COLUMN_NAME|| '='||TRIM(P_COLUMN_NAME);
should be
X := 'update cust_conv_stg set '||P_COLUMN_NAME|| '=TRIM('||P_COLUMN_NAME||')';
--double check my syntax. i havent tested it
July 12, 2017 - 1:12 pm UTC
Yes, I think you're right.
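Putting GJ's fix together, the corrected procedure might look like this (a sketch, untested; DBMS_ASSERT guards the spliced-in identifier, and the original WHEN OTHERS handler is dropped because it silently swallowed any error):

```sql
create or replace procedure trim_cust_stg_tab_prc( p_column_name varchar2 )
is
  v_sql varchar2(4000);
begin
  -- simple_sql_name raises an error if p_column_name is not a valid identifier
  v_sql := 'update cust_conv_stg set '
        || dbms_assert.simple_sql_name(p_column_name)
        || ' = trim('
        || dbms_assert.simple_sql_name(p_column_name)
        || ')';
  dbms_output.put_line(v_sql);   -- show the statement being run
  execute immediate v_sql;
end;
/
```

The original bug was that TRIM ran at PL/SQL build time on the column *name*, producing `set col = col`, so the update ran "successfully" while changing nothing.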
How to pass dynamic parameter which if of comma separated columns to aggregate functions.
sidddesh, January 20, 2021 - 5:06 am UTC
Hi tom,
I have a function which takes a parameter of comma-separated column names ('col1,col2,col3,col4').
I want to apply an aggregation (SUM) over those dynamically passed columns,
like below:
select sum('col1,col2,col3,col4') from table;
How can I do this? Could you please suggest an approach?
January 20, 2021 - 6:00 am UTC
Assuming the table has columns COL1, COL2, COL3, COL4
what does "sum('col1,col2,col3,col4') " mean?
Does it mean sum(col1+col2+col3+col4) ?
Or sum(col1)+sum(col2)+... etc
Or something else?
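Whichever meaning is intended, column names cannot be bound, so the list has to be spliced into the statement with dynamic SQL. A sketch, assuming the sum(col1)+sum(col2)+... interpretation (and note p_cols is concatenated into SQL, so it must come from a trusted source or be validated):

```sql
create or replace function sum_cols( p_cols in varchar2,
                                     p_tab  in varchar2 )
  return number
is
  v_sql varchar2(4000);
  v_res number;
begin
  -- 'col1,col2,col3' becomes sum(col1) + sum(col2) + sum(col3)
  v_sql := 'select sum(' || replace(p_cols, ',', ') + sum(')
        || ') from ' || dbms_assert.sql_object_name(p_tab);
  execute immediate v_sql into v_res;
  return v_res;
end;
/
-- e.g. sum_cols('col1,col2,col3,col4', 'my_table')
```

For the sum(col1+col2+...) interpretation, replace the commas with '+' instead; as the NULL discussion in this thread shows, the two forms can give different answers.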
NULL Values
Dieter, January 20, 2021 - 3:07 pm UTC
Just to elaborate on Connor's questions ...
At first glance, both of Connor's statements should render the same result, BUT:
NULL values are evaluated differently by aggregation and arithmetic functions,
resulting in different results when NULLS are present!
--arithmetic first
WITH a AS
(SELECT 1 col1,
2 col2
FROM dual
UNION ALL
SELECT -.1 col1,
3 col2
FROM dual
UNION ALL
SELECT -4 col1,
NULL col2
FROM dual)
SELECT SUM(col1) + SUM(col2)
FROM a;
--aggregation first
WITH a AS
(SELECT 1 col1,
2 col2
FROM dual
UNION ALL
SELECT -.1 col1,
3 col2
FROM dual
UNION ALL
SELECT -4 col1,
NULL col2
FROM dual)
SELECT SUM(col1 + col2)
FROM a;
January 22, 2021 - 10:50 am UTC
Well put
Null values
A reader, January 21, 2021 - 1:41 am UTC
Oops, comments in wrong order.