Question and Answer

Tom Kyte

Thanks for the question, Raj.

Asked: March 27, 2001 - 4:24 pm UTC

Last updated: August 14, 2013 - 3:15 pm UTC

Version: Oracle 8i

Viewed 50K+ times.

You Asked

Hi Tom,

Thanks very much for your answers for all my questions.

Now I have one more such silly question.

My requirement is I have to transform all my rows into columns.

Here is an example:

First table: t_events(event, desc, logdate)

Event  Desc  Logdate
1      abcd  xxxxxxx
2      xyz   zzzzzz

Second table: t_event_data(dataid, event, element, curval)

Dataid  Event  Element      Curval
1       1      userid       Mani
2       1      company      WFB
3       1      transaction  wells
4       1      userid       rao
5       1      company      NOTWFB
6       1      transaction  wires
7       2      val1         0
8       2      val2         3/12/01  etc...

The above structures are my input tables.

Now my goal is to create a view for each event. The views should
look as follows.

View for event 1:
event_view1(event_id, desc, logdate, userid, company, transaction);

Event_id  Desc  Logdate  Userid  Company  Transaction
1         abcd  xxxxx    Mani    WFB      wells
1         abcd  xxxxx    rao     NOTWFB   wires

View for event 2:
event_view2(event_id, desc, logdate, val1, val2);

Event_id  Desc  Logdate  Val1  Val2
2         xyz   zzzzzz   0     3/12/01


Based on the above input tables I have to create the views which can
be used by my developers.

Could you please suggest a solution for this, or do I need to change my table structure?

This is exactly like transforming rows to columns. Is there any
facility Oracle 8i provides to achieve this?

I appreciate your early reply.

Thank you very much.
Rao

and Tom said...

decode can do this.

ops$tkyte@ORA8I.WORLD> create or replace view event_view1
2 as
3 select t_events.event, t_events.descript, t_events.logdate,
4 max( decode( element, 'userid', curval, NULL ) ) userid,
5 max( decode( element, 'company', curval, NULL )) company,
6 max( decode( element, 'transaction', curval, NULL)) transaction
7 from t_events, t_event_data
8 where t_events.event = t_event_data.event
9 group by t_events.event, t_events.descript, t_events.logdate
10 /

View created.

ops$tkyte@ORA8I.WORLD>
ops$tkyte@ORA8I.WORLD>
ops$tkyte@ORA8I.WORLD> create or replace view event_view2
2 as
3 select t_events.event, t_events.descript, t_events.logdate,
4 max( decode( element, 'val1', curval, NULL ) ) val1,
5 max( decode( element, 'val2', curval, NULL )) val2
6 from t_events, t_event_data
7 where t_events.event = t_event_data.event
8 group by t_events.event, t_events.descript, t_events.logdate
9 /

View created.

ops$tkyte@ORA8I.WORLD>
ops$tkyte@ORA8I.WORLD>
ops$tkyte@ORA8I.WORLD>
ops$tkyte@ORA8I.WORLD> select * from event_view1;

EVENT DESCRIPT LOGDATE USERID COMPANY TRANSACTION
---------- ---------- --------- --------------- --------------- ---------------
1 abc 28-MAR-01 rao WFB wires

ops$tkyte@ORA8I.WORLD> select * from event_view2;

EVENT DESCRIPT LOGDATE VAL1 VAL2
---------- ---------- --------- --------------- ---------------
1 abc 28-MAR-01 3/12/01 0



Comments

Rows To Columns: What about 9i

Georg, March 30, 2003 - 3:52 am UTC

Hi Tom,

great "trick". What about 9i? The same trick, or is there a better one?

Thanks in advance.
Georg

Tom Kyte
March 30, 2003 - 8:25 am UTC

sql is sql -- there is no "pivot" command in sql.

This is not a trick, it is a sound technique. SQL needs to know "there are N columns in the result set" at parse time -- this is the way it is done.

Now, end user reporting tools -- that's different. Reports/Discoverer -- they all do pivots any way you want, but they take a flat sql query and pivot it manually.

Pivot view without group by

Marc, March 30, 2003 - 11:19 am UTC

Hi Tom

I have to create a pivot view for a table with more than 200 attributes in the view and 250 million rows per month. The group by operation costs a lot of resources in the temp tablespace (size 50 GB) and a huge amount of CPU time for sort and aggregate.

Do you know a trick to build a pivot WITHOUT group by? Is there a way to do it with analytic functions? Many thanks for your feedback.

Kind Regards
Marc

Tom Kyte
March 30, 2003 - 11:41 am UTC

In order to pivot, you have to group by. A group by does not necessarily have to sort.

For what you describe, 50gig of temp isn't that much -- increasing your sort_area_size will help considerably -- ensure your temp extents match your sort_area_size (at least -- they can be N*sort_area_size as well).

In order to pivot you need to find all of the rows that match your key -- that's a group by.

Just FYI

Jeff, July 09, 2003 - 11:53 am UTC

If you search your site for 'row to columns' this thread doesn't come up at all; I went up to about the 50th result.

Thanks so much for all of your time and effort, I and countless others really do appreciate it

Tom Kyte
July 09, 2003 - 1:33 pm UTC

if you search for

"row to columns"

(double quotes) it is the first of two.


if you search for

'row to columns'

which is the same as

row to columns

you get everything with row & to & columns which is over 1,000 hits!

AKMD, July 29, 2003 - 2:52 am UTC

good.
I have information like below in tables T1 and T2.

SQL> desc t1;
 Name                            Null?    Type
 ------------------------------- -------- ----
 A                                        NUMBER
 B                                        NUMBER
 C                                        NUMBER
SQL> select * from t1;
        A         B         C
--------- --------- ---------
       23        45        60
       33        55        70
       33        23        70
SQL> desc t2
 Name                            Null?    Type
 ------------------------------- -------- ----
 D                                        VARCHAR2(10)
 I                                        NUMBER
SQL> select * from t2;
D                  I
---------- ---------
TEST              23
TEST1             45
TEST2             60
TEST3             33
TEST4             55
TEST5             70

I need output as follows:

        A D                  B D                  C D
--------- ---------- --------- ---------- --------- ----------
       23 TEST              45 TEST1             60 TEST2
       33 TEST3             23 TEST              70 TEST5
       33 TEST3             55 TEST4             70 TEST5

Nothing is fixed.

Thanks in advance.
 

Tom Kyte
July 29, 2003 - 7:07 am UTC

simple -- "join"


select t1.a, t2_1.d, t1.b, t2_2.d, t1.c, t2_3.d
from t1, t2 t2_1, t2 t2_2, t2 t2_3
where t1.a = t2_1.i
and t1.b = t2_2.i
and t1.c = t2_3.i
/




A reader, July 30, 2003 - 12:10 am UTC

Hi,

select q.a, q.d, p.b, p.d, s.c, s.d from
(select distinct x.a a, y.d d from t1 x, t2 y
where y.i = x.a) q,
(select distinct x.b b, y.d d from t1 x, t2 y
where y.i = x.b) p,
(select distinct x.c c, y.d d from t1 x, t2 y
where y.i = x.c) s, t1
where q.a = t1.a and p.b = t1.b and s.c = t1.c
/

I used the above query to get the result.
Can you explain the difference?

Thank you.

Tom Kyte
July 30, 2003 - 7:25 am UTC

besides the fact that yours will run infinitely slower?


why would you make it so hard?

A reader, July 30, 2003 - 9:56 am UTC

I thought inline views were faster in queries -- any advice?

Tom Kyte
July 30, 2003 - 10:21 am UTC

I hope the redskins do better this year?

inline views are just a construct. they are no different than the query against a stored view. they are no different than a query that doesn't use an inline view.

They can however be used to develop queries that

a) don't need a stored view
b) could not be answered without a stored view otherwise

group by does not have to sort

A reader, August 02, 2003 - 1:32 am UTC

" group by does not have to sort 
necessarily.
"

SQL> select deptno, sum(sal)
  2  from emp1
  3  group by deptno;

    DEPTNO   SUM(SAL)
---------- ----------
        10       8750
        20      10875
        30       9400

when does a group by sort automatically, and when does it not?

Please illustrate.

Tom Kyte
August 02, 2003 - 8:03 am UTC

whenever it feels like it -- whenever it can -- whenever it makes sense.

say you "group by a, b" and there is a nice index on "b,a"..... might it use the index and the data might appear to be sorted by b,a?


say you "group by a,b" and you've hash partitioned by a. might it return results from each partition one after the other -- so it appears sorted by a,b AFTER hashing (so if you had 8 hash partitions, you might get 8 "apparently sorted" result sets one after the other -- the entire thing isn't sorted)


say your nls_sort is french. the group by's sort is a binary sort, so the data won't be sorted according to your linguistic sort order


and so on -- there are many ways to group by that do not return sorted data.

There is only one way to get sorted data -- order by!
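The last point can be illustrated against the stock EMP table (a minimal sketch):

```sql
select deptno, sum(sal) total_sal
  from emp
 group by deptno
 order by deptno  -- without this line, any ordering you observe is luck
/
```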

Rows to Columns

M S Reddy, December 10, 2003 - 3:13 pm UTC

Tom,
can this be done in one single sql statement without looping.

set serveroutput on size 20000
declare
procedure temp is
v_str varchar2(76):= '';
begin
for v_rec in (select 'a' col from t1 where rownum < 77)
loop
v_str := v_str || v_rec.col;
end loop;
dbms_output.put_line(v_str) ;
end;

begin
temp;
end;
/

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

that rownum < 77 is not fixed it can vary.

Thanks,
M S Reddy

Tom Kyte
December 10, 2003 - 4:19 pm UTC

rpad( 'a', 77, 'a')

????

Rows To Columns

M S Reddy, December 11, 2003 - 10:36 am UTC

Tom,
That number 77 is not fixed, and the column does not always have the value 'a' -- it can be anything else.
Consider this example:

T1
-------------------------------
c1 c2 c3
-------------------------------
a b c
d e f
g h i
j k l

the result should be

-------------------------------
c1 c2 c3
-------------------------------
adgj behk cfil

Can this be done in single sql statement?

Thanks,
M S Reddy.

Tom Kyte
December 11, 2003 - 12:23 pm UTC

search for stragg on this site (and don't oversimplify your examples -- give real ones. Given the question as asked, the answer is in fact "rpad"!!!! 77 can be a bind, 'a' can be a bind. It just looked like you took a really hard way to get N letters in a row.)

Dynamic pivot query

Luda, December 23, 2003 - 4:49 pm UTC

Hi Tom,
I have two problems in this query:
select item_code,inventory_id,
sum(decode(monthly,'Jan',cnt,null)) jan,
sum(decode(monthly,'Feb',cnt,null)) feb,
sum(decode(monthly,'mart',cnt,null)) mart
from(
select i.item_code, i.inventory_id,
decode(to_char(b.ship_date,'mon/yy'),'jan/03','Jan'
,'feb/03','Feb'
,'mar/03','mart') as monthly,
sum(NVL(a.itm_shipped,0)) as cnt
from inventory i,order_items a, orders b
where i.inventory_id=a.inventory_id(+)
and i.account_id=b.account_id
and a.order_id=b.order_id
and i.account_id=20
and ship_date>='01-Jan-03'
and ship_date <='18-Dec-03'
and rownum < 500
group by i.item_code ,i.inventory_id ,
decode(to_char(b.ship_date,'mon/yy'),'jan/03','Jan'
,'feb/03','Feb'
,'mar/03','mart')
)
group by item_code, inventory_id;
1. It's too slow.
2. I do not know which months the user will pick -- the whole year, a few months, or just sysdate. I need something dynamic.
I use this query in ColdFusion.
Please help.

Tom Kyte
December 23, 2003 - 5:41 pm UTC

well -- step 1. lose the outer join. it doesn't count in your case.

You have:

where i.inventory_id=a.inventory_id(+)
and i.account_id=b.account_id
and a.order_id=b.order_id

So, if we actually outer join to A, then A.ORDER_ID is NULL by definition and therefore the join to B never finds anything (since NULL is never equal to anything).

Hence, either

a) query is buggy and returns wrong answer because you did not continue outer joining
b) outer join is not relevant.

I'll vote for B :) outer join not relevant so lose it.


Now -- rownum < 500????? what's up with that??? ok, i know how to make this go faster -- change 500 to 1 or 0, it'll run much faster and give you just as reliable an answer! (you do realize that the same query run twice with the SAME EXACT INPUTS could give you entirely different answers here??????? You get some 500 rows and then group them.)


So, i'm very much thinking "this query has too many things wrong with it to even think about tuning -- i don't think it returns anything useful as written" (hence to make it faster, don't run it.)
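On the "dynamic" part of the question: SQL needs the column list fixed at parse time, so a dynamic pivot means building the statement text first (in PL/SQL, or in ColdFusion) and executing it afterwards. A rough sketch under that assumption, reusing the poster's table and column names -- the inner-query placeholder must be filled in with the real (corrected) grouped query:

```sql
create or replace function build_pivot_sql( p_start in date, p_end in date )
    return varchar2
as
    l_sql varchar2(4000) := 'select item_code, inventory_id';
begin
    -- generate one MAX(DECODE()) column per month present in the date range
    for m in ( select distinct to_char(ship_date, 'Mon') mon
                 from orders
                where ship_date between p_start and p_end )
    loop
        l_sql := l_sql || ', max(decode(monthly, ''' || m.mon ||
                          ''', cnt)) "' || m.mon || '"';
    end loop;
    return l_sql || ' from ( /* inner grouped query here */ )' ||
                    ' group by item_code, inventory_id';
end;
/
```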



A reader, December 23, 2003 - 6:48 pm UTC

I did it a different way, but it takes a lot of time.
First I pick up all the ids from the inventory table, then in a loop
I pass each id to another query and find the sum and month.

select id, col1, col2 ... from i
loop
  select sum(item), to_char(b.ship_date,'mon/yy')
  from a, b
  where a.id = query1.id
  from date1 to date2
  group by to_char(b.ship_date,'mon/yy')
end loop

So if the second query doesn't return any record, I show the first select's row anyway. Let's say the first query returns 1700 rows --
then I have to show 1700 rows in the report, with a blank in the sum field.
The problem is only time: too many calls, and the user will definitely lose patience.
Can I do it in one select, and faster?

Tom Kyte
December 23, 2003 - 7:59 pm UTC

I cannot tune that which does not compute or make sense. I don't even know the question here (since the query is wrong one way or the other)

No one can tune this. the question being asked of the data isn't very clear at all.

dynamic query against statspack

A reader, January 30, 2004 - 5:30 am UTC

Hi

I have following query

select executions, hash_value
from STATS$SQL_SUMMARY
where hash_value in (3822005349, 3255899958, 635556189)
and snap_id between 200 and 222

I would like to know if there is a way in SQL to make this query run against several snap_id. Right now I have to repeat the query for each

and snap_id between 200 and 222

is it possible?

Tom Kyte
January 30, 2004 - 8:18 am UTC

don't understand what you mean.

(it is right now running against several snap_ids -- 200, 201, 202, .... 222. so, not sure what you mean)

dynamic query against statspack

A reader, January 30, 2004 - 1:32 pm UTC

sorry I was not clear

right now I do this

select executions, hash_value
from STATS$SQL_SUMMARY
where hash_value in (3822005349, 3255899958, 635556189)
and snap_id between 201 and 202

select executions, hash_value
from STATS$SQL_SUMMARY
where hash_value in (3822005349, 3255899958, 635556189)
and snap_id between 202 and 203

select executions, hash_value
from STATS$SQL_SUMMARY
where hash_value in (3822005349, 3255899958, 635556189)
and snap_id between 203 and 204

and so on until

select executions, hash_value
from STATS$SQL_SUMMARY
where hash_value in (3822005349, 3255899958, 635556189)
and snap_id between 221 and 222

I have to run the same query many times with different snap_id ranges. I was wondering if I can run all of these SQLs in one go in plain SQL, or do I need to use PL/SQL?

Tom Kyte
January 30, 2004 - 7:48 pm UTC

tell me what you do/did with the output of these queries and I'll tell you how to write it as a single query. But I'd need to know what you were doing with it sort of.


But, in a nutshell -- it'll involve LAG() and/or LEAD() (hint hint)

The output

A reader, February 03, 2004 - 9:11 am UTC

Hi

Sorry for the delay..

I tried this query

select hash_value, snap_id, lead(snap_id) over (order by snap_ID) leadsnap, cpu_time
from STATS$SQL_SUMMARY
where snap_id between 383 and 385
and hash_value = 4133308760

HASH_VALUE    SNAP_ID   LEADSNAP   CPU_TIME
---------- ---------- ---------- ----------
4133308760        383        384  887070000
4133308760        384        385  887210000
4133308760        385             887310000


What I want is this

select min(snap_id) || ', ' || max(snap_id) snap_range, (max(cpu_time) - min(cpu_time))/1000000 cpu_used
from STATS$SQL_SUMMARY
where hash_value = 4133308760
and snap_id between 383 and 384
union all
select min(snap_id) || ', ' || max(snap_id) snap_range, (max(cpu_time) - min(cpu_time))/1000000 cpu_used
from STATS$SQL_SUMMARY
where hash_value = 4133308760
and snap_id between 384 and 385

SNAP_RANGE       CPU_USED
-------------- ----------
383, 384              .14
384, 385               .1
384, 385 .1


possible :-?

Tom Kyte
February 03, 2004 - 9:27 am UTC

ops$tkyte@ORA10G> select *
  2    from (
  3  select hash_value,
  4         snap_id  || ',' ||
  5         lead(snap_id) over (order by snap_ID) snapid,
  6         (lead(cpu_time) over (order by snap_ID) - cpu_time)/1000000 cpu
  7  from t
  8  where snap_id between 383 and 385
  9    and hash_value = 4133308760
 10        )
 11   where snapid not like '%,'
 12  /
 
HASH_VALUE SNAPID            CPU
---------- ---------- ----------
4133308760 383,384           .14
4133308760 384,385            .1
 
 

this works but

A reader, February 03, 2004 - 9:16 am UTC

Hi again

I rewrote the query; now it looks like this:

select snap_id || ', ' || leadsnap snap_range, (cpu_time - cpu_used)/1000000 cpu_time
from
(select hash_value, snap_id, lead(snap_id) over (order by snap_ID) leadsnap, cpu_time, lead(cpu_time) over (order by snap_id) cpu_used
from STATS$SQL_SUMMARY
where snap_id between 383 and 387
and hash_value = 4133308760)

SNAP_RANGE       CPU_TIME
-------------- ----------
383, 384             -.14
384, 385              -.1
385, 386             -.55
386, 387             -.51
387,


It works -- how do I get rid of the last row, though?

Tom Kyte
February 03, 2004 - 9:27 am UTC

inline view and filter with a where clause.
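Spelled out against the query above (a sketch; the subtraction is also flipped to LEAD() minus the current row so the deltas come out positive, as in the answer further up):

```sql
select snap_range, cpu_used
  from (
select snap_id || ', ' || lead(snap_id) over (order by snap_id) snap_range,
       (lead(cpu_time) over (order by snap_id) - cpu_time)/1000000 cpu_used
  from stats$sql_summary
 where snap_id between 383 and 387
   and hash_value = 4133308760
       )
 where cpu_used is not null  -- filters the last row, where LEAD() is null
/
```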

convert row to ?

A reader, February 16, 2004 - 6:52 am UTC

Hi

I am writing a procedure to analyze tables. Basically I have a table with these two columns:

table_name
column_name

data looks like this

table_name column_name
----------- -------------
emp deptno
emp mgr

What I am trying to do is get this output:

deptno, mgr

This is because I need to gather histograms for certain columns of certain tables. Of course, the number of rows can vary depending on the number of columns that require histograms.

Cheers

Tom Kyte
February 16, 2004 - 7:45 am UTC

search this site for stragg
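STRAGG is a user-defined aggregate function covered elsewhere on this site. As an aside for readers on later releases: from Oracle 11g Release 2 on, the built-in LISTAGG does the same job natively -- a sketch, assuming the poster's two-column table is named T:

```sql
select table_name,
       listagg(column_name, ', ')
         within group (order by column_name) column_list
  from t
 group by table_name
/
```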

Charanjiv, April 15, 2004 - 6:33 am UTC

We have a person table and a telephone table
Person
Person ID

Telephone
Person ID -- Can be NULL
tel_Num
default flag

Result set we are looking for:

Person id   tel# (default)   tel#2
1           1231234          NULL
2           NULL             NULL

The problem I am facing is that I want only two numbers, even if there are more than
two in the telephone table for one person.

How can this be done?

Tom Kyte
April 15, 2004 - 8:37 am UTC



select person_id,
max(decode(default_flag,'Y',tel_num)),
max(decode(default_flag,'N',tel_num))
from T
group by person_id;


I don't see record "2" naturally occurring here -- there wouldn't have been a single row for that person.

help with this pivot

Jimmy, April 30, 2004 - 4:07 am UTC

Hi tom

I was wondering how I can get the results of this query to pivot.

My query:

SELECT emp.ID ID1, hr.id ID2, emp.NAME name1, hr.NAME name2, emp.dept dept1, hr.dept dept2
FROM employees emp, human_resources hr
WHERE emp.ID = hr.ID;

Returns the following results:

ID1 ID2 Name1 Name2 Dept1 Dept2
-------------------------------------------
154 154 Jones Smith HR HR

The desired output i need is for the results to be pivoted so they appear as follows:

x y
------------
154 154
Jones Smith
HR HR

Is this possible?

I would be very grateful of any help you can give me.

Thanks in advance





Tom Kyte
April 30, 2004 - 7:44 am UTC

select decode( r, 1, id1, 2, name1, 3, dept1 ),
       decode( r, 1, id2, 2, name2, 3, dept2 )
from (YOUR_QUERY_GOES_HERE),
     (select 1 r from dual UNION ALL select 2 from dual
      UNION ALL select 3 from dual )
/




Help Required in pivot Query.

A reader, May 10, 2004 - 7:10 am UTC

hi tom ,

I need help with a pivot query.
Without using XML functions, the query gives the correct, expected result. But when I add the two EXTRACT calls to the query, I get multiple records.


When i execute this query
SELECT
ord.id OrderID
,max(decode(ownertypeid, 2, FileName, NULL)) AblaufData
,max(decode(ownertypeid, 3, FileName, NULL)) TypeData
,max(decode(ownertypeid, 4, FileName, NULL)) RegistryKOnfig
-- ,EXTRACT(fc.SequenceData, '/MEPSequenceData/@LastTimeStamp/text()').GetStringVal() seq_LastTimeStamp
-- ,EXTRACT(fc.TypData, '/Data/@LastTimeStamp/text()').GetStringVal() typ_LastTimeStamp
FROM
tabl_order ord
,tabl_fileinfo fi
,tabl_filecontent fc
,tabl_measurementlog mlog
WHERE
mlog.orderid = ord.id
AND fi.id = fc.fileid
AND ((mlog.templateid = fi.ownerid AND mlog.templateversion = fi.additionalkey)
OR (ord.id = fi.ownerid))
GROUP BY
ord.id
-- , EXTRACT(fc.SequenceData, '/MEPSequenceData/@LastTimeStamp/text()').GetStringVal()
-- , EXTRACT(fc.TypData, '/Data/@LastTimeStamp/text()').GetStringVal()
/

I get correct results -- a single record.

But I also want an attribute from the XML file, so I use XML functions to get that data.
And at the end I want a single record from the query.

But when I uncomment the EXTRACT part in the column list of the SQL statement, I get two records.

Basically I get all the required data in a single view based on the above query and generate XML out of it.

How can this be fixed?

Tom Kyte
May 10, 2004 - 8:41 am UTC

not sure anything is broken. without a small, concise, yet 100% complete example (in the fashion I have in my answers) -- it is hard to say what is happening.

most likely -- you are getting the correct results in BOTH cases.

you are comparing

group by A

with

group by A, B, C

which equates to apples and oranges.

Help Required in pivot Query

A reader, May 23, 2004 - 3:28 am UTC

Sorry for the delayed posting. Please refer to my earlier post above with the same title.
Here is a test case and data for your reference.

create table tabl_order
( ID NUMBER
,ORDERNAME VARCHAR2(32))
/

create table tabl_measurementlog
( ID NUMBER
,ORDERID NUMBER
,TEMPLATEID NUMBER
,TEMPLATEVERSION NUMBER(8,2))
/

create table tabl_fileinfo
( ID NUMBER
,FILENAME VARCHAR2(32)
,OWNERID NUMBER
,ADDITIONALKEY NUMBER(8,2)
,OWNERTYPEID NUMBER)
/

create table tabl_filecontent
( FILEID NUMBER
,SEQUENCEDATA SYS.XMLTYPE
,TYPDATA SYS.XMLTYPE)
/

insert into tabl_order values(259206, 'MeasurementOrder')
/

insert into tabl_measurementlog values(1, 259206, 259023, 1.1)
/

insert into tabl_fileinfo values(1, 'Typdaten.xml',259206, null, 3)
/

insert into tabl_fileinfo values(2, 'Ablaufdaten.xml',259023, 1.1, 2)
/

insert into tabl_filecontent values
(1
,null
,xmltype('<?xml version="1.0" encoding="UTF-8"?>
<Data xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
LastTimeStamp="20040430171532">
    <Allgemein>
        <Allgemein.Row TimeStamp="20040430171532">
            <Allgemein.Column id="Name" DataType="text">XMLAblDatenWithSchema</Allgemein.Column>
            <Allgemein.Column id="Version" DataType="text">0.1</Allgemein.Column>
        </Allgemein.Row>
    </Allgemein>
</Data>'))
/

insert into tabl_filecontent values
(2
,xmltype('<?xml version="1.0" encoding="UTF-8"?>
<MEPSequenceData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
LastTimeStamp="20030330181532">
<Allgemein>
<Allgemein.Row TimeStamp="20040430171532">
<Allgemein.Column id="Name" DataType="text">XMLAblDatenWithSchema</Allgemein.Column>
<Allgemein.Column id="Version" DataType="text">0.1</Allgemein.Column>
</Allgemein.Row>
</Allgemein>
</MEPSequenceData>')
, null)
/

set pagesize 100
set linesize 192
column orderid format 9999999
column AblaufData format a20
column TypeData format a20
column typ_LastTimeStamp format a20
column seq_LastTimeStamp format a20

SELECT
ord.id OrderID
,max(decode(ownertypeid, 2, FileName, NULL)) AblaufData
,max(decode(ownertypeid, 3, FileName, NULL)) TypeData
,EXTRACT(fc.SequenceData, '/MEPSequenceData@LastTimeStamp').GetStringVal() seq_LastTimeStamp
,EXTRACT(fc.TypData, '/Data@LastTimeStamp').GetStringVal() typ_LastTimeStamp
FROM
tabl_order ord
,tabl_fileinfo fi
,tabl_filecontent fc
,tabl_measurementlog mlog
WHERE
mlog.orderid = ord.id
AND fi.id = fc.fileid
AND ((mlog.templateid = fi.ownerid AND mlog.templateversion = fi.additionalkey)
OR (ord.id = fi.ownerid))
GROUP BY
ord.id
, EXTRACT(fc.SequenceData, '/MEPSequenceData@LastTimeStamp').GetStringVal()
, EXTRACT(fc.TypData, '/Data@LastTimeStamp').GetStringVal()
/

clear columns

Here is the output.

ORDERID ABLAUFDATA TYPEDATA SEQ_LASTTIMESTAMP TYP_LASTTIMESTAMP
-------- -------------------- -------------------- -------------------- --------------------
259206 Typdaten.xml 20040430171532
259206 Ablaufdaten.xml 20030330181532

But when I exclude the EXTRACT function columns from the query, I get a single record:

ORDERID ABLAUFDATA TYPEDATA
---------- -------------------------------- --------------------------------
259206 Ablaufdaten.xml Typdaten.xml





But here is the output I am looking for, with the EXTRACT function included:

ORDERID ABLAUFDATA TYPEDATA SEQ_LASTTIMESTAMP TYP_LASTTIMESTAMP
-------- -------------------- -------------------- -------------------- --------------------
259206 Ablaufdaten.xml Typdaten.xml 20030330181532 20040430171532

Your help will be greatly appreciated.


Tom Kyte
May 23, 2004 - 9:37 am UTC

if you are always expecting two rows and just want to squash up the typedata/etc columns -- max them all:

ops$tkyte@ORA9IR2> SELECT
  2        ord.id OrderID
  3           ,max(decode(ownertypeid, 2, FileName, NULL)) AblaufData
  4          ,max(decode(ownertypeid, 3, FileName, NULL)) TypeData
  5          ,max(EXTRACT(fc.SequenceData, '/MEPSequenceData@LastTimeStamp').GetStringVal()) seq_LastTimeStamp
  6          ,max(EXTRACT(fc.TypData, '/Data@LastTimeStamp').GetStringVal()) typ_LastTimeStamp
  7  FROM
  8           tabl_order                ord
  9        ,tabl_fileinfo            fi
 10        ,tabl_filecontent       fc
 11        ,tabl_measurementlog    mlog
 12  WHERE
 13         mlog.orderid     = ord.id
 14    AND  fi.id            = fc.fileid
 15    AND ((mlog.templateid = fi.ownerid  AND  mlog.templateversion =
 16  fi.additionalkey)
 17     OR (ord.id           = fi.ownerid))
 18  GROUP BY ord.id
 19  /
 
 ORDERID ABLAUFDATA      TYPEDATA           SEQ_LASTTIMES TYP_LASTTIMESTAMP
-------- --------------  ------------------ -------------- ------------------
  259206 Ablaufdaten.xml Typdaten.xml      20030330181532     20040430171532
 
 

excellent!!!

A reader, May 24, 2004 - 2:40 am UTC

excellent as always.

selecting first not null column

praveen, June 27, 2004 - 12:18 am UTC

Hi tom,
I have a table T with four columns, say, A1, A2, A3, A4. All are nullable columns. Now I need to write a query that fetches the first non-null column value from each record.
For example, if the table contains values such as:

A1  A2  A3  A4
--  --  --  --
501 865
653 999

The query result should be:

Result
------
501
653

Is this possible through a query, or do I need to write procedural code for this? I'm using Oracle 9i Release 2. Thank you very much for your time.

Tom Kyte
June 27, 2004 - 11:04 am UTC

no create table, no inserts... sigh.

read about coalesce()

ops$tkyte@ORA9IR2> select coalesce( null, 1 ) from dual;
 
COALESCE(NULL,1)
----------------
               1
 
ops$tkyte@ORA9IR2> select coalesce( null, null, 1, 2 ) from dual;
 
COALESCE(NULL,NULL,1,2)
-----------------------
                      1
 
ops$tkyte@ORA9IR2>
 

Pretty cool but ...:(

Arindam, July 10, 2004 - 9:20 pm UTC

Tom,
I have gone through the analytics and pivot questions that have been asked and answered by you, but I still have a question.

I have a table like :

Table X
--------
TPID number(10) pk
CDID number(10)
SRCID varchar2(10)


Now there can be a number of CDIDs associated with each SRCID. For example:
CDID SRCID
1 HT
1 FF
1 CT
2 HT
3 CT
4 FF
4 BG
...............
...............

Now I want to find out from this table in one report:

1. UNIQUE CDID for a SRCID
2. COUNT OF OVERLAPS

For eg.

SRCID CNT1(BG) CNT2(CT) CNT3(FF) CNT3(HT) UNIQUE CNT(SRCID)
BG 10 2 3 0 5
CT 15 30 5 1 9
FF 3 5 15 0 7

CNT = Counts

The columns, apart from the last UNIQUE CNT(SRCID), give the overlapping SRCIDs' counts. The last column basically gives the unique SRCID count. I think it's a pivot report, or maybe some analytics; I am at a loss -- can you please help?

Regards
Arindam

Tom Kyte
July 11, 2004 - 9:32 am UTC

no create tables, no inserts -- and not a clear enough definition of the problem.

count of "overlaps"??? what are "overlaps"?

give a complete example (all inputs) and detail how the output is to be derived from that input.

I understand the first column in your output and the last column - but that is about it.

Sorry for being vague...

Arindam, July 11, 2004 - 12:56 pm UTC

Here is an example...

create table test
(TPID number(10) primary key,
CDID number(10),
SRCID varchar2(10));

insert into test values(1,1,'AV');
insert into test values(2,1,'AV');
insert into test values(3,1,'AV');
insert into test values(4,2,'AV');
insert into test values(5,2,'HT');
insert into test values(6,2,'HT');
insert into test values(7,2,'BG');
insert into test values(8,3,'CT');
insert into test values(9,3,'CT');
insert into test values(10,3,'HT');
insert into test values(11,3,'AV');
insert into test values(12,4,'CT');
insert into test values(13,4,'FF');
insert into test values(14,4,'HT');

commit;

Now I want to find out from this table in one report:

1. UNIQUE CDID for a SRCID
2. COUNT OF OVERLAPS

Overlaps = How many (counts) src's overlap each other. For example :

number of unique CDID using 'HT' = 0
number of CDID using 'HT' but also uses 'BG' = 1 (CDID 2)
number of CDID using 'HT' but also uses 'AV' = 1 (CDID 3)
number of CDID using 'HT' but also uses 'CT' = 2 (CDID = 3 and 4 )

Its like set theory's intersection.
If we consider AV,BG,CT,HT,FF to be sets and CDID's as values for each set then we might represent the same as :
AV = {1,1,1,2,3}
BG = {2}
CT = {3,3,4}
HT = {2,2,3,4}
FF = {4}

so looking at the example given above :
Unique HT = 0
HT intersection(overlap) BG = 1
HT intersection(overlap) CT = 1 etc.

Unique FF = 0
FF intersection(overlap) AV = 0
FF intersection(overlap) BG = 0
FF intersection(overlap) CT = 1
FF intersection(overlap) HT = 1

Hope I could make myself clear.

Regards

Tom Kyte
July 11, 2004 - 1:41 pm UTC

number of unique ccid using ht is zero???? how so?

Sorry a Typo

Arindam, July 11, 2004 - 5:57 pm UTC

It should be 3. Number of UNIQUE CDID for HT = 3. I apologise.

regards

Getting there but need your comments!!

Arindam, July 12, 2004 - 10:47 am UTC

Tom,
I got a query working for my earlier problem. It goes like this:
select src,
max(av) AV,
round((max(av)/max(cnt))*100,2) perAV,
max(bg) BG,
round((max(bg)/max(cnt))*100,2) perBG,
max(ct) CT,
round((max(ct)/max(cnt))*100,2) perCT,
max(ht) HT,
round((max(ht)/max(cnt))*100,2) perHT,
max(ff) FF,
round((max(ff)/max(cnt))*100,2) perFF,
max(cnt) UNQ
from
(
select vt2.srcid src,sum(AV_FLAG) AV, sum(BG_FLAG) BG,sum(CT_FLAG) CT,sum(HT_FLAG) HT,sum(FF_FLAG) FF,null cnt
from
(
select CDID,SRCID,
max(decode(srcid,'AV',1,0)) AV_FLAG,
max(decode(srcid,'BG',1,0)) BG_FLAG,
max(decode(srcid,'CT',1,0)) CT_FLAG,
max(decode(srcid,'HT',1,0)) HT_FLAG,
max(decode(srcid,'FF',1,0)) FF_FLAG
from test2
group by CDID,SRCID
) VT1
,
(
select cdid,srcid
from test2
group by srcid,cdid
) VT2
where vt1.cdid = vt2.cdid
group by vt2.srcid
union
select srcid,null,null,null,null,null,count(srcid) cnt
from test2
group by srcid
)
group by src

perAV, perBG etc. give us the percentage of a src within an overlapping (intersecting) source. Could you comment on the same? Is there a neater way to write this query?

Regards

Tom Kyte
July 12, 2004 - 11:46 am UTC

it looks OK -- it is a hard question to answer -- and as far as I can see it requires at least 2 passes; a group by cube/rollup *might* be able to avoid that.

Thanks

Arindam, July 12, 2004 - 12:00 pm UTC

Can you give me an example of Group By cube with rollup using my example ?

Regards
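No worked example with this poster's data was given here, but for reference, CUBE generates every combination of the grouping columns in one pass, while ROLLUP generates only the hierarchical subset -- a generic illustration against the stock EMP table, not a solution to the overlap report:

```sql
select deptno, job, sum(sal) total_sal
  from emp
 group by cube(deptno, job)  -- groups: (deptno,job), (deptno), (job) and ()
/
```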

Rows to column

Yogesh, August 17, 2004 - 9:11 am UTC

I want to show the result as follows:

     Mon        Tue        Wed   ...  Sun
     c1 c2 c3   c1 c2 c3   c1 c2 c3   c1 c2 c3
abcd x  y  z
pppp x1 y1 z1
llll x2 y2 z2
uuuu x3 y3 z3

Kind of a cross-table pivot. Can we achieve this in a single query in Oracle 8.0?

Tom Kyte
August 17, 2004 - 9:57 am UTC

give it a whirl -- yes, you are just pivoting here, no different than the first example:


select whatever_that_first_column_is,
max( decode( to_char(dt_col,'dy'), 'Mon', c1 ) ),
max( decode( to_char(dt_col,'dy'), 'Mon', c2 ) ),
max( decode( to_char(dt_col,'dy'), 'Mon', c3 ) ),
...
max( decode( to_char(dt_col,'dy'), 'Fri', c1 ) ),
max( decode( to_char(dt_col,'dy'), 'Fri', c2 ) ),
max( decode( to_char(dt_col,'dy'), 'Fri', c3 ) )
from t
group by whatever_that_first_column_is

Dynamic Pivot Query

Nikhil, October 21, 2004 - 9:08 am UTC

I have a query as below:
SELECT
d.SERVICE_CENTER_ID,
d.PRODUCT_LINE_CD,
NVL(e.QUARTER_NUM,2) quarter,
NVL(e.APPLICABLE_YR,2004) YEAR,
MAX(DECODE(rn,1,SURVEY_QSTN_AVG_SCR_NUM)) Accomodating_Needs,
MAX(DECODE(rn,2,SURVEY_QSTN_AVG_SCR_NUM)) Returned_Request_Date,
MAX(DECODE(rn,3,SURVEY_QSTN_AVG_SCR_NUM)) Repair_Time,
MAX(DECODE(rn,4,SURVEY_QSTN_AVG_SCR_NUM)) Shown_All_Info,
MAX(DECODE(rn,5,SURVEY_QSTN_AVG_SCR_NUM)) Web_Site_Tools,
MAX(DECODE(rn,6,SURVEY_QSTN_AVG_SCR_NUM)) Web_Site_Tools_Usability,
MAX(DECODE(rn,7,SURVEY_QSTN_AVG_SCR_NUM)) Compared_Costs,
MAX(DECODE(rn,8,SURVEY_QSTN_AVG_SCR_NUM)) Invoice_Accuracy,
MAX(DECODE(rn,9,SURVEY_QSTN_AVG_SCR_NUM)) Ease_Invoice_Disputes,
MAX(DECODE(rn,10,SURVEY_QSTN_AVG_SCR_NUM)) Appearance_Center,
MAX(DECODE(rn,11,SURVEY_QSTN_AVG_SCR_NUM)) Work_Quality,
MAX(DECODE(rn,12,SURVEY_QSTN_AVG_SCR_NUM)) Professionalism,
MAX(DECODE(rn,13,SURVEY_QSTN_AVG_SCR_NUM)) Rental_Engine_Support
FROM COMMULATIVE_SRVY_QST_AVG_SCR e,SERVICE_CNTR_PROD_LINE_AUTH d,
(SELECT a.QUESTION_NUM,a.APPLICABLE_YR,a.QUARTER_NUM,
row_number() over (PARTITION BY a.SEQUENCE_NUM ORDER BY a.QUESTION_NUM) rn
FROM STAAR_SURVEY_ANSWER a,SURVEY_QUESTION_MASTER b
WHERE b.QUESTION_TYPE_ID=1
AND a.QUESTION_NUM=b.QUESTION_NUM AND a.SURVEY_ID=1
) x
WHERE e.APPLICABLE_YR(+)=2004
AND e.QUARTER_NUM(+)=2
AND e.QUESTION_NUM=x.QUESTION_NUM
AND e.SERVICE_CENTER_ID(+)=d.SERVICE_CENTER_ID
AND e.PRODUCT_LINE_CD(+)=d.PRODUCT_LINE_CD
AND d.SERVICE_CENTER_ID IN('162','14','128')
AND d.PRODUCT_LINE_CD IN ('TFE')
GROUP BY d.SERVICE_CENTER_ID,d.PRODUCT_LINE_CD,e.QUARTER_NUM,e.APPLICABLE_YR
ORDER BY 1,2,3,4

I want to make the pivot section of this query dynamic -- meaning that if I add one more question to the SURVEY_QUESTION_MASTER table, one more DECODE column should be appended to the query automatically.

Please help me at the earliest.

Nikhil from India

Tom Kyte
October 21, 2004 - 2:25 pm UTC

you have to do that in your program. cannot be done in a static sql query.

dynamic pivot

A reader, October 21, 2004 - 2:50 pm UTC

I couldn't get much from your response... Basically, I want my whole query to be dynamic, but with only the pivot section taken into consideration.

Nikhil

Tom Kyte
October 21, 2004 - 3:54 pm UTC

right -- and the ONLY way to get a "dynamic" query is -- well, to write code that dynamically builds a query and then executes it.
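[Editor's note: a sketch of the kind of code being described -- PL/SQL that reads the question list and concatenates one MAX(DECODE(...)) column per question into the statement text before executing it. The table and column names are taken from the query above; the FROM clause and the alias scheme are assumptions.]

-- Assumed sketch (not from the thread): build the pivot columns from
-- SURVEY_QUESTION_MASTER at runtime, then open a ref cursor on the
-- generated statement.
create or replace function build_pivot_query return sys_refcursor
as
    l_cur  sys_refcursor;
    l_stmt varchar2(32767) := 'select service_center_id';
begin
    for x in ( select row_number() over (order by question_num) rn,
                      question_num
                 from survey_question_master
                where question_type_id = 1 )
    loop
        l_stmt := l_stmt || ', max(decode(rn,' || x.rn ||
                  ',survey_qstn_avg_scr_num)) q_' || x.question_num;
    end loop;
    -- plug in the real FROM/WHERE clauses from the static query here
    l_stmt := l_stmt || ' from your_joined_source group by service_center_id';
    open l_cur for l_stmt;
    return l_cur;
end;
/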

Krishna, October 22, 2004 - 1:47 pm UTC

I have written a function to do this and it was published on OTN as a Code Tip of the Week in 2004. Check it out.

Tom Kyte
October 23, 2004 - 9:07 am UTC

(you should post the URL to it....)

A reader, October 25, 2004 - 3:08 pm UTC

Here is the URL: https://asktom.oracle.com/magazine-archive.htm

-Krishna


cols_as_rows code from your book

A reader, November 01, 2004 - 3:49 pm UTC

Hi Tom,
I used code from your book (Effective Oracle By Design) page 642. I get the error:
----
CNAME VAL
------------------------------ ----------
SCORE887 .0056
ERROR:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.DBMS_SYS_SQL", line 1149
ORA-06512: at "SYS.DBMS_SQL", line 339
ORA-06512: at "CVS.COLS_AS_ROWS", line 39
_______________
There are about 900 columns in the table.
Could you please help me for this.
Thanks

Tom Kyte
November 02, 2004 - 6:39 am UTC

what is line 39 in your copy -- mine isn't a line that could throw that -- so, list out what are lines 35..45 (and maybe it'll be obvious to you what the error/issue is..)

Sorry! Tom I figured it out.

A reader, November 01, 2004 - 4:01 pm UTC


Tom Kyte
November 02, 2004 - 6:39 am UTC

what was it?

COLS_AS_ROWS

Oren Nakdimon, November 20, 2004 - 9:42 am UTC

Tom, can you please post the code for COLS_AS_ROWS? Couldn't find it in ~tkyte.
Thanks.


Tom Kyte
November 20, 2004 - 10:03 am UTC

it is with effective Oracle by Design -- on the oraclepress.com website as part of that book. downloads for the book are all there.

Great! Thanks a lot!

Oren Nakdimon, November 20, 2004 - 10:57 am UTC


Another technique for converting rows to cols?

A reader, November 20, 2004 - 6:24 pm UTC

Hi Tom,

The solution you gave for the original problem, using GROUP BY, MAX, and DECODE, is quite interesting. It's not obvious -- I wouldn't have thought of it. How about:

create or replace view event_view1
as
select t_events.event,
t_events.descript,
t_events.logdate,
my_function(t_events.event, 'userid') userid,
my_function(t_events.event, 'company') company,
my_function(t_events.event, 'transaction') transaction
from t_events

And my_function(p_event, p_element) does a select from t_event_data with the predicate "where event = p_event and element = p_element".

How would the performance compare?

Best regards

Huy



Tom Kyte
November 20, 2004 - 6:45 pm UTC

I would never use PL/SQL where SQL can do it.

you'll find it 99.999999% of the time to be *a lot* slower.


but -- in your case above, you are missing an inline view, for you would need to group by event, descrip, logdate AND THEN call the function.
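[Editor's note: a sketch of the shape Tom means, assuming Huy's my_function exists as described -- collapse t_events to one row per (event, descript, logdate) first, and only then call the function on the grouped result, so each lookup runs once per group rather than once per detail row.]

-- Assumed sketch: the inline view does the grouping; the function
-- calls happen against its (already distinct) output rows.
create or replace view event_view1
as
select event, descript, logdate,
       my_function( event, 'userid' )      userid,
       my_function( event, 'company' )     company,
       my_function( event, 'transaction' ) transaction
  from ( select event, descript, logdate
           from t_events
          group by event, descript, logdate );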

Ravi

ravi, May 18, 2005 - 1:08 am UTC

Tom,
Thanks for great demonstration again.
can we use COALESCE to convert rows to columns like in SQL Server?
A piece of code that I use in SQL Server looks like this:

declare @name int
select @name = coalesce(@name + ',','') + ename from
emp ;
print @name

consider the normal emp table structure in scott.emp;

can I do the same thing in Oracle with this "coalesce"? If so, how?

- Thanks in advance
ravi


Tom Kyte
May 18, 2005 - 8:48 am UTC

search this site for stragg

they'll have a problem having named that function coalesce -- ANSI has a very different use for that name.
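[Editor's note: the stragg Tom points to is a user-defined aggregate he published on this site. On later releases (11gR2 and up), the built-in LISTAGG does the same job; a minimal sketch against the standard SCOTT.EMP table:]

-- 11gR2 and later: built-in string aggregation, no custom type needed.
select deptno,
       listagg(ename, ',') within group (order by ename) enames
  from scott.emp
 group by deptno;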

ravi, May 18, 2005 - 1:10 am UTC

sorry please read above as
declare @name varchar(2000)

Having Date column in the Group by clause

A reader, January 29, 2006 - 2:25 pm UTC

Hi Tom


Transforming row to columns

Anurag Dixit, September 13, 2006 - 2:48 am UTC

Hi Tom,
Thanks for this fantastic technique. I have a somewhat different table structure and am wondering how I can do this.
For example:
First table: master(Prop_ID, Desc, Date)
Prop_ID Desc Date
1 f_name xxxxxxx
2 L_name zzzzzz
3 email yyyyy


Second table: detail(cust_id, prop_id, value)
Cust_id Prop_id Value
1 1 prnab
1 2 desh
1 3 pd@xyz.com
2 1 anu
2 2 bharat
2 3 ab@pmn.com
3 1 lata
3 2 sharma
3 3 ls@trp.com

and now i want to develop report like this

Cust_id f_name l_name email
1 prnab desh pd@xyz.com
2 anu bharat ab@pmn.com



Tom Kyte
September 13, 2006 - 2:29 pm UTC

if you understand this technique, you would know you really don't have a "different structure"

write a query to join your two tables, pivot that query exactly the way I did in the original answer.

without table creates and inserts - no way I'm even going to attempt syntax for you - but - this isn't any different from the original problem statement!

Transforming row to columns

anurag dixit, September 15, 2006 - 6:06 am UTC

hi Tom,
I tried to build the query but did not succeed in getting the desired result. Please help me, as I need to develop the same report frequently. I have attached the create table and insert statements.

create table master(prop_id number,descp varchar2(50), cr_date date);

create table detail(cust_id number(10), prop_id number, value varchar2(100));

insert into master values(1,'F_name',sysdate);
insert into master values(2,'L_name',sysdate);
insert into master values(3,'E_mail',sysdate);
insert into detail values(1,1,'prnab');
insert into detail values(1,2,'desh');
insert into detail values(1,3,'pd@asd.com');
insert into detail values(2,1,'anu');
insert into detail values(2,2,'bharat');
insert into detail values(2,3,'ab@pnm.com');
insert into detail values(3,1,'lata');
insert into detail values(3,1,'sharma');
insert into detail values(3,1,'ls@rte.com');

and now i want to develop report like this

Cust_id f_name l_name email
1 prnab desh pd@xyz.com
2 anu bharat ab@pmn.com

I tried to build the query like :
select t1.event,
max(decode(tb.event,40,element,'NA'))f_name,
max(decode(tb.event,41,element,'NA'))l_name,
max(decode(tb.event,5,element,'NA'))e_mail
from t1_dtl tb , t1
where tb.event= t1.event and
rownum <=100
group by t1.event;

Thanks and regards.

Tom Kyte
September 15, 2006 - 7:05 am UTC

ops$tkyte%ORA9IR2> select cust_id,
  2         max(decode(prop_id,1,value)) fname,
  3         max(decode(prop_id,2,value)) lname,
  4         max(decode(prop_id,3,value)) email
  5    from detail
  6   group by cust_id;

   CUST_ID FNAME      LNAME      EMAIL
---------- ---------- ---------- ----------
         1 prnab      desh       pd@asd.com
         2 anu        bharat     ab@pnm.com
         3 sharma

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select cust_id,
  2         max(decode(descp,'F_name',value)) fname,
  3         max(decode(descp,'L_name',value)) lname,
  4         max(decode(descp,'E_mail',value)) email
  5    from detail, master
  6   where detail.prop_id = master.prop_id
  7   group by cust_id;

   CUST_ID FNAME      LNAME      EMAIL
---------- ---------- ---------- ----------
         1 prnab      desh       pd@asd.com
         2 anu        bharat     ab@pnm.com
         3 sharma

 

Columns to Rows / Rows to Columns: an abstract approach

Matthias Schulz, September 15, 2006 - 10:10 am UTC

Hello Tom,

thank you for your ideas on this topic.
I have allowed myself to use your ideas to create a generalized approach:

-- Rows to columns:
SELECT "1", "2", "3", "4", "5", "6"
FROM (SELECT MAX(decode(nr, 1, val, NULL)) AS "1",
MAX(decode(nr, 2, val, NULL)) AS "2",
MAX(decode(nr, 3, val, NULL)) AS "3",
MAX(decode(nr, 4, val, NULL)) AS "4",
MAX(decode(nr, 5, val, NULL)) AS "5",
MAX(decode(nr, 6, val, NULL)) AS "6"
FROM (SELECT val, ROW_NUMBER() over(ORDER BY ord) AS nr
FROM (/***** Your query goes here: *****/
SELECT username AS val, username AS ord
FROM all_users
/*********************************/)))
GROUP BY "1", "2", "3", "4", "5", "6";


-- Columns to rows:
select decode(col, 1,"1", 2,"2", 3,"3", 4,"4") AS "Table"
from (/***** Your query goes here: *****/
SELECT owner AS "1", table_name AS "2", status AS "3",
tablespace_name AS "4"
FROM ALL_TABLES
WHERE ROWNUM = 1
/*********************************/),
(select ROWNUM AS col from all_objects WHERE ROWNUM <= 4)


Best regards,
Matthias Schulz


Create SQL Oracle Pivot Column

alejandro diaz h, September 29, 2006 - 1:11 pm UTC

Hi, my problem is: I have this SQL
SELECT
sdan8,
sdsrp1,
sdsrp2,
sditm,
sdmcu,
sum(sduorg)
FROM testdta.f4101,
(select *
from testdta.f4211
where sdtrdj between (select wfdtfr from F570001) and
(select wfdtto from F570001)
union all
select *
from testdta.f42119
where sdtrdj between (select wfdtfr from F570001) and
(select wfdtto from F570001)
)
WHERE
sditm=imitm
GROUP BY
sdan8, sdsrp1, sdsrp2, sditm, sdmcu


result
sdan8 sdsrp1 sdsrp2 sditm sdmcu sum(sduorg)
0 01 130 806821 A 1
0 01 130 806822 B 2
0 01 130 806824 A 4
0 01 130 806824 C 4

I need to generate this output
sdan8 sdsrp1 sdspr2 sditm sdmcuA sdmcuB sdmcuC.... sdmcn
0 01 130 806821 1 0 0 0
0 01 130 806822 0 1 0 0
0 01 130 806824 4 0 4 0


thanks (oracle 9i)

Tom Kyte
September 30, 2006 - 7:51 am UTC

you do it exactly like I showed above??? with max(decode())



Which one is better - rows or columns

AKumar, February 14, 2007 - 11:31 pm UTC

Hi Tom,

We have a situation which demands that the count of rows processed by some interfaces be recorded and reported using XML.

The autosys jobs run every hour to process data using the interface. This new interface is going to count the rows processed and record the count.

Is it a good idea to have columns like hr1_count, hr2_count ... hr24_count, or is it recommended to use rows instead, with a date/time stamp from which the hour of processing can be calculated and the record counts will provide the result?

My experience tells me it should be columns, as millions of rows will be processed by the interfaces every hour. The first option records only one row with the count for each interface run on an hourly basis, while the second option pushes the same number of rows (millions) into this new table unnecessarily - please do suggest!

Thanks,
AKumar
Tom Kyte
February 15, 2007 - 11:39 am UTC

rows

AKumar, February 15, 2007 - 10:58 pm UTC

Hi Tom, thanks for replying...

The argument is as follows:
1. If rows, the disadvantages are
a) we are unnecessarily piling up data and consuming database storage
b) summing all those rows to find the count will also impact query processing
2. If we store these as columns, then we can get all this info in one row only - see the impact on query performance. Basically I am talking about 1 row v/s millions of rows - query processing WILL BE FASTER

In today's world storage is not a big issue, but performance is. With option 1, we are hitting none of these issues, and with option 2, we are hitting both on the head. Please counter...
Tom Kyte
February 16, 2007 - 1:56 pm UTC

1a) false, disagree
1b) false, disagree

look - you know what you want to do, you won't talk me out of storing it properly.

for you see, as soon as you make it hourly, someone will say "let's do half hours" - or whatever.

but you know what you already wanted to do....

AKumar, February 17, 2007 - 12:22 am UTC

Hi Tom,

I agree with you on "as soon as you make it hourly, someone will say 'let's do half hours' - or whatever" - but the requirement is clear - and the users might not be interested in knowing how much data flows every half hour/minute -- it's useless to them. Let's assume that they want it every hour...

I am sorry if I am overstretching this conversation, but some results do not seem to agree with your view.

I have created two tables
1. xintf_count (INTF_NAME, DATE_TIME, PROCESS_TYPE) with 11100 rows AND
2. xintf_count_in_col (INTF_NAME, hr1_count, hr2_count...hr24_count) with 2 rows having all the information as above

(Using 10g) The explain plan for both the queries are as follows:
To access #1,
"select intf_name, to_char(date_time,'hh24'), count(1)
from xintf_count
group by intf_name, to_char(date_time,'hh24') order by 1,2"

Operation Options Object Rows Time Cost Bytes Filter Predicates * Access Predicates
SELECT STATEMENT 1 114 9,477 61
SORT GROUP BY 1 114 9,477 61
TABLE ACCESS FULL XINTF_COUNT 1 114 9,476 61

-- and this for #2,
"select * from xintf_count_in_col order by 1"

Operation Options Object Rows Time Cost Bytes Filter Predicates * Access Predicates
SELECT STATEMENT 2 1 4 728
SORT ORDER BY 2 1 4 728
TABLE ACCESS FULL XINTF_COUNT 2 1 3 728

-- I am thinking of fetch time - inserts will be done simply once but the fetch will be more time consuming...
Your thoughts please...
Tom Kyte
February 18, 2007 - 9:39 am UTC

you will not change my mind, you want to do it your way - cool. But I'm going to say rows.


AKumar, February 18, 2007 - 10:49 pm UTC

Hi Tom,

Yes, the design is not generic - it's different. And since this design is robust, there is no counter-argument to it - but the way we have always done designs makes us feel comfortable going the conventional way.

I have been working for 7 years as a designer on various systems, and I do not agree with people who design the conventional way without thinking about the implications that follow on the production system. Developers will code the way the system is designed and will obviously scream that coding takes longer the non-conventional way. It's a fact that developers think little about the implications their code imposes on the database.

There will be a little more coding with this, but eventually it will produce the same output faster, and that's the intent here.

Anyways, thanks a lot Tom.
Tom Kyte
February 19, 2007 - 9:24 am UTC

hahaha, there is no counter argument to it

except experience that shows this will change. It always does.


AKumar, February 19, 2007 - 11:05 pm UTC

Hi Tom,

There is a saying in India (in Hindi):

"Jo Boya - so Paya" - which means "get results the way you reap".

Unfortunately, the scope of software programming is limited within a boundary. We are just silent workers ;)

- And we give them what they ask for.

Thanks!
Tom Kyte
February 20, 2007 - 9:30 am UTC

... And we give them what they ask for. ...

there is a saying in English...

garbage in
garbage out

;)


Different sort of transpose??

Rajarshi, March 17, 2007 - 9:22 am UTC

Hi Tom,

I have the following as my base data:

SQL> CREATE TABLE tab1 (col1 NUMBER, col2 NUMBER, col3 NUMBER, Yr NUMBER)
2 /

Table created

SQL> INSERT INTO tab1 VALUES (1, 2, 3, 2004)
2 /

1 row inserted

SQL> INSERT INTO tab1 VALUES (4, 5, 6, 2005)
2 /

1 row inserted

SQL> INSERT INTO tab1 VALUES (7, 8, 9, 2006)
2 /

1 row inserted

SQL> INSERT INTO tab1 VALUES (10, 11, 12, 2007)
2 /

1 row inserted

SQL> COMMIT
2 /

Commit complete

SQL> SELECT * FROM tab1
2 /

COL1 COL2 COL3 YR
---------- ---------- ---------- ----------
1 2 3 2004
4 5 6 2005
7 8 9 2006
10 11 12 2007

SQL>

Now, my requirement is to represent the data in a tabular format such as this:

ATTR 2004 2005 2006 2007
---------- ---------- ---------- ---------- ----------
COL1 1 4 7 10
COL2 2 5 8 11
COL3 3 6 9 12

The number of 'year columns' may grow - however, more than 40 years of data is not expected. There is a fixed number of COLs (around 50).

Please help.

Also, the asktom site has always been my favourite one for problem-solving and for knowledge building. Thanks a lot!

Best Regards,
Tom Kyte
March 17, 2007 - 5:29 pm UTC

ops$tkyte%ORA10GR2> with data
  2  as
  3  (select level l from dual connect by level <= 3)
  4  select attr,
  5         max( decode( yr, 2004, val ) ) "2004",
  6         max( decode( yr, 2005, val ) ) "2005",
  7         max( decode( yr, 2006, val ) ) "2006",
  8         max( decode( yr, 2007, val ) ) "2007"
  9    from (
 10  select decode( l, 1, 'COL1', 2, 'COL2', 3, 'COL3' ) attr,
 11         decode( l, 1, col1,   2,  col2,  3,  col3  ) val,
 12             yr
 13    from tab1, data
 14         )
 15   group by attr
 16   order by attr
 17  /

ATTR       2004       2005       2006       2007
---- ---------- ---------- ---------- ----------
COL1          1          4          7         10
COL2          2          5          8         11
COL3          3          6          9         12


Rob K, April 04, 2007 - 12:06 pm UTC

I was wondering if I could get your thoughts on the following.

I have the following table:

Table:rfk_test

user_input_inst_id  input_parm_type_id  user_input_parm_value
1                   MODELINTID          212
1                   MAKEINTID           23
1                   MODELINTID          331
1                   MAKEINTID           36
1                   ZIPCODE             92310
2                   MAKEINTID           23
2                   MAKEINTID           36
2                   MODELINTID          5767
2                   ZIPCODE             48152

Transform rows to columns for every combination:

Output>
user_input_inst_id  makeintid  modelintid  zipcode
1                   36         331         92310
1                   23         331         92310
1                   36         212         92310
1                   23         212         92310
2                   36         5767        48152
2                   23         5767        48152


I am looking for the most efficient way.

Our production table will have 16 different input_parm_types and source table (i.e. rfk_test) will contain approximately 20 million rows.

Currently we are testing with the following approach:

SELECT a.user_input_inst_id,
b.user_input_parm_value AS make_id,
c.user_input_parm_value AS model_id,
d.user_input_parm_value AS zip_code
FROM (SELECT DISTINCT user_input_inst_id
FROM rfk_test) a,
(SELECT DISTINCT user_input_inst_id,
user_input_parm_value
FROM rfk_test
WHERE UPPER (input_parm_type_id) = 'MAKEINTID') b,
(SELECT DISTINCT user_input_inst_id,
user_input_parm_value
FROM rfk_test
WHERE UPPER (input_parm_type_id) = 'MODELINTID') c,
(SELECT DISTINCT user_input_inst_id,
user_input_parm_value
FROM rfk_test
WHERE UPPER (input_parm_type_id) = 'ZIPCODE') d
WHERE a.user_input_inst_id = b.user_input_inst_id(+)
AND a.user_input_inst_id = c.user_input_inst_id(+)
AND a.user_input_inst_id = d.user_input_inst_id(+)

Thanks

Tom Kyte
April 04, 2007 - 12:20 pm UTC

no create
no inserts
no look


and no, the outer join isn't the right way to pivot - search for the word pivot on this site to see the max(decode()) 'trick' to doing this efficiently.

Rob K, April 04, 2007 - 2:28 pm UTC

I was wondering if I could get your thoughts on the following.

I have the following table:

CREATE TABLE rfk_test
    (user_input_inst_id             NUMBER(38,0),
    input_parm_type_id             VARCHAR2(250),
    user_input_parm_value          VARCHAR2(250))

/

INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (1,'MODELINTID',212);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (1,'MAKEINTID',23);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (1,'MODELINTID',331);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (1,'MAKEINTID',36);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (1,'ZIPCODE',92310);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (2,'MAKEINTID',23);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (2,'MAKEINTID',36);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (2,'MODELINTID',5767);
INSERT INTO rfk_test(user_input_inst_id, input_parm_type_id, user_input_parm_value) 
values (2,'ZIPCODE',48152);




Transform rows to columns for every combination:

Output>
user_input_inst_id makeintid modelintid zipcode
1 36 331 92310
1 23 331 92310
1 36 212 92310
1 23 212 92310
2 36 5767 48152
2 23 5767 48152


I am looking for the most efficient way.

Our production table will have 16 different input_parm_types and source table (i.e. rfk_test) will contain approximately 20 million rows.

Currently we are testing with the following approach:

SELECT a.user_input_inst_id,
    b.user_input_parm_value AS make_id,
    c.user_input_parm_value AS model_id,
    d.user_input_parm_value AS zip_code
FROM 
(SELECT DISTINCT user_input_inst_id FROM rfk_test) a,
(SELECT DISTINCT user_input_inst_id,
                 user_input_parm_value
   FROM rfk_test
  WHERE UPPER (input_parm_type_id) = 'MAKEINTID') b,
(SELECT DISTINCT user_input_inst_id,
                 user_input_parm_value
   FROM rfk_test
  WHERE UPPER (input_parm_type_id) = 'MODELINTID') c,
(SELECT DISTINCT user_input_inst_id,
        user_input_parm_value
   FROM rfk_test
  WHERE UPPER (input_parm_type_id) = 'ZIPCODE') d
WHERE a.user_input_inst_id = b.user_input_inst_id(+)
  AND a.user_input_inst_id = c.user_input_inst_id(+)
  AND a.user_input_inst_id = d.user_input_inst_id(+)


Thanks

Tom Kyte
April 04, 2007 - 3:55 pm UTC

I despise these models.



ops$tkyte%ORA10GR2> select user_input_inst_id,
  2         max( decode( input_parm_type_id, 'MODELINTID', user_input_parm_value ) ) MODELINTID,
  3         max( decode( input_parm_type_id, 'MAKEINTID', user_input_parm_value ) ) MAKEINTID,
  4         max( decode( input_parm_type_id, 'ZIPCODE', user_input_parm_value ) ) ZIPCODE
  5    from rfk_test
  6   group by user_input_inst_id
  7  /

USER_INPUT_INST_ID MODELINTID               MAKEINTID                ZIPCODE
------------------ ------------------------ ------------------------ ------------------------
                 1 331                      36                       92310
                 2 5767                     36                       48152


Rob K, April 05, 2007 - 9:32 am UTC

Your solution does not give me the results I am looking for. I need to return every combination for each user_input_inst_id.

The result set should look like:

user_input_inst_id makeintid modelintid zipcode
1                  36        331        92310
1                  23        331        92310
1                  36        212        92310
1                  23        212        92310
2                  36        5767       48152
2                  23        5767       48152 


There are multiple makeintid and modelintid for a given user_input_inst_id.

Are you referring to how the source data is modeled?

Thanks
Tom Kyte
April 05, 2007 - 11:30 am UTC

yes, I'm referring to the 'model' (if we can call it such)

you'll need the three-way cartesian join, as you are doing, to get every combination like that. Likely you need FULL OUTER joins, not just outer joins - if you want to skip that 4th full scan (to produce your "A" table)

AKumar, June 08, 2007 - 6:56 am UTC

Hi Tom,

You remember "Garbage in - Garbage out" ?

Well, the situation has finally arrived. The user now wants the count of interface records processed every 30 minutes.

As anybody might have perceived, the design was not that bad and was skillfully handled! Any idea how I managed to report the count every 30 minutes, with the design recording one-hour counts? That's easy.

Thanks, AKumar
Tom Kyte
June 09, 2007 - 10:28 am UTC

not following you here...

A twist on "rows to columns"

David W., June 10, 2007 - 1:42 am UTC

Tom,

I'm running Oracle 10.2.0.3 Standard Edition.

I have a bit of a twist on the "rows to columns" that I cannot figure out.
I've got a time entry table with rows containing employee time by project.

Here is a simple example (we also have a time_id column fed by a sequence, which I'm omitting as it does not seem relevant)

    create table time_demo (
        emp_id    number,
        proj_id   number,
        work_date date,
        hours     number(5,2)
    )

    And some data:

        insert into time_demo values (1,1,to_date('4/2/2007','MM/DD/YYYY'),4.25);
        insert into time_demo values (1,1,to_date('4/2/2007','MM/DD/YYYY'),1.50);
        insert into time_demo values (1,1,to_date('4/2/2007','MM/DD/YYYY'),0.75);
        insert into time_demo values (1,1,to_date('4/3/2007','MM/DD/YYYY'),7.25);
        insert into time_demo values (1,1,to_date('4/4/2007','MM/DD/YYYY'),6.50);
        insert into time_demo values (1,1,to_date('4/5/2007','MM/DD/YYYY'),4.25);
        insert into time_demo values (1,1,to_date('4/6/2007','MM/DD/YYYY'),7.75);

    Now, I use the decode method to transform rows to columns to get a report by week:

    select emp_id, proj_id,
       decode(to_char(work_date,'D'),1,hours,null) sun,
       decode(to_char(work_date,'D'),2,hours,null) mon,
       decode(to_char(work_date,'D'),3,hours,null) tue,
       decode(to_char(work_date,'D'),4,hours,null) wed,
       decode(to_char(work_date,'D'),5,hours,null) thu,
       decode(to_char(work_date,'D'),6,hours,null) fri,
       decode(to_char(work_date,'D'),7,hours,null) sat
    from   time_demo

    I am intentionally leaving out the "sum()" and "group by" so 
    that I can see each entry.  I get this:

         EMP_ID PROJ_ID    SUN    MON    TUE    WED    THU    FRI    SAT
        ------- ------- ------ ------ ------ ------ ------ ------ ------
              1       1          4.25
              1       1          1.50
              1       1           .75
              1       1                 7.25
              1       1                        6.50
              1       1                               4.25
              1       1                                      7.75

        7 rows selected.
        SQL>                                     

    What really I want for output is this:

        EMP_ID PROJ_ID    SUN    MON    TUE    WED    THU    FRI    SAT
        ------- ------- ------ ------ ------ ------ ------ ------ ------
              1       1          4.25   7.25   6.50   4.25   7.75  
              1       1          1.50
              1       1           .75

In other words, consolidate the entries into as few rows as possible, but keep all individual entries. I don't even care what order the entries for Monday are in, just that I see all of them. The other columns in rows two and three should return NULL's.

In case you are wondering why I want to do this: I have an Application Express app and I want to see all individual entries displayed, but save screen real estate to fit as many rows as possible.

My thought is to write a pipelined function to return the rows the way that I want them, but it really seems like there should be a "SQL way" to do this.

Any help would be appreciated.

Thanks,
David


My Frankenstein (for David)

Tyler, June 10, 2007 - 11:36 am UTC

First thing I'll say is I wouldn't use this in real code :)

The solution leaves out project (easy enough to add in).

I'm certain it can be refined (or done better-er outright), but this gives the answer...


--wanted a second employee to test against
insert into time_demo values (2,1,to_date('4/1/2007','MM/DD/YYYY'),4.3);
insert into time_demo values (2,1,to_date('4/3/2007','MM/DD/YYYY'),6.9);


WITH step_one AS
(
SELECT
emp_id,
TO_CHAR(work_date,'D') AS day,
hours
FROM time_demo
ORDER BY emp_id, TO_CHAR(work_date,'D'), hours
),
step_two AS
(
SELECT
emp_id,
day,
hours,
ROW_NUMBER() OVER (PARTITION BY emp_id, day ORDER BY 1) AS rownumber
FROM STEP_ONE
),
DRIVER AS
(
SELECT
DISTINCT
emp_id,
rownumber
FROM step_two
)
SELECT
d.emp_id,
s1.hours AS sun,
s2.hours AS mon,
s3.hours AS tue,
s4.hours AS wed,
s5.hours AS thu,
s6.hours AS fri,
s7.hours AS sat
FROM
DRIVER d,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 1 AND s.emp_id = d.emp_id
) S1,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 2 AND s.emp_id = d.emp_id
) S2,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 3 AND s.emp_id = d.emp_id
) S3,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 4 AND s.emp_id = d.emp_id
) S4,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 5 AND s.emp_id = d.emp_id
) S5,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 6 AND s.emp_id = d.emp_id
) S6,
(
SELECT s.rownumber, s.hours, s.emp_id FROM STEP_TWO S, DRIVER D
WHERE S.rownumber = d.rownumber AND s.DAY = 7 AND s.emp_id = d.emp_id
) S7
WHERE d.rownumber = s1.rownumber (+) AND d.emp_id = s1.emp_id (+)
AND d.rownumber = s2.rownumber (+) AND d.emp_id = s2.emp_id (+)
AND d.rownumber = s3.rownumber (+) AND d.emp_id = s3.emp_id (+)
AND d.rownumber = s4.rownumber (+) AND d.emp_id = s4.emp_id (+)
AND d.rownumber = s5.rownumber (+) AND d.emp_id = s5.emp_id (+)
AND d.rownumber = s6.rownumber (+) AND d.emp_id = s6.emp_id (+)
AND d.rownumber = s7.rownumber (+) AND d.emp_id = s7.emp_id (+);





RE: A twist on "rows to columns"

Stew Ashton, June 11, 2007 - 8:18 am UTC


Something simpler without self joins (Tom, sorry, when I saw the other solution I couldn't resist...)

WITH step1 AS (
SELECT emp_id, proj_id, to_char(work_date, 'D') AS daynum, HOURS FROM time_demo
), step2 AS (
SELECT step1.*,
row_number() over(PARTITION BY emp_id, proj_id, daynum ORDER BY daynum) AS rn
FROM step1
)
SELECT emp_id, proj_id, rn, 
MAX(decode(daynum, 1, HOURS, NULL)) AS sun, 
MAX(decode(daynum, 2, HOURS, NULL)) AS mon, 
MAX(decode(daynum, 3, HOURS, NULL)) AS tue, 
MAX(decode(daynum, 4, HOURS, NULL)) AS wed, 
MAX(decode(daynum, 5, HOURS, NULL)) AS thu, 
MAX(decode(daynum, 6, HOURS, NULL)) AS fri, 
MAX(decode(daynum, 7, HOURS, NULL)) AS sat 
FROM step2 
GROUP BY emp_id, proj_id, rn 
ORDER BY emp_id, proj_id, rn;

EMP_ID  PROJ_ID RN  SUN   MON   TUE  WED   THU   FRI   SAT
------- ------- --- ----- ----- ---- ----- ----- ----- ----
1       1       1         4,25  7,25  6,5  4,25  7,75
1       1       2         1,5
1       1       3         0,75

3 rows selected

Thanks

David W., June 11, 2007 - 8:45 am UTC

Stew & Tyler, thanks for your solutions. I can see that I need to learn more about the "WITH" syntax. This is exactly what I needed.

David

breaking a row into sets of columns

Jayanth, July 03, 2007 - 1:08 pm UTC

Tom,
I have a requirement to transform row data into columns, where each row is actually a set of columns from the original table. The following are the create and insert statements.
create table t1 (id varchar2(1), fname1 varchar2(10), lname1 varchar2(10), id1 varchar2(10), fname2 varchar2(10), lname2 varchar2(10), id2 varchar2(10), fname3 varchar2(10), lname3 varchar2(10), id3 varchar2(10));
  
  insert into t1 values ('x', 'first1', 'last1', 'num1', 'first2', 'last2', 'num2', 'first3', 'last3', 'num3');
  
  I need the output as follows:
  
  id    col1   col2   col3
  --------------------------------
  x     first1  last1   num1
  x     first2  last2   num2
  x     first3  last3   num3


I tried the following query (from one of your previous answers, but don't seem to quite get it).

select decode(r, 1, fname1, 2, lname1, 3, id1),
       decode(r, 1, fname2, 2, lname2, 3, id2),
    decode(r, 1, fname3, 2, lname3, 3, id3)
  from (select * from t1), (select 1 r from dual union all select 2 r from dual union all select 3 r from dual)


Can you point out the right approach/query please.

Thanks a ton!
Tom Kyte
July 05, 2007 - 8:44 am UTC

select
decode( r, 1, fname1, 2, fname2, 3, fname3 ) fname,
decode( r, 1, lname1, 2, lname2, 3, lname3 ) lname,
.......



think about it - you wanted either fname1, 2, or 3 depending on R, not fname1 OR lname1 OR id1 - but rather fname1, 2 OR 3.
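For readers following along, here is Tom's completed hint run end to end. The cross join with a three-row generator plus a per-R conditional is portable SQL, so it can be sanity-checked outside Oracle; this is a minimal sketch against SQLite via Python (CASE stands in for Oracle's decode; the table and data are from the question above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id TEXT, fname1 TEXT, lname1 TEXT, id1 TEXT,
                 fname2 TEXT, lname2 TEXT, id2 TEXT,
                 fname3 TEXT, lname3 TEXT, id3 TEXT);
INSERT INTO t1 VALUES ('x', 'first1', 'last1', 'num1',
                       'first2', 'last2', 'num2',
                       'first3', 'last3', 'num3');
""")

# Cross join t1 with a 3-row generator; each CASE picks the
# r-th column of its GROUP (fname, lname, id), not mixed columns.
rows = conn.execute("""
SELECT t1.id,
       CASE r WHEN 1 THEN fname1 WHEN 2 THEN fname2 ELSE fname3 END AS col1,
       CASE r WHEN 1 THEN lname1 WHEN 2 THEN lname2 ELSE lname3 END AS col2,
       CASE r WHEN 1 THEN id1    WHEN 2 THEN id2    ELSE id3    END AS col3
  FROM t1,
       (SELECT 1 AS r UNION ALL SELECT 2 UNION ALL SELECT 3)
 ORDER BY r
""").fetchall()
for row in rows:
    print(row)
```

The output matches the requested three rows: (x, first1, last1, num1), (x, first2, last2, num2), (x, first3, last3, num3).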

breaking a row into sets of columns - thank you

Jayanth, July 05, 2007 - 9:41 am UTC

That hint worked! I guess it was staring me in the face, but I didn't realize what I was doing until you pointed it out!

Thank you so much!

Concatenate rows into previous row.

A reader, August 23, 2007 - 6:29 pm UTC

Hi Tom, Below is what I am trying to achieve.

I would appreciate any help.

Thanks in advance.

create table t13(rn number, rec_type varchar2(20));


insert into t13 values(1, '01');
insert into t13 values(2, '02');
insert into t13 values(3, '03');
insert into t13 values(4, '88');
insert into t13 values(5, '88');
insert into t13 values(6, '88');
insert into t13 values(7, '88');
insert into t13 values(8, '16');
insert into t13 values(9, '88');
insert into t13 values(10, '88');
insert into t13 values(11, '88');
insert into t13 values(12, '16');
insert into t13 values(13, '88');
insert into t13 values(14, '88');
insert into t13 values(15, '88');
insert into t13 values(16, '16');
insert into t13 values(17, '88');
insert into t13 values(18, '49');
insert into t13 values(19, '98');
insert into t13 values(20, '99');




select * From t13;

RN RE
---- --
1 01
2 02
3 03
4 88
5 88
6 88
7 88
8 16
9 88
10 88
11 88

RN RE
---- --
12 16
13 88
14 88
15 88
16 16
17 88
18 49
19 98
20 99


What I am trying to do is append each run of 88s to the record just before the run starts.

result will look like


RN RE
---- --
1 01
2 02
3 03-88-88-88-88
8 16-88-88-88
12 16-88-88-88
16 16-88
18 49
19 98
20 99





Tom Kyte
August 24, 2007 - 2:08 pm UTC

ops$tkyte%ORA10GR2> select max( case when rec_type<>'88' then rn end ) rn,
  2         max( case when rec_type<>'88' then rec_type end ) ||
  3             replace( rpad( '*', count(*)-1, '*' ), '*', '-88' ) rec_type
  4    from (
  5  select rn, rec_type, max(grp) over (order by rn) maxgrp
  6    from (
  7  select rn, rec_type,
  8         case when rec_type <> '88'
  9                  then row_number() over (order by rn)
 10                  end grp
 11    from t13
 12         )
 13         )
 14   group by maxgrp
 15   order by rn
 16  /

        RN REC_TYPE
---------- --------------------
         1 01
         2 02
         3 03-88-88-88-88
         8 16-88-88-88
        12 16-88-88-88
        16 16-88
        18 49
        19 98
        20 99

9 rows selected.


is one approach.
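To see why the SQL works: the inner query tags each non-88 row with a row_number, the middle query carries that tag forward over the 88s with MAX() OVER (ORDER BY rn), and the outer query groups on the tag and concatenates. The same carry-forward-and-group idea restated in plain Python, using the data from the inserts above (a sketch for understanding the logic, not a replacement for the SQL):

```python
rows = [(1, '01'), (2, '02'), (3, '03'), (4, '88'), (5, '88'),
        (6, '88'), (7, '88'), (8, '16'), (9, '88'), (10, '88'),
        (11, '88'), (12, '16'), (13, '88'), (14, '88'), (15, '88'),
        (16, '16'), (17, '88'), (18, '49'), (19, '98'), (20, '99')]

out = []
for rn, rec_type in rows:
    if rec_type != '88':
        # A non-88 row starts a new group (the row_number()/max() step).
        out.append((rn, rec_type))
    else:
        # An 88 row is appended to the current group's string.
        last_rn, last_type = out[-1]
        out[-1] = (last_rn, last_type + '-88')

for rn, rec_type in out:
    print(rn, rec_type)
```

This reproduces the 9-row result shown above, e.g. (3, '03-88-88-88-88') and (16, '16-88').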

Brilliant answer

A reader, August 25, 2007 - 2:37 am UTC

It's amazing as always.

Thanks very much Tom

ROW TO COLUMN

Roopa, May 01, 2008 - 2:31 am UTC

I have the following as my base data: 

SQL> CREATE TABLE tab1 (Yr NUMBER,AMOUNT NUMBER) 
2 / 

Table created 

SQL> INSERT INTO tab1 VALUES (01.12.2006,5395) 
2 / 

1 row inserted 

SQL> INSERT INTO tab1 VALUES (01.02.2007,5395) 
2 / 

1 row inserted 

SQL> INSERT INTO tab1 VALUES (01.11.2006,5395) 
2 / 

1 row inserted 

SQL> COMMIT 
2 / 

Commit complete 

SQL> SELECT * FROM tab1 
2 / 

   Yr     AMOUNT
---------- ---------- 
01.12.2006   5395     
01.02.2007   5395     
01.11.2006   5395     
    

SQL> 

Now, my requirement is to represent the data in a tabular format such as this: 

MONTH         JAN  FEB MAR APR MAY JUN JUL AUG SEP OCT  NOV  DEC
01-DEC-2006   0    0   0   0   0   0   0   0   0   0    0   5395
01-FEB-2007   0   5395 0   0   0   0   0   0   0   0    0    0
01-NOV-2006   0    0   0   0   0   0   0   0   0   0   5395  0


Tom Kyte
May 01, 2008 - 9:45 am UTC

get serious please.

ops$tkyte%ORA10GR2> CREATE TABLE tab1 (Yr NUMBER,AMOUNT NUMBER)
  2  /

Table created.

ops$tkyte%ORA10GR2> INSERT INTO tab1 VALUES (01.12.2006,5395)
  2  /
INSERT INTO tab1 VALUES (01.12.2006,5395)
                              *
ERROR at line 1:
ORA-00917: missing comma


post this in only ONE PLACE please as well - not here there and everywhere, I'll just start deleting everything if you do.

stay over here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1460004570029#821502100346553246

row to column conversion

Roopa, May 02, 2008 - 12:46 am UTC

hi tom
I am extremely sorry for what I did. I was stressed because I couldn't get the result I wanted, and I posted the question in the wrong place. Please excuse me and help me solve the problem.

sql>CREATE TABLE tab1 (Yr date,AMOUNT NUMBER)
table created

sql>INSERT INTO tab1 VALUES ('01-dec-2006',5395)
/
1 row created.

sql>INSERT INTO tab1 VALUES ('01-feb-2007',5395)
/
1 row created.

sql>INSERT INTO tab1 VALUES ('01-nov-2006',5395)
/
1 row created.

sql>COMMIT
/
Commit complete.

sql>SELECT * FROM tab1
/


YR AMOUNT
--------- --------
01-DEC-06 5395
01-FEB-07 5395
01-NOV-06 5395

sql>

Now, my requirement is to represent the data in a tabular format such as this:

MONTH         JAN  FEB MAR APR MAY JUN JUL AUG SEP OCT  NOV  DEC
01-DEC-2006   0    0   0   0   0   0   0   0   0   0    0   5395
01-FEB-2007   0   5395 0   0   0   0   0   0   0   0    0    0
01-NOV-2006   0    0   0   0   0   0   0   0   0   0   5395  0

Please look into that, Tom, and help me. Once again, I am sorry.
thanking you Tom.







Tom Kyte
May 02, 2008 - 7:11 am UTC

you know, you still - sigh - haven't addressed my questions - that is quite simply "frustrating"

this is a trivial simple pivot - but you've left out lots of bits of information here - like, can there be multiple rows/month? I presume so, because earlier 'examples' seem to have shown it. Why would you use the same number over and over in the example - just to make verification of results really really hard?

your column name YR - please - you do not have yearly data at all.

Confusion will be my epitaph - King Crimson


Here you go, a pair of probably the least useful queries I've ever written - I don't understand why anyone would want an output line that ALWAYS contains 11 zeros - seems absolutely silly:


ops$tkyte%ORA10GR2> select yr,
  2         decode( to_char(yr,'mm'), '01', amount,0 ) jan,
  3         decode( to_char(yr,'mm'), '02', amount,0 ) feb,
  4         decode( to_char(yr,'mm'), '03', amount,0 ) mar,
  5         decode( to_char(yr,'mm'), '04', amount,0 ) apr,
  6         decode( to_char(yr,'mm'), '05', amount,0 ) may,
  7         decode( to_char(yr,'mm'), '06', amount,0 ) jun,
  8         decode( to_char(yr,'mm'), '07', amount,0 ) jul,
  9         decode( to_char(yr,'mm'), '08', amount,0 ) aug,
 10         decode( to_char(yr,'mm'), '09', amount,0 ) sep,
 11         decode( to_char(yr,'mm'), '10', amount,0 ) oct,
 12         decode( to_char(yr,'mm'), '11', amount,0 ) nov,
 13         decode( to_char(yr,'mm'), '12', amount,0 ) dec
 14    from tab1
 15   order by yr
 16  /

YR          JAN   FEB   MAR   APR   MAY   JUN   JUL   AUG   SEP   OCT   NOV   DEC
--------- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
01-NOV-06     0     0     0     0     0     0     0     0     0     0  5395     0
01-DEC-06     0     0     0     0     0     0     0     0     0     0     0  5395
01-FEB-07     0  5395     0     0     0     0     0     0     0     0     0     0

ops$tkyte%ORA10GR2> select yr,
  2         sum(decode( to_char(yr,'mm'), '01', amount,0 )) jan,
  3         sum(decode( to_char(yr,'mm'), '02', amount,0 )) feb,
  4         sum(decode( to_char(yr,'mm'), '03', amount,0 )) mar,
  5         sum(decode( to_char(yr,'mm'), '04', amount,0 )) apr,
  6         sum(decode( to_char(yr,'mm'), '05', amount,0 )) may,
  7         sum(decode( to_char(yr,'mm'), '06', amount,0 )) jun,
  8         sum(decode( to_char(yr,'mm'), '07', amount,0 )) jul,
  9         sum(decode( to_char(yr,'mm'), '08', amount,0 )) aug,
 10         sum(decode( to_char(yr,'mm'), '09', amount,0 )) sep,
 11         sum(decode( to_char(yr,'mm'), '10', amount,0 )) oct,
 12         sum(decode( to_char(yr,'mm'), '11', amount,0 )) nov,
 13         sum(decode( to_char(yr,'mm'), '12', amount,0 )) dec
 14    from tab1
 15   group by yr
 16   order by yr
 17  /

YR          JAN   FEB   MAR   APR   MAY   JUN   JUL   AUG   SEP   OCT   NOV   DEC
--------- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
01-NOV-06     0     0     0     0     0     0     0     0     0     0  5395     0
01-DEC-06     0     0     0     0     0     0     0     0     0     0     0  5395
01-FEB-07     0  5395     0     0     0     0     0     0     0     0     0     0

To Roopa

Raj, May 02, 2008 - 2:45 am UTC

How about query like this? 

SQL> SELECT yr,
  2   case when to_char(YR,'MM') = '02' then sum(amount) else 0 end Jan,
  3   case when to_char(YR,'MM') = '02' then sum(amount) else 0 end Feb,
  4   case when to_char(YR,'MM') = '03' then sum(amount) else 0 end Mar,
  5   case when to_char(YR,'MM') = '04' then sum(amount) else 0 end Apr,
  6   case when to_char(YR,'MM') = '05' then sum(amount) else 0 end May,
  7   case when to_char(YR,'MM') = '06' then sum(amount) else 0 end Jun,
  8   case when to_char(YR,'MM') = '07' then sum(amount) else 0 end Jul,
  9   case when to_char(YR,'MM') = '08' then sum(amount) else 0 end Aug,
 10   case when to_char(YR,'MM') = '09' then sum(amount) else 0 end Sep,
 11   case when to_char(YR,'MM') = '10' then sum(amount) else 0 end Oct,
 12   case when to_char(YR,'MM') = '11' then sum(amount) else 0 end Nov,
 13   case when to_char(YR,'MM') = '12' then sum(amount) else 0 end Dec
 14    FROM tab1
 15   group by yr
 16  /

YR               JAN        FEB        MAR        APR        MAY        JUN
--------- ---------- ---------- ---------- ---------- ---------- ----------
       JUL        AUG        SEP        OCT        NOV        DEC
---------- ---------- ---------- ---------- ---------- ----------
01-NOV-06          0          0          0          0          0          0
         0          0          0          0       5395          0

01-FEB-07       5395       5395          0          0          0          0
         0          0          0          0          0          0

01-DEC-06          0          0          0          0          0          0
         0          0          0          0          0       5395





to roopa

Raj, May 02, 2008 - 2:48 am UTC

small error in my above query for month of Jan. It should be modified like this

case when to_char(YR,'MM') = '01' then sum(amount) else 0 end Jan

from
case when to_char(YR,'MM') = '02' then sum(amount) else 0 end Jan

sorry for the copy paste error

Rows to column

Sebastien Frade, May 23, 2008 - 9:59 am UTC

Hi,

I've got the following table:
USER_ID LVL_ID VAL_ID
CSMCF Person CSMCF
CSMCF Function GMO
CSMCF Location THA
CSMCF Access Right RW
CSMCF Person CSMCF
CSMCF Function TRIAL
CSMCF Location THA
CSMCF Access Right R
MBSPJ Person MBSPJ
MBSPJ Function GMO
MBSPJ Location IND
MBSPJ Access Right RW

And I'd like to transform it as
Person Location GMO TRIAL
CSMCF THA RW R
MBSPJ IND RW (NULL)

May you help me?

Cheers Sebastien
Tom Kyte
May 23, 2008 - 4:32 pm UTC

the horrid EAV - entity-attribute-value - model.

And, you don't even know how to query it :(

you probably already know performance STINKS, now you know how impossible this junk is to query.

Developers will say "but we don't need to design, we can just add attributes on the fly"

I say "great, too bad no one can really use this junk, perfect - well done"

I hate this so much....
and I have no idea how you go from:
MBSPJ    Person         MBSPJ
MBSPJ    Function       GMO
MBSPJ    Location       IND
MBSPJ    Access Right   RW

to
Person  Location  GMO  TRIAL
------- --------  ----  -----
MBSPJ   IND       RW   (NULL)



by the way, makes no sense.

Multiple Rows to one Row

Nimesh, April 29, 2009 - 11:18 am UTC

I have a situation where I'm retrieving the data in a subquery, the result of which can be multiple rows and columns.

And I want the result for this sub-query to appear in a single row for the parent query.

E.g. The Result from Subquery
No. - Name
1 - Google
2 - Yahoo
3 - MSN

The transformation for the parent query
List of Names
1 - Google, 2 - Yahoo, 3 - MSN

I know it won't make much sense but the data from subquery can go from 0 to 50 records for each parent query, so to keep it simple and understandable, I want it this way.

Tom Kyte
April 29, 2009 - 11:28 am UTC

and I want a create table, insert statements, your CURRENT sql statement with the "subquery", etc to work with. Otherwise, I have no clue if anything I type will be even remotely useful to you.

*give an example*, *a full one*, like I always do.

you cannot always get what you want...


Multiple Rows to one Row

Nimesh, May 05, 2009 - 9:22 am UTC

Sorry for the mistake, but the current query is not a subquery. I thought the expected result could come from using a subquery.

It's just a join statement returning multiple rows based on the relationship, which is the expected behaviour.

Currently, I'm not in an environment where I can write the create/update queries so if there is any mistake in the below statements, kindly correct me. Also I'm not an expert at this.

Also these statements are just for the sample, for a join of 2 tables. My actual statement comprises join of 5 tables.

Create Table:-
------------------
CREATE TABLE SITE_CAT(
CATID NUMBER(1),
CATEGORY VARCHAR2(10));

CREATE TABLE FAV_SITE(
SITEID NUMBER(1),
CATID NUMBER(1),
SITE_NAME VARCHAR2(10));

Insert Statements:-
------------------------
INSERT INTO SITE_CAT VALUES(1,'SEARCH');
INSERT INTO SITE_CAT VALUES(2,'EMAIL');
INSERT INTO SITE_CAT VALUES(3,'TRAVEL');

INSERT INTO FAV_SITE VALUES(1,1,'GOOGLE');
INSERT INTO FAV_SITE VALUES(2,1,'YAHOO');
INSERT INTO FAV_SITE VALUES(3,1,'MSN');

INSERT INTO FAV_SITE VALUES(1,2,'GMAIL');
INSERT INTO FAV_SITE VALUES(2,2,'YAHOO');
INSERT INTO FAV_SITE VALUES(3,2,'HOTMAIL');

INSERT INTO FAV_SITE VALUES(1,3,'TRAVELGURU');
INSERT INTO FAV_SITE VALUES(2,3,'YATRA');
INSERT INTO FAV_SITE VALUES(3,3,'INCREDIBLEINDIA');

Current Query:-
------------------
SELECT a.category
, b.siteid + " - " b.site_name as "Sites List"
from site_cat a
, fav_site b
where a.catid = b.catid
order
by a.category
, b.site_name

Current Result:-
----------------
CATEGORY Sites List
SEARCH 1 - GOOGLE
SEARCH 2 - YAHOO
SEARCH 3 - MSN
EMAIL 1 - GMAIL
EMAIL 2 - YAHOO
EMAIL 3 - HOTMAIL
TRAVEL 1 - TRAVELGURU
TRAVEL 2 - YATRA
TRAVEL 3 - INCREDIBLEINDIA

Expected new result:-
----------------------
CATEGORY Sites List
SEARCH 1 - GOOGLE, 2 - YAHOO, 3 - MSN
EMAIL 1 - GMAIL, 2 - YAHOO, 3 - HOTMAIL
TRAVEL 1 - TRAVELGURU, 2 - YATRA, 3 - INCREDIBLEINDIA
Tom Kyte
May 11, 2009 - 8:49 am UTC

Ah, a sqlserver programmer - welcome (the + and improper use of " - " - double quotes to quote a string gives you away)...


ops$tkyte%ORA10GR2> select category, substr(max(sys_connect_by_path( siteid || ' - ' || site_name, ', ' )),2) site_list
  2    from (
  3  SELECT a.category , b.siteid,  b.site_name
  4    from site_cat a
  5       , fav_site b
  6   where a.catid = b.catid
  7   order
  8      by a.category
  9       , b.site_name
 10           )
 11  start with siteid = 1 connect by prior category = category and prior siteid+1 = siteid
 12  group by category
 13  /

CATEGORY   SITE_LIST
---------- --------------------------------------------------
EMAIL       1 - GMAIL, 2 - YAHOO, 3 - HOTMAIL
TRAVEL      1 - TRAVELGURU, 2 - YATRA, 3 - INCREDIBLE
SEARCH      1 - GOOGLE, 2 - YAHOO, 3 - MSN




and a suggestion - go to http://otn.oracle.com/xe and download Oracle XE. You never, ever should say "I'm not in an environment where I can write the create/update queries" - that would be like a programmer saying "I'm not allowed to compile".

Get the free version, install it, test/play with it - learn....


Multiple Rows to one Row UPDATE

Nimesh Thakkar, June 05, 2009 - 5:28 am UTC

Hi,

Yes, I was a SQL Server programmer, but now I'm on Oracle Apps, so only select queries for me :)
I slipped up on the " and + because I use VB more than Oracle.

Well, thanks for the query, it worked as per the requirement.

But still few problems which I don't know if it can be solved
1] Siteid will not always start from 1, it can be any number.
2] the difference in siteid's may not always be of 1, it can be far apart.

Any more suggestions for these problems.

And for oraclexe, I'd used it long ago.
Will give a try once again.
Tom Kyte
June 08, 2009 - 11:54 am UTC

1) use row_number instead

ops$tkyte%ORA10GR2> select category, substr(max(sys_connect_by_path( siteid || ' - ' || site_name, ', ' )),2) site_list
  2    from (
  3  SELECT a.category , row_number() over (partition by a.category order by b.siteid) rn, b.siteid,  b.site_name
  4    from site_cat a
  5       , fav_site b
  6   where a.catid = b.catid
  7   order
  8      by a.category
  9       , b.site_name
 10           )
 11  start with rn = 1 connect by prior category = category and prior rn+1 = rn
 12  group by category
 13  /

CATEGORY
----------
SITE_LIST
-------------------------------------------------------------------------------
EMAIL
 1 - GMAIL, 2 - YAHOO, 3 - HOTMAIL

TRAVEL
 1 - TRAVELGURU, 2 - YATRA, 3 - INCREDIBLEINDIA

SEARCH
 10 - GOOGLE, 20 - YAHOO, 30 - MSN


That was awesome

Nimesh Thakkar, June 11, 2009 - 6:59 am UTC

Thanks for the last query.
That solved my problem.

It worked like a charm :)

Pivot row value as column header?

Efendi Kwok, December 22, 2010 - 2:12 am UTC

Hi Tom,

I have the following scenario:

create table table1
(ID varchar(1),
Desc varchar(3));

insert into table1 values('1','aaa');
insert into table1 values('2','bbb');
insert into table1 values('3','ccc');
insert into table1 values('4','ddd');
insert into table1 values('5','eee');

Table 1

ID Desc
-- ----
1 aaa
2 bbb
3 ccc
4 ddd
5 eee

create table table2
(ID varchar(1),
Key varchar(5),
Value varchar(1));

insert into table2 values('1', 'usr1', '1');
insert into table2 values('2', 'usr1', '1');
insert into table2 values('3', 'usr1', '1');
insert into table2 values('2', 'usr2', '1');
insert into table2 values('3', 'usr2', '0');
insert into table2 values('1', 'usr3', '1');
insert into table2 values('2', 'usr3', '1');
insert into table2 values('3', 'usr3', '0');
insert into table2 values('4', 'usr3', '1');
insert into table2 values('5', 'usr3', '0');

Table 2

ID Key Value
-- ---- -----
1 usr1 1
2 usr1 1
3 usr1 1
2 usr2 1
3 usr2 0
1 usr3 1
2 usr3 1
3 usr3 0
4 usr3 1
5 usr3 0

I would like to display all Table 1 left join with Table 2. So the results would be like this:

Key   aaa   bbb  ccc  ddd   eee
----  ----  ---  ---  ----  ----
usr1  1     1    1    NULL  NULL
usr2  NULL  1    0    NULL  NULL
usr3  1     1    0    1     0

The Desc value in Table 1 needs to become the column header of the result.

Rows in Table 1 is not fixed, I have similar tables with different number of rows.

Can you help to do a query?

Thanks in advance.
Tom Kyte
December 22, 2010 - 2:22 pm UTC

You HAVE to know in advance - how many columns you will be selecting and what their names are - no way to avoid that.


You will have to

a) run a query against table1 to get the distinct list of descriptions (IF YOU WOULD HAVE RUN YOUR EXAMPLE, you would know that desc cannot be a column name...).

b) use that information to build dynamically - another query that explicitly references those values.



format of that query in (b) is given above in the original question/answer - so the answer is

do (a)
write the code for (b)



RE: Pivot row value as column header?

Efendi Kwok, December 29, 2010 - 12:14 am UTC

Hi Tom,

Sorry for the late reply. Merry Christmas !

Thanks for the clarification. I guess the option to have dynamic number of cols can't be done in one single query.

>>(IF YOU WOULD HAVE RUN YOUR EXAMPLE, you would know that desc cannot be a column name...)

This is what I would like to achieve if there's a query that allows this.

Anyway, thanks again for the explanation. I appreciate this.


Reader, January 19, 2011 - 10:19 am UTC

create table tst
( confirm_num number
,start_dt date
,end_dt date
)

insert into tst values (573, to_date(20071201,'yyyymmdd'), to_date(20071207,'yyyymmdd'));

insert into tst values (543, to_date(20071215,'yyyymmdd'), to_date(20071219,'yyyymmdd'));

select confirm_num
,start_dt
,end_dt
,(end_dt - start_dt) num_days
from tst;

CONFIRM_NUM START_DT END_DT NUM_DAYS
573 12/1/2007 12/7/2007 6
543 12/15/2007 12/19/2007 4


how to get the result as shown below -

CONFIRM_NUM START_DT DAY_NUM
573 12/1/2007 1
573 12/2/2007 2
573 12/3/2007 3
573 12/4/2007 4
573 12/5/2007 5
573 12/6/2007 6
543 12/15/2007 1
543 12/16/2007 2
543 12/17/2007 3
543 12/18/2007 4
Tom Kyte
January 24, 2011 - 5:42 am UTC

ops$tkyte%ORA11GR2> select tst.*, start_dt+column_value-1, column_value
  2    from tst, table(
  3                cast(
  4                   multiset(select level from dual connect by level <= tst.end_dt-tst.start_dt)
  5                     as sys.odcinumberlist
  6                    )
  7                   )
  8  /

CONFIRM_NUM START_DT  END_DT    START_DT+ COLUMN_VALUE
----------- --------- --------- --------- ------------
        573 01-DEC-07 07-DEC-07 01-DEC-07            1
        573 01-DEC-07 07-DEC-07 02-DEC-07            2
        573 01-DEC-07 07-DEC-07 03-DEC-07            3
        573 01-DEC-07 07-DEC-07 04-DEC-07            4
        573 01-DEC-07 07-DEC-07 05-DEC-07            5
        573 01-DEC-07 07-DEC-07 06-DEC-07            6
        543 15-DEC-07 19-DEC-07 15-DEC-07            1
        543 15-DEC-07 19-DEC-07 16-DEC-07            2
        543 15-DEC-07 19-DEC-07 17-DEC-07            3
        543 15-DEC-07 19-DEC-07 18-DEC-07            4

10 rows selected.
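Tom's table(cast(multiset(...))) trick multiplies each row by a generated list of integers. Outside Oracle the same row-generation effect usually comes from a recursive CTE; here is a hedged SQLite/Python sketch of the same idea, with ISO date strings standing in for Oracle DATEs and the end date exclusive, as in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tst (confirm_num INT, start_dt TEXT, end_dt TEXT);
INSERT INTO tst VALUES (573, '2007-12-01', '2007-12-07'),
                       (543, '2007-12-15', '2007-12-19');
""")

# Generate integers 1..31 and join each confirmation to as many of
# them as there are days in its range (end date exclusive).
rows = conn.execute("""
WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 31)
SELECT t.confirm_num,
       date(t.start_dt, '+' || (n.i - 1) || ' days') AS day_dt,
       n.i AS day_num
  FROM tst t JOIN n ON n.i <= julianday(t.end_dt) - julianday(t.start_dt)
 ORDER BY t.confirm_num DESC, n.i
""").fetchall()
for row in rows:
    print(row)
```

This produces the same 10 rows as Tom's query: six day numbers for 573 and four for 543.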

finding

prabha, May 24, 2011 - 11:25 pm UTC

hi,

I need some information.

I have a table with a single column, a, containing values like this:

a
...........
uytr12
wesd
dreddd

Now I want to find only the purely alphabetic values (wesd, dreddd), not values containing digits (uytr12).

How can I do that? Please give me a query.


Tom Kyte
May 25, 2011 - 11:23 am UTC

ops$tkyte%ORA11GR2> create table t as
  2  select 'a' txt from dual union all
  3  select 'ufadf12' txt from dual union all
  4  select 'wesd' txt from dual union all
  5  select 'dredd' txt from dual union all
  6  select '123' txt from dual;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from t where replace( translate( lower(txt), 'abcdefghijklmnopqrstuvwxyz', 'a' ), 'a', '' ) is null;

TXT
-------
a
wesd
dredd

A reader, June 04, 2011 - 10:12 am UTC

hi tom,

create table t (Stu_id varchar2(20), sem_id varchar2(20));

insert into t values(a1,b2);
insert into t values(a3,b4);
insert into t values(a5,b6);
insert into t values(a7,b8);
insert into t values(a9,b10);
insert into t values(a11,b12);
insert into t values(a13,b14);
insert into t values(a15,b16);
commit;

i want the output to be as follow:

col1 col2 col3 col4 col5 col6 col7 col8
---- ---- ---- ---- ---- ---- ---- ----
a1 b2 a3 b4 a5 b6 a7 b8
a9 b10 a11 b12 a13 b14 a15 b16

and so on?
Tom Kyte
June 06, 2011 - 9:22 am UTC

ops$tkyte%ORA11GR2> select max(decode( col, 1, stu_id)) col1,
  2         max(decode( col, 1, sem_id)) col2,
  3         max(decode( col, 2, stu_id)) col3,
  4         max(decode( col, 2, sem_id)) col4,
  5         max(decode( col, 3, stu_id)) col5,
  6         max(decode( col, 3, sem_id)) col6,
  7         max(decode( col, 0, stu_id)) col7,
  8         max(decode( col, 0, sem_id)) col8
  9    from (
 10  select stu_id, sem_id, mod(rn,4) col, trunc((rn-0.1)/4) grp
 11    from (
 12  select stu_id, sem_id, row_number() over (order by to_number( substr( stu_id, 2 ) )) rn
 13    from t
 14         )
 15         )
 16   group by grp
 17   order by grp
 18  /

COL1 COL2 COL3 COL4 COL5 COL6 COL7 COL8
---- ---- ---- ---- ---- ---- ---- ----
a1   b2   a3   b4   a5   b6   a7   b8
a9   b10  a11  b12  a13  b14  a15  b16



had to make lots of assumptions since you didn't give us much information to go on.

The biggest one was that stu_id will always be a single character followed by a number - so we can sort (since we all KNOW that rows don't have any order - we need an order by to get order - and you seemed to want the data organized by a numeric sort on the last part of stu_id)


Next time, please do take the time to:

a) make sure your code actually is valid code. Why don't you try to run your example? You'll find it doesn't actually work.

b) explain what you are looking for in a bit more detail, I had to guess.

Sushil, June 15, 2011 - 8:06 am UTC

Hi Tom,

I need to convert rows to columns but cannot use decode for this, as my query has a GROUP BY.

My query is
select activity_name, processing_stage,Status, count(*) from soa_message_tracking_hdr
where business_process_id=58 and trunc(creation_date)='05-MAY-11'
group by activity_name, processing_stage,status
order by activity_name, processing_stage,status


And the results are

Activity Name       Processing Stage  Status  Count
Compliance Service  SC                End     114
Compliance Service  SC                Start   114
EmployeeCheckBpel   SC                End     114
EmployeeCheckBpel   SC                Error   3
EmployeeCheckBpel   SC                Start   117

Now i want something like

Activity Name       Processing Stage  Start Count  Error Count  End Count
Compliance Service  SC                114          0            114
EmployeeCheckBpel   SC                117          3            114


Can this be achieved?




Tom Kyte
June 17, 2011 - 12:41 pm UTC

yes, the presence of group by doesn't affect anything


select max(decode( ......
from (YOUR_QUERY_GOES_HERE)
group by whatever;


Just put your query as an inline view and pretend it is a table. Just group by again
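A concrete sketch of that advice, with hypothetical table and column names standing in for the poster's soa_message_tracking_hdr, run against SQLite via Python (the same MAX(CASE) pivot works unchanged in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trk (activity TEXT, stage TEXT, status TEXT)")
# Synthetic data: 2 Start/2 End for one service; 3 Start/1 Error/2 End
# for the other.
data = ([('Compliance Service', 'SC', 'Start')] * 2 +
        [('Compliance Service', 'SC', 'End')] * 2 +
        [('EmployeeCheckBpel', 'SC', 'Start')] * 3 +
        [('EmployeeCheckBpel', 'SC', 'Error')] * 1 +
        [('EmployeeCheckBpel', 'SC', 'End')] * 2)
conn.executemany("INSERT INTO trk VALUES (?, ?, ?)", data)

# The original grouped count query becomes an inline view;
# the outer query pivots its rows and groups again.
rows = conn.execute("""
SELECT activity, stage,
       MAX(CASE status WHEN 'Start' THEN cnt ELSE 0 END) AS start_count,
       MAX(CASE status WHEN 'Error' THEN cnt ELSE 0 END) AS error_count,
       MAX(CASE status WHEN 'End'   THEN cnt ELSE 0 END) AS end_count
  FROM (SELECT activity, stage, status, COUNT(*) AS cnt
          FROM trk
         GROUP BY activity, stage, status)
 GROUP BY activity, stage
 ORDER BY activity
""").fetchall()
print(rows)
```

One row per activity/stage comes out, with the per-status counts spread across columns.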


and please - do not do this, do NOT do this:

trunc(creation_date)='05-MAY-11'


do not compare a date to a string. do not obviate the possibility of a regular index by putting a function on a column when you do not need to. do NOT rely on default conversion rules.


do this instead:


creation_date >= to_date( '05-may-11', 'dd-mon-rr' ) and creation_date < (to_date( '05-may-11', 'dd-mon-rr')+1)



Converting Rows to Columns

FollowMe, June 24, 2011 - 7:33 am UTC

Hi Tom,

I need to convert rows to columns without using decode.

I have created a temporary table & data and applying the logic as below:
create table #temp(
id int,
xdate date
)
insert into #temp values(1,'20-jun-1995')
insert into #temp values(2,'01-mar-1997')
insert into #temp values(3,'19-aug-2001')
insert into #temp values(4,'11-feb-1997')
insert into #temp values(5,'27-mar-2003')
insert into #temp values(6,'05-jan-1999')

select distinct 'xdate' as id,
case when id = 1 then (select top 1 xdate from #temp order by xdate asc)end as '1',
case when id = 1 then (select top 1 xdate from(select top 2 xdate from #temp order by xdate asc) #temp
order by xdate desc) end as '2',
case when id = 1 then (select top 1 xdate from(select top 3 xdate from #temp order by xdate asc) #temp
order by xdate desc) end as '3',
case when id = 1 then (select top 1 xdate from(select top 4 xdate from #temp order by xdate asc) #temp
order by xdate desc) end as '4',
case when id = 1 then (select top 1 xdate from(select top 5 xdate from #temp order by xdate asc) #temp
order by xdate desc) end as '5',
case when id = 1 then (select top 1 xdate from(select top 6 xdate from #temp order by xdate asc) #temp
order by xdate desc) end as '6'
from #temp
Tom Kyte
June 24, 2011 - 9:37 am UTC

I cannot live with the limitations of pretending to be in sqlserver. sorry. when you are migrating to Oracle - you should MIGRATE, not try to accommodate.

If you are in 11g, you can use PIVOT.
You can use CASE instead of decode - without using a scalar (slow in this case) subquery.


ops$tkyte@ORA8I.WORLD> create or replace view event_view1
  2  as
  3  select t_events.event,     t_events.descript,  t_events.logdate,
  4         max( case when element = 'userid' then curval end ) userid,
  5         max( case when element = 'company' then curval end ) company,
  6         max( case when element = 'transaction' then curval end) transaction
  7    from t_events, t_event_data
  8   where t_events.event = t_event_data.event
  9   group by t_events.event,  t_events.descript,  t_events.logdate
 10  /


will work instead of decode.

Transforming row to columns

Follow me, June 27, 2011 - 4:40 am UTC

Hi Tom,

When i try to use max(case...) with my subquery to extract the required date for a column, it throws following error:

Msg 130, Level 15, State 1, Line 2
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.

Kindly suggest an alternative logic to use any aggregate function.
Tom Kyte
June 27, 2011 - 11:23 am UTC

kindly give us a simple example to work with. You must be doing more than just casing out a simple date. give us a TINY example to explore.

Sridhar, August 01, 2011 - 5:55 am UTC

Hi Tom ,

Based on the above posts I tried to create a view,
but it's not working in my case...
Here my case is rows that need to be transposed into columns.

In my table data is like
Emp_ID Seq_No Field
101 9 ABC
101 10 DEF
101 11 XYZ

My requirement is
Emp_ID Field1(Seq 9) Field2(Seq 10) Field3(Seq 11)
101 ABC DEF XYZ


pls help on this

Tom Kyte
August 01, 2011 - 12:00 pm UTC

no create
no inserts
no look


"but its not working in my case..."

is horribly vague. why don't you show us what you tried and what "not working" means to you.

Row to Column Traspose

Sridhar, August 02, 2011 - 5:51 am UTC

Sorry for the unclear question.

Here i tried to create a view based on the earlier posts.
Code is

CREATE OR REPLACE FORCE VIEW "SYSADM"."TEST_SPLIT_VW" ("EMP_ID","SPCH_SEQ","PAY_REG_CODE9", "PAY_REG_CODE10","REG_CODE11")
AS
SELECT BNS."EMP_ID",BNS."SPCH_SEQ",
(case when BNS.SPCH_SEQ=9 then BNS.PAY_REG_CODE end),
(case when BNS.SPCH_SEQ=10 then BNS.PAY_REG_CODE end),
(case when BNS.SPCH_SEQ=11 then BNS.PAY_REG_CODE end)
FROM BASE_TABLE BNS , CALENDER CAL
WHERE BNS.EFFDT = ( SELECT MAX (EFFDT) FROM BASE_TABLE BNS1 WHERE
BNS1.GEID_KEY = BNS.GEID_KEY AND BNS1.EFFDT <= CAL.CUTOFF_DT )
AND BNS.EFF_SEQ_NUM = ( SELECT MAX (EFF_SEQ_NUM) FROM BASE_TABLE BNS2
WHERE BNS2.CIC_GEID_KEY = BNS.GEID_KEY AND BNS2.EFFDT = BNS.EFFDT );


This is
In my table data is like
Emp_ID SPCH_SEQ PAY_REG_CODE
101 9 ABC
101 10 DEF
101 11 XYZ

My requirement is
Emp_ID Field1(Seq 9) Field2(Seq 10) Field3(Seq 11)
101 ABC DEF XYZ

I hope this will help.....
Tom Kyte
August 02, 2011 - 7:09 am UTC

read the previous followup - read it a bit more closely . Pay particular attention to the sentences starting with "no"...


In all of the examples above of doing a pivot - there are aggregates such as MAX or SUM and group by's. That is a key component of pivoting using this technique. Make sure to re-read the examples to see what is going on - make sure you understand the technique before you try to employ it.


Conversion from rows to columns

Payel, October 13, 2011 - 2:32 am UTC

hi,

My requirement is that basically I have to create a report. I have to convert the rows into columns from the table while displaying them in the report. The data in the table is as follows:

Item_No Date Short_Amt
I1 D1 S1
I1 D2 S2
I2 D3 S3
I2 D4 S4
I2 D5 S5
.
.
. Dn Sn


Now My output will be:

Item_No D1 D2 D3 D4 D5....Dn
I1 S1 S2
I2 S3 S4 S5....Sn

But the set of dates is not fixed - it keeps growing.

Can you please help???
Thanks in advance

Tom Kyte
October 13, 2011 - 7:35 am UTC

in order to run sql - you must know how many columns you will have, and what their names are. There is no avoiding that.

So, run a query to find out what the distinct dates are

then, using that information, build your pivot query.

A reader, October 13, 2011 - 9:11 am UTC

Hi,

Thanks...

But how can I get the dates as the headings and, below them, the respective short values?

Item_No D1 D2 D3 D4 D5
I1 S1 S2
I2 S3 S4 S5


Tom Kyte
October 13, 2011 - 10:51 am UTC

by reading the original answer which shows you how to pivot.

if you have 11g, you can use the new sql pivot syntax as well.


you'll read the table to get a distinct list of dates.

you'll build a query similar to:


select item_no,
max( decode( dt, <first_date_val>, s1 )) "<first_date_val>",
max( decode( dt, <second_date_val>, s2 )) "<second_date_val>",
...
from table
group by item_no

Payel, October 14, 2011 - 4:14 am UTC

Hi,

Actually, I will have many date values so I can't hard-code the dates.

Can you please tell me how I can do that as a matrix report, or suggest any solution other than DECODE?

Thanks.........


Tom Kyte
October 14, 2011 - 11:20 pm UTC

I told you

a) run a query, get a list of distinct dates
b) BUILD A QUERY based on A
c) execute the query you just built.


why is that not making sense? I did write:


you'll read the table to get a distinct list of dates.

you'll build a query similar to:


select item_no,
max( decode( dt, <first_date_val>, s1 )) "<first_date_val>",
max( decode( dt, <second_date_val>, s2 )) "<second_date_val>",
...
from table
group by item_no



which I was hoping would clarify

So, run a query to find out what the distinct dates are

then, using that information, build your pivot query.



but this is the third time I'm saying it ... I don't know how else to say it???


run a query to get the distinct dates
build a query based on that
run that new query you just built.
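Those three steps can be sketched end to end. The following is an illustration only, in Python with SQLite standing in for Oracle - in Oracle you would build the same string and open it via EXECUTE IMMEDIATE or a ref cursor. The data follows Payel's example; everything else is made up for the sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (item_no text, dt text, short_amt text)")
con.executemany("insert into t values (?,?,?)",
                [("I1", "D1", "S1"), ("I1", "D2", "S2"),
                 ("I2", "D3", "S3"), ("I2", "D4", "S4"), ("I2", "D5", "S5")])

# a) run a query to get the distinct dates
dates = [r[0] for r in con.execute("select distinct dt from t order by dt")]

# b) build the pivot query from that list - one MAX(CASE) column per date
cols = ",\n       ".join(
    f"max(case when dt = '{d}' then short_amt end) as \"{d}\"" for d in dates)
sql = f"select item_no,\n       {cols}\n  from t group by item_no order by item_no"

# c) run the query you just built
rows = con.execute(sql).fetchall()
print(rows)
# [('I1', 'S1', 'S2', None, None, None), ('I2', None, None, 'S3', 'S4', 'S5')]
```

When new dates arrive, nothing changes except the generated column list - which is exactly why the query has to be rebuilt, not merely re-run.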

dynamic transpose columns to rows

reham, October 20, 2011 - 3:17 am UTC

hi Tom,
I have a table like this:
description vodafone mobinil land line
1-2010 3 3 2
2-2006 9 12 12
5-2007 4 8 9
I need to make it look like:
description 1-2010 2-2006 5-2007
vodafone 3 9 4
mobinil 3 12 8
land line 2 12 9
I need to do it dynamically, so it keeps working in the future as dates are added.
Thanks a lot

Tom Kyte
October 20, 2011 - 3:40 am UTC

just a comment: I hate your data model. What happens when vodafone splits in two, or vodafone buys mobinil? Those should never have been columns - never.

well, to turn the columns into rows, you can use the unpivot command in 11g, or this technique in any release:


with data as (select level r from dual connect by level <= 3)
select descript,
decode( r, 1, 'vodafone', 2, 'mobinil', 3, 'land line' ) what,
decode( r, 1, vodafone, 2, mobinil, 3, land_line ) value
from t, data;

that'll give you
descrip   what         value
1-2010    vodafone     3
1-2010    mobinil      3
1-2010    land line    2
....

and so on


then, this page shows you how to pivot that result to put the descript values as columns. Or you can read about the 11g PIVOT syntax that can do the same.

to read about how to do that "dynamically" - read carefully the above last three comments - it explains what to do regarding that.
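The CONNECT BY row-generator unpivot above works in any SQL dialect if you join against a small literal table of row numbers instead. A sketch in Python with SQLite standing in for Oracle; the column name land_line is an assumption, since the post shows it as "land line":

```python
import sqlite3

# reham's sample data; three rows, one per period.
con = sqlite3.connect(":memory:")
con.execute("create table t (descript text, vodafone int, mobinil int, land_line int)")
con.executemany("insert into t values (?,?,?,?)",
                [("1-2010", 3, 3, 2), ("2-2006", 9, 12, 12), ("5-2007", 4, 8, 9)])

# Cartesian-join each source row against {1,2,3}; each copy of the row
# then picks out one column, turning 3 wide rows into 9 narrow ones.
rows = con.execute("""
    select descript,
           case r when 1 then 'vodafone' when 2 then 'mobinil' else 'land line' end as what,
           case r when 1 then vodafone when 2 then mobinil else land_line end as value
      from t, (select 1 as r union all select 2 union all select 3) nums
     order by descript, r
""").fetchall()
print(rows[:3])
# [('1-2010', 'vodafone', 3), ('1-2010', 'mobinil', 3), ('1-2010', 'land line', 2)]
```

From that narrow result, the pivot techniques earlier on this page (or 11g PIVOT) put the descript values back as columns.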


On dynamically transposing rows and columns: 11GR2 only

Stew Ashton, October 21, 2011 - 5:36 am UTC


I worked this out for practice and then couldn't resist posting it, because I found out by accident that UNPIVOT and PIVOT can be put in the same SELECT. 11GR2 makes this so easy!
> drop table T
/
table T dropped.

> create table T(DESCRIPTION, VODAFONE, MOBINIL, LAND_LINE) as
select '1-2010', 3, 3, 2 from DUAL union all
select '2-2006', 9, 12, 12 from DUAL union all
select '5-2007', 4, 8, 9 from DUAL
/
table T created.

-- QUERY BY HAND: UNPIVOT and PIVOT in the same SELECT!
> select * from T
UNPIVOT(VAL for operator in(VODAFONE, MOBINIL, LAND_LINE))
PIVOT(max(VAL) for DESCRIPTION in
('1-2010' as "1-2010",'2-2006' as "2-2006",'1-2010' as "5-2007" ))
/
OPERATOR  1-2010                 2-2006                 5-2007
--------- ---------------------- ---------------------- ----------
VODAFONE  3                      9                      3
LAND_LINE 2                      12                     2
MOBINIL   3                      12                     3

-- The only dynamic part is the list of pivot columns
> select
LISTAGG('''' || DESCRIPTION || ''' as "' || DESCRIPTION || '"', ',')
within group(order by DESCRIPTION) collist
from (select distinct description from t)
/
COLLIST
--------------------------------------------------------------
'1-2010' as "1-2010",'2-2006' as "2-2006",'5-2007' as "5-2007"

-- Query to generate the desired query with dynamic pivot columns
> select
'select * from T
UNPIVOT(VAL for operator in(VODAFONE, MOBINIL, LAND_LINE))
PIVOT(max(VAL) for DESCRIPTION in
(' ||
LISTAGG('''' || DESCRIPTION || ''' as "' || DESCRIPTION || '"', ',')
within group(order by DESCRIPTION)
||'))' sql_text from (select distinct DESCRIPTION from T)
/
SQL_TEXT
-----------------------------------------------------------------
select * from T
UNPIVOT(VAL for operator in(VODAFONE, MOBINIL, LAND_LINE))
PIVOT(max(VAL) for DESCRIPTION in
('1-2010' as "1-2010",'2-2006' as "2-2006",'5-2007' as "5-2007"))

-- Run the generated query
> select * from T
UNPIVOT(VAL for operator in(VODAFONE, MOBINIL, LAND_LINE))
PIVOT(max(VAL) for DESCRIPTION in
('1-2010' as "1-2010",'2-2006' as "2-2006",'5-2007' as "5-2007"))
/
OPERATOR  1-2010                 2-2006                 5-2007
--------- ---------------------- ---------------------- ---------
VODAFONE  3                      9                      4
LAND_LINE 2                      12                     9
MOBINIL   3                      12                     8

Rows to columns

vj, November 10, 2011 - 12:38 pm UTC

select * from T
UNPIVOT(VAL for operator in(VODAFONE, MOBINIL, LAND_LINE))
PIVOT(max(VAL) for DESCRIPTION in
('1-2010' as "1-2010",'2-2006' as "2-2006",'1-2010' as "5-2007" ))

From your forums I found this example useful. If I have an unknown number of dates, I cannot hard-code them. What is the best way to modify the above query?
Tom Kyte
November 10, 2011 - 1:52 pm UTC

you have to run two queries.

One to get the list of dates to pivot.

the other a dynamically generated query that pivots based on those dates.

@vj: are you kidding me?

Stew Ashton, November 10, 2011 - 3:38 pm UTC


OK, you just copied my manual query from about one page up. You even left in the error I made on the third date :)

Now you ask how to change the query if the dates change. Don't you realize that is exactly what I showed one page up?
- the first manual query was just to show what the final query should look like.
- the second query gets the list of dates to pivot.
- the third query builds on the previous two: it dynamically generates the full query with the right list of dates.
- then I just run that dynamically generated query.

Exactly what Tom said, shown step by step. What do you think is missing?

Suresh, December 29, 2011 - 5:27 am UTC

I have a table like

subscr_no balance_id balance_type
681 101 1
681 102 2

There could be multiple balances(N) for one subscr_no

Now I want the output like

subscr_no balance_id1 balance_type1 balance_id2 blance_type2
681 101 1 102 2


Is this possible with the help of SQL? Your quick response would help me very much.

Tom Kyte
December 29, 2011 - 11:27 am UTC

no create
no inserts
no look


also, you don't have sufficient information here: what is the maximum number of balances a given subscriber could have?

your example is insufficient as well - it should definitely have multiple subscribers to test that anything provided would work in that real world situation. It should have a subscriber with a minimum number of balances, a maximum number, and somewhere in the middle - to make sure they all work.


column into rows

A reader, January 05, 2012 - 10:18 am UTC

Is there a way to turn the output of the following SQL from 3 values separated by a comma into 3 separate rows without using a function?

SELECT value
FROM v$parameter
WHERE name = 'control_files';

VALUE
-----------------------------------------------------------------------------------------------------------------------------------
C:\APP\ADMINISTRATOR\ORADATA\PROTECT\CONTROL01.CTL, C:\APP\ADMINISTRATOR\ORADATA\PROTECT\CONTROL02.CTL, C:\APP\ADMINISTRATOR\ORADAT
A\PROTECT\CONTROL03.CTL

Desired output:

C:\APP\ADMINISTRATOR\ORADATA\PROTECT\CONTROL01.CTL
C:\APP\ADMINISTRATOR\ORADATA\PROTECT\CONTROL02.CTL
C:\APP\ADMINISTRATOR\ORADATA\PROTECT\CONTROL03.CTL


Tom Kyte
January 05, 2012 - 12:46 pm UTC

ops$tkyte%ORA11GR2> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /home/ora11gr2/app/ora11gr2/or
                                                 adata/ora11gr2/control01.ctl,
                                                 /home/ora11gr2/app/ora11gr2/fa
                                                 st_recovery_area/ora11gr2/cont
                                                 rol02.ctl
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select rownum, column_value
  2    from (select ','||value||',' txt
  3            from v$parameter
  4           where name = 'control_files'),
  5         table( cast( multiset
  6        (select trim( substr (txt,
  7                              instr (txt, ',', 1, level  ) + 1,
  8                              instr (txt, ',', 1, level+1)
  9                                 - instr (txt, ',', 1, level) -1 ) )
 10           from dual
 11        connect by level <= length(txt)-length(replace(txt,',',''))-1
 12        ) as sys.odciVarchar2List ) )
 13  /

    ROWNUM COLUMN_VALUE
---------- ----------------------------------------------------------------------------------------------------
         1 /home/ora11gr2/app/ora11gr2/oradata/ora11gr2/control01.ctl
         2 /home/ora11gr2/app/ora11gr2/fast_recovery_area/ora11gr2/control02.ctl




see
http://asktom.oracle.com/Misc/varying-in-lists.html
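The same split can be written with a recursive CTE in databases that lack CONNECT BY (and in Oracle 11gR2+ via recursive WITH). A sketch in Python with SQLite; the control-file paths here are made-up sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
txt = "/u01/control01.ctl, /u02/control02.ctl, /u03/control03.ctl"

# Append a trailing comma (mirroring Tom's ','||value||',' wrapping),
# then peel off one comma-delimited piece per recursion step.
rows = con.execute("""
    with recursive split(piece, rest) as (
      select '', :txt || ','
      union all
      select trim(substr(rest, 1, instr(rest, ',') - 1)),
             substr(rest, instr(rest, ',') + 1)
        from split
       where rest <> ''
    )
    select piece from split where piece <> ''
""", {"txt": txt}).fetchall()
print([r[0] for r in rows])
# ['/u01/control01.ctl', '/u02/control02.ctl', '/u03/control03.ctl']
```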

Pivoting a query

A reader, February 21, 2012 - 9:20 pm UTC

Hi Tom,

I have a query where I have to turn rows into columns:
CREATE TABLE TABLE_1(
column_1 VARCHAR2(100),column_2 VARCHAR2(100),column_3 VARCHAR2(100),
column_4 VARCHAR2(100),column_5 VARCHAR2(100),column_6 VARCHAR2(100),
column_7 VARCHAR2(100),column_8 DATE,column_9 DATE
);

Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','00','I','07','col_val61',null,to_timestamp('01-JAN-00','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));
Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','00','I','09','CHE',null,to_timestamp('01-JAN-00','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));

Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','10','I','07','col_val61',null,to_timestamp('01-JAN-01','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));
Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','10','P','06','col_val64',null,to_timestamp('01-JAN-00','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));
Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','10','I','999','col_val62',null,to_timestamp('01-JAN-11','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));

Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','20','E','71','11',null,to_timestamp('01-JAN-12','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));
Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','20','E','06','CT1',null,to_timestamp('01-JAN-01','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-20','DD-MON-RR HH.MI.SSXFF AM'));
Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','20','I','07','col_val61',null,to_timestamp('01-JAN-00','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));
Insert into table_1 (column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values ('VAL_1','VAL_2','20','I','999','col_val62',null,to_timestamp('01-JAN-00','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-DEC-99','DD-MON-RR HH.MI.SSXFF AM'));

Now my desired output looks like:

r_column_1 r_column_2 r_column_3 r_column_4 r_column_5 r_column_6 r_column_7 r_column_8 r_column_9 r_column_10 r_column_11 r_column_12 r_column_13 r_column_14 r_column_15 r_column_16 r_column_17 r_column_18 r_column_19 r_column_20 r_column_21 r_column_22 r_column_23 r_column_24 r_column_25 r_column_26 r_column_27
VAL_1 VAL_2 00 I 07 col_val61 NULL 01-JAN-00 31-DEC-99 I 09 CHE NULL 01-JAN-00 31-DEC-99
VAL_1 VAL_2 10 I 07 col_val61 NULL 01-JAN-01 31-DEC-99 P 06 col_val64 NULL 01-JAN-00 31-DEC-99 I 999 col_val62 NULL 01-JAN-11 31-DEC-99
VAL_1 VAL_2 20 E 71 11 NULL 01-JAN-12 31-DEC-99 E 06 CT1 NULL 01-JAN-01 31-DEC-20 I 07 col_val61 NULL 01-JAN-11 31-DEC-99 I 999 col_val62 NULL 01-JAN-00 31-DEC-99

The logic here is:

I have to do the pivot on the basis of column_3 value.

1. Column_1, column_2 remain static
2. Rows are made on the basis of column_3, as we had three different column_3 values here.
3. We can have up to 90 column_6 values, which in turn grows my columns. As you can see, for column_3 value = 00 I have two rows so my flattening goes till r_column_15, but for column_3 value = 10 I have values till r_column_21.



Please advise. I apologize for the readability.


Tom Kyte
February 23, 2012 - 12:27 am UTC

just keep repeating this pattern:

ops$tkyte%ORA11GR2> select max(column_1), max(column_2), column_3,
  2         max( decode( rn, 1, column_4 ) ) c1_4,
  3         max( decode( rn, 1, column_5 ) ) c1_5,
  4         max( decode( rn, 1, column_6 ) ) c1_6,
  5         max( decode( rn, 1, column_7 ) ) c1_7,
  6         max( decode( rn, 1, column_8 ) ) c1_8,
  7         max( decode( rn, 1, column_9 ) ) c1_9,
  8         max( decode( rn, 2, column_4 ) ) c2_4,
  9         max( decode( rn, 2, column_5 ) ) c2_5,
 10         max( decode( rn, 2, column_6 ) ) c2_6,
 11         max( decode( rn, 2, column_7 ) ) c2_7,
 12         max( decode( rn, 2, column_8 ) ) c2_8,
 13         max( decode( rn, 2, column_9 ) ) c2_9,
 14         max( decode( rn, 3, column_4 ) ) c3_4,
 15         max( decode( rn, 3, column_5 ) ) c3_5,
 16         max( decode( rn, 3, column_6 ) ) c3_6,
 17         max( decode( rn, 3, column_7 ) ) c3_7,
 18         max( decode( rn, 3, column_8 ) ) c3_8,
 19         max( decode( rn, 3, column_9 ) ) c3_9
 20    from (
 21  select t.*, row_number() over (partition by column_3 order by rowid) rn
 22    from t
 23         )
 24   group by column_3
 25   order by column_3
 26  /

MAX(COLUMN MAX(COLUMN COLUMN_3   C1_4       C1_5       C1_6       C1_7       C1_8      C1_9      C2_4       C2_5       C2_6       C2_7       C2_8      C2_9      C3_4       C3_5       C3_6       C3_7       C3_8      C3_9
---------- ---------- ---------- ---------- ---------- ---------- ---------- --------- --------- ---------- ---------- ---------- ---------- --------- --------- ---------- ---------- ---------- ---------- --------- ---------
VAL_1      VAL_2      00         I          07         col_val61  *null*     01-JAN-00 31-DEC-99 I          09         CHE        *null*     01-JAN-00 31-DEC-99 *null*     *null*     *null*     *null*     *null*    *null*
VAL_1      VAL_2      10         I          07         col_val61  *null*     01-JAN-01 31-DEC-99 P          06         col_val64  *null*     01-JAN-00 31-DEC-99 I          999        col_val62  *null*     01-JAN-11 31-DEC-99
VAL_1      VAL_2      20         E          71         11         *null*     01-JAN-12 31-DEC-99 E          06         CT1        *null*     01-JAN-01 31-DEC-20 I          07         col_val61  *null*     01-JAN-00 31-DEC-99



it is really not any different than the original answer! Just a small variation on a theme.
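The ROW_NUMBER-then-pivot pattern above can be sketched outside Oracle as well. In this Python/SQLite illustration the row numbering (what ROW_NUMBER() OVER (PARTITION BY column_3 ORDER BY rowid) does above) is precomputed in Python rather than in SQL, to avoid depending on window-function support; columns are abbreviated to c3/c4/c5 and only a subset of the data is used:

```python
import sqlite3
from itertools import groupby

con = sqlite3.connect(":memory:")
con.execute("create table t (c3 text, c4 text, c5 text, rn int)")

# Assign rn = 1, 2, 3 ... within each c3 group (the ROW_NUMBER step).
data = [("00", "I", "07"), ("00", "I", "09"),
        ("10", "I", "07"), ("10", "P", "06"), ("10", "I", "999")]
for c3, grp in groupby(sorted(data), key=lambda r: r[0]):
    for rn, (_, c4, c5) in enumerate(grp, start=1):
        con.execute("insert into t values (?,?,?,?)", (c3, c4, c5, rn))

# Then pivot on rn: one set of MAX(CASE) columns per row number.
rows = con.execute("""
    select c3,
           max(case rn when 1 then c4 end), max(case rn when 1 then c5 end),
           max(case rn when 2 then c4 end), max(case rn when 2 then c5 end),
           max(case rn when 3 then c4 end), max(case rn when 3 then c5 end)
      from t group by c3 order by c3
""").fetchall()
print(rows)
```

Groups with fewer rows than the maximum simply leave their trailing columns NULL, exactly as in Tom's output.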

Followup

A reader, February 22, 2012 - 3:33 am UTC

Hi Tom,


In case we cannot make it dynamic, I would appreciate it if you could suggest a solution with hardcoded column names.
Tom Kyte
February 23, 2012 - 12:46 am UTC

huh? I don't know what you mean at all...

Followup

A reader, February 23, 2012 - 1:25 am UTC

Thanks Tom,

It is working. Also, just wanted to know if we can name the columns based on the COLUMN6 value..

I have also taken help from one of your packages which deals with pivot dynamically:

SQL> variable x refcursor
SQL> set autoprint on
SQL> 
SQL> 
SQL> begin
  2  my_pkg.pivot
  3  ( p_max_cols_query => 'select max(count(*)) from t
  4  group by column_1, column_2, column_3',
  5  p_query => 'select column_1, column_2, column_3, column_5, column_6,
  6  row_number() over ( partition by column_1, column_2, column_3
  7  order by column_1, column_2, column_3 ) rn
  8  from t a',
  9  p_anchor => my_pkg.array('COLUMN_1', 'COLUMN_2', 'COLUMN_3'),
 10  p_pivot => my_pkg.array('COLUMN_6', 'COLUMN_5' ),
 11  p_cursor => :x );
 12  
 13  end;
 14  /

PL/SQL procedure successfully completed.


COLUMN_1   COLUMN_2   COLUMN_3   COLUMN_6_1 COLUMN_5_1 COLUMN_6_2 COLUMN_5_2 COLUMN_6_3 COLUMN_5_3  COLUMN_6_4 COLUMN_5_4         
----------------------------------------------------------------------------------------------------------------------------
VAL_1      VAL_2      00         121        07         101        07         37         09     

VAL_1      VAL_2      10         61         07         64         06         101        999   

VAL_1      VAL_2      20         37         71         101        06         61         07           62            999   




The problem I face here is that it gives me repetitions of COLUMN5 and COLUMN6 numbered according to the count of rows.

As you can see, I had 4 different values when grouped by column_1, column_2 and column_3. The query broke column_5 and column_6 out correctly as column_5_1, column_6_1 and so on...

Is it possible to have the rows look like:

COLUMN_1 COLUMN_2 COLUMN_3 COL6_121 COL_5_121 COL_6_101 COL_5_101 COL_6_37 COL_5_37 COL_6_62 COL_5_62 COL_6_61 COL_5_61 COL_6_64 COL_5_64
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
VAL_1 VAL_2 00 121 07 101 07 37 09
VAL_1 VAL_2 10 101 999 61 07 64 06
VAL_1 VAL_2 20 101 06 37 71 62 999 61 07


If this can be achieved then we can have a mapping of all column_6 values and the application can easily identify the columns.
I am trying to do it through PL/SQL but am not able to put it in an elegant way.

Could you please help one more time? In case this is not possible, I am still happy with the earlier suggested method.

Thanks for your kind help.
Tom Kyte
February 23, 2012 - 7:48 pm UTC

... It is working. Also, just wanted to know if we can name the columns based on the COLUMN6 value..
...

no, the column names have to be known BEFORE the query is parsed/compiled. You would have had to query column6 first, and then use that result when you built the query.



I did not follow your last bit - explain in more detail please. When things word wrap, I *cannot* clearly see anything - it is unreadable. Perhaps use fewer columns OR use 2-character columns with column names c1, c2, c3, ... so they FIT on the screen.

Forgot to put Code tag for DATA

A reader, February 23, 2012 - 1:29 am UTC



COLUMN_1 COLUMN_2 COLUMN_3 COL6_121 COL_5_121 COL_6_101 COL_5_101 COL_6_37 COL_5_37 COL_6_62 COL_5_62 COL_6_61 COL_5_61 COL_6_64 COL_5_64
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
VAL_1 VAL_2 00 121 07 101 07 37 09
VAL_1 VAL_2 10 101 999 61 07 64 06
VAL_1 VAL_2 20 101 06 37 71 62 999 61 07


Followup

A reader, February 23, 2012 - 6:15 pm UTC

Hi Tom,

I have been able to get the results in terms of values through dynamic PL/SQL.


declare
v_final VARCHAR2(32000);
v_string VARCHAR2(10000);
v_sqls VARCHAR2(10000);


begin
v_sqls := 'SELECT column1, column2, column3,';
for ctr_editnum in (select distinct column6 from t)
loop
v_string := 'max( decode( column6, ' || ctr_editnum.column6 || ', column4 ) ) c_4_' || ctr_editnum.column6 || ',
max( decode( column6, ' || ctr_editnum.column6 || ', column5 ) ) c_5_' || ctr_editnum.column6 || ',
max( decode( column6, ' || ctr_editnum.column6 || ', column6 ) ) c_6_' || ctr_editnum.column6 || ',
max( decode( column6, ' || ctr_editnum.column6 || ', column7 ) ) c_7_' || ctr_editnum.column6 || ',
max( decode( column6, ' || ctr_editnum.column6 || ', column8 ) ) c_8_' || ctr_editnum.column6 || ',
max( decode( column6, ' || ctr_editnum.column6 || ', column9 ) ) c_9_' || ctr_editnum.column6 || ',
';
dbms_output.put_line(v_string);
v_final := v_final || v_string;
end loop;
v_final := trim(v_final);
v_sqls := v_sqls || substr(v_final,1,length(v_final)-2) || ' from t
group by column1, column2, column3';
dbms_output.put_line(v_sqls);
end;

My only concern is when I have more than one column_6 value for the same column1, column2, column3 combination. I am getting only one row, since we are taking the MAX.

Can you please suggest how we can achieve this?


Insert into table_1
(column_1,column_2,column_3,column_4,column_5,column_6,column_7,column_8,column_9) values
('VAL_1','VAL_2','00','I','07','dup_values',null,to_timestamp('01-JAN-01','DD-MON-RR HH.MI.SSXFF AM'),to_timestamp('31-OCT-99','DD-MON-RR HH.MI.SSXFF AM'));

I want both the values of COLUMN_6 in pivot table for the given column1, column2, column3 value
if it differs on COLUMN_5, COLUMN_7, COLUMN_8 or COLUMN_9

Thanks for the help.
Tom Kyte
February 23, 2012 - 8:14 pm UTC

what do you want to do when there is more than one column_6 value?

do you perhaps need to group on column_3 and column_6?


when you say "i want both values in the pivot table", that is not specific enough. HOW do you want them in there.

did you want to stragg/listagg them?
did you want a separate row (just group by column_6 instead of aggregating it)

Followup

A reader, February 23, 2012 - 9:43 pm UTC


For this issue, I would have wanted to have the group on COLUMN3 as it is happening now. For the repeating columns, COLUMN5 to COLUMN9 I would have preferred to do a concatenation so that I can break them when I use them.

e.g if my data was having multiple values of column6, I would listagg them with comma..

So if I break the comma all the first element of the string of different columns specify the corresponding value.

For data like this

c1   c2   c3   c4   c5   c6   c7   c8          c9
F1   S1   00   I    E    37        20-JAN-00   31-JAN-99
F1   S1   00   I    E    37   PP   20-DEC-01   31-JAN-99
F1   S1   00   I    E    05        20-JAN-00   31-JAN-99



c1   c2   c3   c4   c5   c6   c7   c8                    c9                    C10    C11     C12    C13     C14         C15
F1   S1   00   I    E    37   ,PP  20-JAN-00,20-DEC-01   31-JAN-99,31-JAN-99   I      E       05             20-JAN-00   31-JAN-99


As you can see, since my first row had C7 blank and the second had the value PP for column6=37, I concatenate them as ',PP' - which means that when column6=37 and C8 = 20-JAN-00, my C7 was NULL.

Please suggest.

Apologies for the bad TAG habit. As you can see, I had to concatenate two dates, which takes some space here; next time I will take care. Thanks again.
Tom Kyte
February 25, 2012 - 5:01 am UTC

use x/y/z or something for long fields (dates), make it fit. I cannot read this. Since C1 and C2 are constant, remove them from the situation, they do not count - they are not necessary.


are you sure you understand the technique I'm using - if you do understand it, and your question can be answered, you already have the answer - you can do it! Have you really really tried yourself?


you said you would listagg it - why doesn't that work for you?

why does it matter that c7 was null?

can't you ltrim/rtrim the commas if you don't want them?

not sure what the issue is really...

thanks

gaju, February 26, 2012 - 11:27 pm UTC

decode can do this - thanks

A reader, October 15, 2012 - 1:14 am UTC

Hi Tom,

Appreciate if you can help me to solve on this.

My table is like this

ACA_AC_NO ACA_RELATION_TXT ACA_PRIMARY_AC_F ACA_RELATION_CODE CUST_LAST_NAME
------------ ------ ----------- ---- --------------------
xxxxxxxxxxxx OWNER Y JFE Mr A
xxxxxxxxxxxx SON N JOE B
xxxxxxxxxxxx SON N JOE C
xxxxxxxxxxxx SPOUSE N JOE D

How can I display it like this?

Account Account 1 Relation 1 Account 2 Relation 2 Account 3 Relation 3
------- --------- ---------- --------- ---------- --------- ----------
xxxxxxxx Mr A OWNER B SON C SON
Tom Kyte
October 15, 2012 - 10:31 am UTC

no create
no inserts
no explanation (pictures are worth a thousand words, but a report output saying "make this look like that" says *nothing*)

no look

Dynamic Column Name

A reader, October 17, 2012 - 5:28 am UTC

Hi Tom,
We have data in the temp table as below:
Key column name column value
1 C1 V1
1 C2 V2
1 C3 V3
2 C1 V11
2 C2 V12
2 C3 V13
We want to present data (table/view) as below:
Key C1 C2 C3
1 V1 V2 V3
2 V11 V12 V13
Is it possible to achieve this through query/oracle built-in function?
Regards,

Tom Kyte
October 17, 2012 - 3:13 pm UTC

no create
no inserts
no look

Transforming rows into columns

Ajay, July 30, 2013 - 5:21 am UTC

Hello Tom,

I tried converting the rows into columns using your previous answers. I got close, but I cannot attain my desired output.
I need my output to be:

emp1
jenny
rita
jessy
rahul
vivek
bala

from the table :

col1 col2 col3
jenny rita jessy
rahul vivek bala

Thanks in advance.

I would be grateful if you could help me in solving this.
Tom Kyte
August 02, 2013 - 5:49 pm UTC

ops$tkyte%ORA11GR2> with data
  2  as
  3  (
  4  select *
  5   from (select 'jenny' c1, 'rita' c2, 'jessy' c3 from dual union all
  6         select 'rahul' c1, 'vivek' c2, 'bala' c3 from dual)
  7  )
  8  select decode( r, 1, c1, 2, c2, 3, c3 )
  9    from data, (select 1 r from dual union all select 2 from dual union all select 3 from dual)
 10  /

DECOD
-----
jenny
rita
jessy
rahul
vivek
bala

6 rows selected.

Or UNPIVOT the columns (11.2 and onward)

Hoek, August 05, 2013 - 12:19 pm UTC

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

5 rows selected.

SQL> -- generating sample data:
SQL> with data as (
  2  select 'jenny' c1, 'rita' c2, 'jessy' c3 from dual union all
  3  select 'rahul' c1, 'vivek' c2, 'bala' c3 from dual
  4  )
  5  --
  6  -- actual query:
  7  --
  8  select emp1
  9  from   data
 10  unpivot (emp1 for column_list in (c1, c2, c3));

EMP1
-----
jenny
rita
jessy
rahul
vivek
bala

6 rows selected.


CHANGING ROWS TO COLUMNS

chavva kiran, August 14, 2013 - 8:27 am UTC

Hi Tom,
I have a similar problem and the above solution is not working here.
Can you please help?


I have a table Employee_details having columns

create table employee_details(
employee_no number,
info_type varchar2(25),
Passport_no number,
visa_no number,
working_time_directive number,
FA_Formation number)
/

having values like

insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'passport_details',234567,null,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'visa_details',null,3456,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'passport_details',45678,null,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'worktime_details',null,null,7689,null)
/



and I want to display the data as:

employee_no passport_no visa_no working_time_no fa_formation
345618 234567 3456 7689 4567
345618 45678 5678
345618 5674

The output I require is:

For example, if for an employee I have 5 passport numbers, 2 visa numbers and 1 fa_formation, then I should get 5 rows in the output (i.e. the maximum number of values present - here passport has the most values), so:

in the first column I should get all 5 passport numbers,
in the second column I should get the 2 visa numbers,
and in the same way I should get 1 value in the fa_formation column.
The remaining cells should be NULL.

Tom Kyte
August 14, 2013 - 3:15 pm UTC

this is an ugly data model here. a variation (a bad variation) on an EAV (entity attribute value) model...

if you want 5 rows of output, why do you show (in poorly formatted format...) only three rows????

and why does your example only have 2 passport details, 1 visa detail and 1 worktime??


but in short, it sounds like this would be a simple self join.


ops$tkyte%ORA11GR2> variable empno number
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> exec :empno := 345618

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> with pp
  2  as
  3  (select passport_no, rownum r
  4     from employee_details
  5    where employee_no = :empno
  6      and info_type = 'passport_details' ),
  7  vn
  8  as
  9  (select visa_no, rownum r
 10     from employee_details
 11    where employee_no = :empno
 12      and info_type = 'visa_details' ),
 13  wt
 14  as
 15  (select working_time_directive, rownum r
 16     from employee_details
 17    where employee_no = :empno
 18      and info_type = 'worktime_details' ),
 19  pp_vn
 20  as
 21  (select coalesce( pp.r, vn.r ) r, pp.passport_no, vn.visa_no
 22    from pp full outer join vn on (pp.r = vn.r))
 23  select :empno, pp_vn.*, wt.working_time_directive
 24    from pp_vn full outer join wt on (pp_vn.r = wt.r)
 25  /

    :EMPNO          R PASSPORT_NO    VISA_NO WORKING_TIME_DIRECTIVE
---------- ---------- ----------- ---------- ----------------------
    345618          1      234567       3456                   7689
    345618          2       45678
