Thanks
Gaurav Srivastava, January 27, 2004 - 1:20 pm UTC
Many thanks Tom, this is what was required.
Mala, January 11, 2005 - 12:23 pm UTC
Tom, thanks for the excellent forum. It has been most helpful. I have a question.
I am using pivoting as demonstrated by you to transform rows into columns, but the data is offset in each column by a row. How can I fix this? This example has only 2 rows of data; the actual data will have 49, 75, or 121 measurement points on each wafer.
Here is the original data that I pulled into an external table.
<Begin>
Recipe: Diam 5mm exc
LotID: s4012926
WaferID: 1
Slot: 1
Probe: 1.6 mil Tip - Type A
Operator:
Shift:
Equipment:
Process:
Comment:
Processed: 2/3/2004 15:26
Collected: 2/3/2004 15:52
Test Pattern: Diameter Scan, 49-Site
Units: Ang
Initial Temperature: 19.4
Final Temperature: 19.53
TCR Name: <none>
TCR Value: 0
Correlation: Y=S1/S2 * A + BX' + CX'^2 + DX'^3
S1: 1
S2: 1
A: 0
B: 180
C: 0
D: 0
...
Pivoted like this:
INSERT INTO RS100_SUMMARY (RECIPE, LOTID, WAFERID, SLOT, PROBE,
OPERATOR, SHIFT, EQUIPMENT, PROCESS, COMMENTS, PROCESSED_TIME,
COLLECTED_TIME, TEST_PATTERN, UNITS, INITIAL_TEMPERATURE,
FINAL_TEMPERATURE, TCR_NAME, TCR_VALUE, DATA_POINTS, GOOD_DATA_POINTS,
MEAN, STDV, MIN, MAX, RANGE, S1, S2, A, B, C, D, X_TICK, Y_TICK)
SELECT
-- measurement_desc, reading,
 MAX (DECODE (measurement_desc, 'Recipe:', reading, NULL)) Recipe
,MAX (DECODE (measurement_desc, 'LotID:', reading, NULL)) LotID
,MAX (DECODE (measurement_desc, 'WaferID:', reading, NULL)) WaferID
,MAX (DECODE (measurement_desc, 'Slot:', reading, NULL)) Slot
,MAX (DECODE (measurement_desc, 'Probe:', reading, NULL)) Probe
,MAX (DECODE (measurement_desc, 'Operator:', reading, NULL)) Operator
,MAX (DECODE (measurement_desc, 'Shift:', reading, NULL)) Shift
,MAX (DECODE (measurement_desc, 'Equipment:', reading, NULL)) Equipment
,MAX (DECODE (measurement_desc, 'Process:', reading, NULL)) Process
,MAX (DECODE (measurement_desc, 'Comment:', reading, NULL)) Comments
,MAX (DECODE (measurement_desc, 'Processed:', TO_DATE(reading, 'mm/dd/yyyy HH24:mi'), NULL)) Processed
,MAX (DECODE (measurement_desc, 'Collected:', TO_DATE(reading, 'mm/dd/yyyy HH24:mi'), NULL)) Collected
,MAX (DECODE (measurement_desc, 'Test Pattern:', reading, NULL)) TestPattern
,MAX (DECODE (measurement_desc, 'Units:', reading, NULL)) Units
,MAX (DECODE (measurement_desc, 'Initial Temperature:', reading, NULL)) InitialTemperature
,MAX (DECODE (measurement_desc, 'Final Temperature:', reading, NULL)) FinalTemperature
,MAX (DECODE (measurement_desc, 'TCR Name:', reading, NULL)) TCRName
,MAX (DECODE (measurement_desc, 'TCR Value:', reading, NULL)) TCRValue
,MAX (DECODE (measurement_desc, 'Data Points:', reading, NULL)) DataPoints
,MAX (DECODE (measurement_desc, 'Good Data Points:', reading, NULL)) GoodPoints
,MAX (DECODE (measurement_desc, 'Mean:', reading, NULL)) Mean
,MAX (DECODE (measurement_desc, 'Stdv:', reading, NULL)) Stdv
,MAX (DECODE (measurement_desc, 'Min:', reading, NULL)) Min
,MAX (DECODE (measurement_desc, 'Max:', reading, NULL)) Max
,MAX (DECODE (measurement_desc, 'Range:', reading, NULL)) "RANGE"
,MAX (DECODE (measurement_desc, 'S1:', reading, NULL)) S1
,MAX (DECODE (measurement_desc, 'S2:', reading, NULL)) S2
,MAX (DECODE (measurement_desc, 'A:', reading, NULL)) A
,MAX (DECODE (measurement_desc, 'B:', reading, NULL)) B
,MAX (DECODE (measurement_desc, 'C:', reading, NULL)) C
,MAX (DECODE (measurement_desc, 'D:', reading, NULL)) D
,MAX (DECODE (measurement_desc, 'X'':', reading, NULL)) X
,MAX (DECODE (measurement_desc, 'Y'':', reading, NULL)) Y
FROM RS100_POST_EXT
WHERE measurement_desc NOT LIKE 'Site%'
GROUP BY ROWNUM, measurement_desc, reading;
COMMIT;
Here is the pivoted data
RECIPE LOTID WAFERID SLOT PROBE
-------------------------------------------------- --------------- --------------- ---------- -------------------------------
Diam 5mm exc
s4020247
12
2
1.6 mil Tip - Type A
I need the data to look like this: a column in the same row for each lotid, waferid, slot combination.
RECIPE LOTID WAFERID SLOT PROBE ...more columns....
Diam 5mm exc s4020247 12 2 1.6 mil Tip - Type A .. ..
Thanks
January 11, 2005 - 1:49 pm UTC
why are you grouping by rownum?
Mala, January 11, 2005 - 3:50 pm UTC
If I don't group by rownum the data sets get mixed up: column values for different data sets are displayed together.
We have a set of summary data followed by detail, which I have split into master and detail tables in the database. My thinking is that by retaining the original form I will be able to link them (master-detail) properly.
I have copied a few sets of raw data as an example.
MEASUREMENT_DESC READING RECIPE LOTID WAFERID SLOT PROBE OPERATOR SHIFT EQUIPMENT PROCESS COMMENTS PROCESSED COLLECTED TESTPATTERN UNITS INITIALTEMPERATURE FINALTEMPERATURE TCRNAME TCRVALUE DATAPOINTS GOODPOINTS MEAN STDV MIN MAX RANGE S1 S2 A B C D X Y
A: 0 0
B: 180 180
C: 0 0
D: 0 0
S1: 1 1
S2: 1 1
X': 1/X
Y': Y
Max: 10210.37851
Max: 15076.89628
Min: 13305.11518
Min: 8110.022686
Mean: 9723.01967 9723.01967
Mean: 14766.02225 14766.02225
Slot: 1 1
Slot: 2 2
Stdv: 308.852554 308.852554
Stdv: 444.282933 444.282933
LotID: s4020247 s4020247
Probe: 1.6 mil Tip - Type A 1.6 mil Tip - Type A
Range: 2100.35582
Range: 1771.781104
Shift:
Units: Ang Ang
<Begin>
Recipe: Diam 5mm exc Diam 5mm exc
Comment:
Process:
WaferID: 11 11
WaferID: 12 12
Operator:
TCR Name: <none> <none>
Collected: 2/25/2004 16:38 2/25/2004 4:38:00 PM
Collected: 2/25/2004 16:39 2/25/2004 4:39:00 PM
Equipment:
Processed: 2/25/2004 16:35
DATA SET:
Here is how the csv file looks
<Begin>
Recipe: Diam 5mm exc
LotID: s4012926
WaferID: 1
Slot: 1
Probe: 1.6 mil Tip - Type A
Operator:
Shift:
Equipment:
Process:
Comment:
Processed: 2/3/2004 15:26
Collected: 2/3/2004 15:52
Test Pattern: Diameter Scan, 49-Site
Units: Ang
Initial Temperature: 19.4
Final Temperature: 19.53
TCR Name: <none>
TCR Value: 0
Correlation: Y=S1/S2 * A + BX' + CX'^2 + DX'^3
S1: 1
S2: 1
A: 0
B: 180
C: 0
D: 0
X': 1/X
Y': Y
Current used: 127204.9228
Data Points: 49
Good Data Points: 49
Mean: 16614.03523
Stdv: 214.582959
Min: 16075.49246
Max: 16887.92103
Range: 812.428568
Site Processed Radius Theta X Y Raw Rs Status
Site 1: 16887.92103 95 270 -95 0 0.010659 Valid Measurement
Site 2: 16080.20591 91.041635 270 -91.041635 0 0.011194 Valid Measurement
Site 3: 16268.99624 87.083365 270 -87.083365 0 0.011064 Valid Measurement
Site 4: 16388.02615 83.125 270 -83.125 0 0.010984 Valid Measurement
Site 5: 16511.49997 79.166635 270 -79.166635 0 0.010901 Valid Measurement
Site 6: 16562.98184 75.208365 270 -75.208365 0 0.010868 Valid Measurement
Site 7: 16677.62823 71.25 270 -71.25 0 0.010793 Valid Measurement
Site 8: 16725.2676 67.291635 270 -67.291635 0 0.010762 Valid Measurement
Site 9: 16763.89752 63.333365 270 -63.333365 0 0.010737 Valid Measurement
Site 10: 16838.34854 59.375 270 -59.375 0 0.01069 Valid Measurement
Site 11: 16863.95693 55.416635 270 -55.416635 0 0.010674 Valid Measurement
Site 12: 16860.61769 51.458365 270 -51.458365 0 0.010676 Valid Measurement
Site 13: 16873.43403 47.5 270 -47.5 0 0.010668 Valid Measurement
Site 14: 16853.26553 43.541635 270 -43.541635 0 0.01068 Valid Measurement
Site 15: 16852.38739 39.583365 270 -39.583365 0 0.010681 Valid Measurement
Site 16: 16798.2385 35.625 270 -35.625 0 0.010715 Valid Measurement
Site 17: 16763.5619 31.666635 270 -31.666635 0 0.010738 Valid Measurement
Site 18: 16719.12616 27.708365 270 -27.708365 0 0.010766 Valid Measurement
Site 19: 16650.73537 23.75 270 -23.75 0 0.01081 Valid Measurement
Site 20: 16589.39811 19.791635 270 -19.791635 0 0.01085 Valid Measurement
Site 21: 16532.64679 15.833365 270 -15.833365 0 0.010888 Valid Measurement
Site 22: 16492.24666 11.875 270 -11.875 0 0.010914 Valid Measurement
Site 23: 16481.13848 7.916635 270 -7.916635 0 0.010922 Valid Measurement
Site 24: 16411.53215 3.958365 270 -3.958365 0 0.010968 Valid Measurement
Site 25: 16343.19825 0 90 0 0 0.011014 Valid Measurement
Site 26: 16395.72617 3.958365 90 3.958365 0 0.010978 Valid Measurement
Site 27: 16444.02546 7.916635 90 7.916635 0 0.010946 Valid Measurement
Site 28: 16493.56185 11.875 90 11.875 0 0.010913 Valid Measurement
Site 29: 16531.34928 15.833365 90 15.833365 0 0.010888 Valid Measurement
Site 30: 16596.5521 19.791635 90 19.791635 0 0.010846 Valid Measurement
Site 31: 16667.85621 23.75 90 23.75 0 0.010799 Valid Measurement
Site 32: 16687.72894 27.708365 90 27.708365 0 0.010786 Valid Measurement
Site 33: 16736.1988 31.666635 90 31.666635 0 0.010755 Valid Measurement
Site 34: 16764.84371 35.625 90 35.625 0 0.010737 Valid Measurement
Site 35: 16796.94947 39.583365 90 39.583365 0 0.010716 Valid Measurement
Site 36: 16837.45619 43.541635 90 43.541635 0 0.01069 Valid Measurement
Site 37: 16838.98362 47.5 90 47.5 0 0.010689 Valid Measurement
Site 38: 16808.82764 51.458365 90 51.458365 0 0.010709 Valid Measurement
Site 39: 16793.93182 55.416635 90 55.416635 0 0.010718 Valid Measurement
Site 40: 16763.46114 59.375 90 59.375 0 0.010738 Valid Measurement
Site 41: 16705.45007 63.333365 90 63.333365 0 0.010775 Valid Measurement
Site 42: 16644.4755 67.291635 90 67.291635 0 0.010814 Valid Measurement
Site 43: 16582.26304 71.25 90 71.25 0 0.010855 Valid Measurement
Site 44: 16495.78318 75.208365 90 75.208365 0 0.010912 Valid Measurement
Site 45: 16403.08225 79.166635 90 79.166635 0 0.010974 Valid Measurement
Site 46: 16307.64586 83.125 90 83.125 0 0.011038 Valid Measurement
Site 47: 16161.88999 87.083365 90 87.083365 0 0.011137 Valid Measurement
Site 48: 16075.49246 91.041635 90 91.041635 0 0.011197 Valid Measurement
Site 49: 16763.9348 95 90 95 0 0.010737 Valid Measurement
<Begin>
Recipe: Diam 5mm exc
LotID: s4012926
WaferID: 2
Slot: 2
Probe: 1.6 mil Tip - Type A
Operator:
Shift:
Equipment:
Process:
Comment:
Processed: 2/3/2004 15:26
Collected: 2/3/2004 15:54
Test Pattern: Diameter Scan, 49-Site
Units: Ang
Initial Temperature: 19.42
Final Temperature: 19.54
TCR Name: <none>
TCR Value: 0
Correlation: Y=S1/S2 * A + BX' + CX'^2 + DX'^3
S1: 1
S2: 1
A: 0
B: 180
C: 0
D: 0
X': 1/X
Y': Y
Current used: 125770.5626
Data Points: 49
Good Data Points: 49
Mean: 16543.59685
Stdv: 280.774239
Min: 15804.50898
Max: 16996.90008
Range: 1192.391105
Site Processed Radius Theta X Y Raw Rs Status
Site 1: 16736.00724 95 270 -95 0 0.010755 Valid Measurement
Site 2: 15804.50898 91.041635 270 -91.041635 0 0.011389 Valid Measurement
Site 3: 16024.07999 87.083365 270 -87.083365 0 0.011233 Valid Measurement
Site 4: 16183.34917 83.125 270 -83.125 0 0.011123 Valid Measurement
Site 5: 16341.45275 79.166635 270 -79.166635 0 0.011015 Valid Measurement
Site 6: 16437.73016 75.208365 270 -75.208365 0 0.01095 Valid Measurement
Site 7: 16544.92021 71.25 270 -71.25 0 0.010879 Valid Measurement
Site 8: 16646.62458 67.291635 270 -67.291635 0 0.010813 Valid Measurement
Site 9: 16708.39067 63.333365 270 -63.333365 0 0.010773 Valid Measurement
Site 10: 16758.24991 59.375 270 -59.375 0 0.010741 Valid Measurement
Site 11: 16798.28582 55.416635 270 -55.416635 0 0.010715 Valid Measurement
Site 12: 16829.20186 51.458365 270 -51.458365 0 0.010696 Valid Measurement
Site 13: 16803.51295 47.5 270 -47.5 0 0.010712 Valid Measurement
Site 14: 16774.57361 43.541635 270 -43.541635 0 0.010731 Valid Measurement
Site 15: 16691.86288 39.583365 270 -39.583365 0 0.010784 Valid Measurement
Site 16: 16996.90008 35.625 270 -35.625 0 0.01059 Valid Measurement
Site 17: 16914.51541 31.666635 270 -31.666635 0 0.010642 Valid Measurement
Site 18: 16829.55804 27.708365 270 -27.708365 0 0.010695 Valid Measurement
Site 19: 16773.64768 23.75 270 -23.75 0 0.010731 Valid Measurement
Site 20: 16675.95183 19.791635 270 -19.791635 0 0.010794 Valid Measurement
Site 21: 16579.69729 15.833365 270 -15.833365 0 0.010857 Valid Measurement
Site 22: 16473.88849 11.875 270 -11.875 0 0.010926 Valid Measurement
Site 23: 16386.44183 7.916635 270 -7.916635 0 0.010985 Valid Measurement
Site 24: 16297.20474 3.958365 270 -3.958365 0 0.011045 Valid Measurement
Site 25: 16251.85088 0 90 0 0 0.011076 Valid Measurement
Site 26: 16284.40727 3.958365 90 3.958365 0 0.011054 Valid Measurement
Site 27: 16341.76548 7.916635 90 7.916635 0 0.011015 Valid Measurement
Site 28: 16389.54172 11.875 90 11.875 0 0.010983 Valid Measurement
Site 29: 16451.60413 15.833365 90 15.833365 0 0.010941 Valid Measurement
Site 30: 16526.71662 19.791635 90 19.791635 0 0.010891 Valid Measurement
Site 31: 16579.22035 23.75 90 23.75 0 0.010857 Valid Measurement
Site 32: 16635.27173 27.708365 90 27.708365 0 0.01082 Valid Measurement
Site 33: 16687.85099 31.666635 90 31.666635 0 0.010786 Valid Measurement
Site 34: 16745.70256 35.625 90 35.625 0 0.010749 Valid Measurement
Site 35: 16771.43857 39.583365 90 39.583365 0 0.010733 Valid Measurement
Site 36: 16799.45226 43.541635 90 43.541635 0 0.010715 Valid Measurement
Site 37: 16785.04147 47.5 90 47.5 0 0.010724 Valid Measurement
Site 38: 16789.0083 51.458365 90 51.458365 0 0.010721 Valid Measurement
Site 39: 16758.37457 55.416635 90 55.416635 0 0.010741 Valid Measurement
Site 40: 16729.9727 59.375 90 59.375 0 0.010759 Valid Measurement
Site 41: 16672.52909 63.333365 90 63.333365 0 0.010796 Valid Measurement
Site 42: 16585.31765 67.291635 90 67.291635 0 0.010853 Valid Measurement
Site 43: 16487.99029 71.25 90 71.25 0 0.010917 Valid Measurement
Site 44: 16375.02262 75.208365 90 75.208365 0 0.010992 Valid Measurement
Site 45: 16291.88942 79.166635 90 79.166635 0 0.011048 Valid Measurement
Site 46: 16166.72034 83.125 90 83.125 0 0.011134 Valid Measurement
Site 47: 15965.06341 87.083365 90 87.083365 0 0.011275 Valid Measurement
Site 48: 15806.1996 91.041635 90 91.041635 0 0.011388 Valid Measurement
Site 49: 16747.73726 95 90 95 0 0.010748 Valid Measurement
<Begin>
Recipe: Diam 5mm exc
LotID: s4012926
WaferID: 3
Slot: 3
Probe: 1.6 mil Tip - Type A
Operator:
Shift:
Equipment:
Process:
Comment:
Processed: 2/3/2004 15:26
Collected: 2/3/2004 15:56
Test Pattern: Diameter Scan, 49-Site
Units: Ang
Initial Temperature: 19.46
Final Temperature: 19.58
TCR Name: <none>
TCR Value: 0
Correlation: Y=S1/S2 * A + BX' + CX'^2 + DX'^3
S1: 1
S2: 1
A: 0
B: 180
C: 0
D: 0
X': 1/X
Y': Y
Current used: 125477.9846
Data Points: 49
Good Data Points: 49
Mean: 16431.81874
Stdv: 308.12566
Min: 15658.49856
Max: 16883.92836
Range: 1225.429806
Site Processed Radius Theta X Y Raw Rs Status
Site 1: 16499.27253 95 270 -95 0 0.01091 Valid Measurement
Site 2: 15658.49856 91.041635 270 -91.041635 0 0.011495 Valid Measurement
Site 3: 15886.39359 87.083365 270 -87.083365 0 0.01133 Valid Measurement
Site 4: 16107.02482 83.125 270 -83.125 0 0.011175 Valid Measurement
Site 5: 16261.15782 79.166635 270 -79.166635 0 0.011069 Valid Measurement
Site 6: 16360.27303 75.208365 270 -75.208365 0 0.011002 Valid Measurement
Site 7: 16464.99665 71.25 270 -71.25 0 0.010932 Valid Measurement
Site 8: 16563.13264 67.291635 270 -67.291635 0 0.010868 Valid Measurement
Site 9: 16645.3572 63.333365 270 -63.333365 0 0.010814 Valid Measurement
Site 10: 16685.54234 59.375 270 -59.375 0 0.010788 Valid Measurement
Site 11: 16737.08903 55.416635 270 -55.416635 0 0.010755 Valid Measurement
Site 12: 16756.20701 51.458365 270 -51.458365 0 0.010742 Valid Measurement
Site 13: 16745.74625 47.5 270 -47.5 0 0.010749 Valid Measurement
Site 14: 16704.33964 43.541635 270 -43.541635 0 0.010776 Valid Measurement
Site 15: 16630.23179 39.583365 270 -39.583365 0 0.010824 Valid Measurement
Site 16: 16883.92836 35.625 270 -35.625 0 0.010661 Valid Measurement
Site 17: 16857.3055 31.666635 270 -31.666635 0 0.010678 Valid Measurement
Site 18: 16728.32899 27.708365 270 -27.708365 0 0.01076 Valid Measurement
Site 19: 16640.86881 23.75 270 -23.75 0 0.010817 Valid Measurement
Site 20: 16575.38339 19.791635 270 -19.791635 0 0.010859 Valid Measurement
Site 21: 16438.12688 15.833365 270 -15.833365 0 0.01095 Valid Measurement
Site 22: 16306.51974 11.875 270 -11.875 0 0.011039 Valid Measurement
Site 23: 16198.77256 7.916635 270 -7.916635 0 0.011112 Valid Measurement
Site 24: 16087.08175 3.958365 270 -3.958365 0 0.011189 Valid Measurement
Site 25: 16032.65794 0 90 0 0 0.011227 Valid Measurement
Site 26: 16073.92882 3.958365 90 3.958365 0 0.011198 Valid Measurement
Site 27: 16144.94533 7.916635 90 7.916635 0 0.011149 Valid Measurement
Site 28: 16193.52199 11.875 90 11.875 0 0.011116 Valid Measurement
Site 29: 16300.62405 15.833365 90 15.833365 0 0.011043 Valid Measurement
Site 30: 16382.59835 19.791635 90 19.791635 0 0.010987 Valid Measurement
Site 31: 16469.48003 23.75 90 23.75 0 0.010929 Valid Measurement
Site 32: 16512.12212 27.708365 90 27.708365 0 0.010901 Valid Measurement
Site 33: 16594.59204 31.666635 90 31.666635 0 0.010847 Valid Measurement
Site 34: 16636.21235 35.625 90 35.625 0 0.01082 Valid Measurement
Site 35: 16694.52102 39.583365 90 39.583365 0 0.010782 Valid Measurement
Site 36: 16742.35281 43.541635 90 43.541635 0 0.010751 Valid Measurement
Site 37: 16710.24912 47.5 90 47.5 0 0.010772 Valid Measurement
Site 38: 16724.34937 51.458365 90 51.458365 0 0.010763 Valid Measurement
Site 39: 16704.64099 55.416635 90 55.416635 0 0.010775 Valid Measurement
Site 40: 16672.2926 59.375 90 59.375 0 0.010796 Valid Measurement
Site 41: 16606.52694 63.333365 90 63.333365 0 0.010839 Valid Measurement
Site 42: 16506.18814 67.291635 90 67.291635 0 0.010905 Valid Measurement
Site 43: 16382.50474 71.25 90 71.25 0 0.010987 Valid Measurement
Site 44: 16292.12688 75.208365 90 75.208365 0 0.011048 Valid Measurement
Site 45: 16151.29101 79.166635 90 79.166635 0 0.011145 Valid Measurement
Site 46: 16027.89372 83.125 90 83.125 0 0.01123 Valid Measurement
Site 47: 15849.82679 87.083365 90 87.083365 0 0.011357 Valid Measurement
Site 48: 15671.57914 91.041635 90 91.041635 0 0.011486 Valid Measurement
Site 49: 16660.5129 95 90 95 0 0.010804 Valid Measurement
<Begin>
Recipe: Diam 5mm exc
LotID: s4012926
WaferID: 4
Slot: 4
Probe: 1.6 mil Tip - Type A
Operator:
Shift:
Equipment:
Process:
Comment:
Processed: 2/3/2004 15:26
Collected: 2/3/2004 15:57
Test Pattern: Diameter Scan, 49-Site
Units: Ang
Initial Temperature: 19.49
Final Temperature: 19.61
TCR Name: <none>
TCR Value: 0
Correlation: Y=S1/S2 * A + BX' + CX'^2 + DX'^3
S1: 1
S2: 1
A: 0
B: 180
C: 0
D: 0
X': 1/X
Y': Y
Current used: 123062.0409
Data Points: 49
Good Data Points: 49
Mean: 16280.48665
Stdv: 337.669634
Min: 15410.83989
Max: 16799.21754
Range: 1388.377648
Site Processed Radius Theta X Y Raw Rs Status
Site 1: 16521.79761 95 270 -95 0 0.010895 Valid Measurement
Site 2: 15532.00515 91.041635 270 -91.041635 0 0.011589 Valid Measurement
Site 3: 15791.07203 87.083365 270 -87.083365 0 0.011399 Valid Measurement
Site 4: 16031.63133 83.125 270 -83.125 0 0.011228 Valid Measurement
Site 5: 16184.05092 79.166635 270 -79.166635 0 0.011122 Valid Measurement
Site 6: 16299.07893 75.208365 270 -75.208365 0 0.011044 Valid Measurement
Site 7: 16382.76899 71.25 270 -71.25 0 0.010987 Valid Measurement
Site 8: 16449.87101 67.291635 270 -67.291635 0 0.010942 Valid Measurement
Site 9: 16522.47716 63.333365 270 -63.333365 0 0.010894 Valid Measurement
Site 10: 16457.59779 59.375 270 -59.375 0 0.010937 Valid Measurement
Site 11: 16799.21754 55.416635 270 -55.416635 0 0.010715 Valid Measurement
Site 12: 16786.47309 51.458365 270 -51.458365 0 0.010723 Valid Measurement
Site 13: 16739.44143 47.5 270 -47.5 0 0.010753 Valid Measurement
Site 14: 16700.46183 43.541635 270 -43.541635 0 0.010778 Valid Measurement
Site 15: 16647.88475 39.583365 270 -39.583365 0 0.010812 Valid Measurement
Site 16: 16578.82791 35.625 270 -35.625 0 0.010857 Valid Measurement
Site 17: 16498.16383 31.666635 270 -31.666635 0 0.01091 Valid Measurement
Site 18: 16441.66248 27.708365 270 -27.708365 0 0.010948 Valid Measurement
Site 19: 16346.85266 23.75 270 -23.75 0 0.011011 Valid Measurement
Site 20: 16250.10328 19.791635 270 -19.791635 0 0.011077 Valid Measurement
Site 21: 16112.02886 15.833365 270 -15.833365 0 0.011172 Valid Measurement
Site 22: 16017.83101 11.875 270 -11.875 0 0.011237 Valid Measurement
Site 23: 15977.47417 7.916635 270 -7.916635 0 0.011266 Valid Measurement
Site 24: 15836.93127 3.958365 270 -3.958365 0 0.011366 Valid Measurement
Site 25: 15762.77165 0 90 0 0 0.011419 Valid Measurement
Site 26: 15861.28023 3.958365 90 3.958365 0 0.011348 Valid Measurement
Site 27: 15908.02568 7.916635 90 7.916635 0 0.011315 Valid Measurement
Site 28: 15959.48903 11.875 90 11.875 0 0.011279 Valid Measurement
Site 29: 16052.86544 15.833365 90 15.833365 0 0.011213 Valid Measurement
Site 30: 16147.3039 19.791635 90 19.791635 0 0.011147 Valid Measurement
Site 31: 16267.86687 23.75 90 23.75 0 0.011065 Valid Measurement
Site 32: 16363.54346 27.708365 90 27.708365 0 0.011 Valid Measurement
Site 33: 16426.98104 31.666635 90 31.666635 0 0.010958 Valid Measurement
Site 34: 16520.53156 35.625 90 35.625 0 0.010896 Valid Measurement
Site 35: 16570.3787 39.583365 90 39.583365 0 0.010863 Valid Measurement
Site 36: 16611.5956 43.541635 90 43.541635 0 0.010836 Valid Measurement
Site 37: 16608.57914 47.5 90 47.5 0 0.010838 Valid Measurement
Site 38: 16620.50902 51.458365 90 51.458365 0 0.01083 Valid Measurement
Site 39: 16593.1045 55.416635 90 55.416635 0 0.010848 Valid Measurement
Site 40: 16562.06049 59.375 90 59.375 0 0.010868 Valid Measurement
Site 41: 16495.23852 63.333365 90 63.333365 0 0.010912 Valid Measurement
Site 42: 16419.56563 67.291635 90 67.291635 0 0.010963 Valid Measurement
Site 43: 16325.89487 71.25 90 71.25 0 0.011025 Valid Measurement
Site 44: 16207.21957 75.208365 90 75.208365 0 0.011106 Valid Measurement
Site 45: 16068.43844 79.166635 90 79.166635 0 0.011202 Valid Measurement
Site 46: 15918.18563 83.125 90 83.125 0 0.011308 Valid Measurement
Site 47: 15696.48028 87.083365 90 87.083365 0 0.011468 Valid Measurement
Site 48: 15410.83989 91.041635 90 91.041635 0 0.01168 Valid Measurement
Site 49: 16459.3915 95 90 95 0 0.010936 Valid Measurement
for four sets of data (four wafers from a lot, each with 49 measurement sites)
Hope that explains it.
January 11, 2005 - 4:08 pm UTC
and if you group by rownum, nothing gets "squashed"
we don't need pages and pages of data -- but do you conceptually understand how the pivot works?
You need a set of columns to pivot on -- do you have such a set? The introduction of rownum in there indicates to me that either
a) you haven't conceptually understood the query yet, or
b) you don't have anything to actually pivot on, and you cannot get there from here.
Mala, January 11, 2005 - 4:35 pm UTC
Thanks for your quick response. Sorry about all the data; I was hoping it would give you a better picture.
You are right, I have not used pivot queries before, and from browsing here it looked like they could provide the solution.
BTW, I just got your book Expert One-on-One and I will review it more.
1. I do need all the rows in the summary part of the data to be queryable columns in the table.
2. From there I need to create the detail table data with the site measurement information.
Could you please suggest a good approach?
Regards
January 11, 2005 - 4:40 pm UTC
seems like these are your pivot columns:
each lotid,
waferid, slot combination.
select lotid, waferid, slot, max( decode( .... ) ), max( decode( ... ) )
from table
group by lotid, waferid, slot
/
perhaps
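Tom's suggested shape is worth seeing run. Below is a minimal sketch in Python with sqlite3 (the table name and sample values are invented for the demo); sqlite's ANSI CASE stands in for Oracle's DECODE, but the GROUP BY key / MAX(CASE ...) mechanics are identical -- this only works when the identifying keys really exist as columns:

```python
import sqlite3

# Hypothetical miniature of the wafer readings, with the identifying
# keys (lotid, waferid, slot) already present as columns.
con = sqlite3.connect(":memory:")
con.execute("""create table readings
               (lotid text, waferid int, slot int,
                measurement_desc text, reading text)""")
con.executemany("insert into readings values (?,?,?,?,?)", [
    ("s4020247", 11, 1, "Recipe:", "Diam 5mm exc"),
    ("s4020247", 11, 1, "Probe:",  "1.6 mil Tip - Type A"),
    ("s4020247", 12, 2, "Recipe:", "Diam 5mm exc"),
    ("s4020247", 12, 2, "Probe:",  "1.6 mil Tip - Type A"),
])

# GROUP BY the keys; MAX(CASE ...) collapses each group's rows onto one
# output row -- the same shape as Oracle's MAX(DECODE(...)).
pivot = con.execute("""
    select lotid, waferid, slot,
           max(case when measurement_desc = 'Recipe:' then reading end) recipe,
           max(case when measurement_desc = 'Probe:'  then reading end) probe
      from readings
     group by lotid, waferid, slot
     order by waferid
""").fetchall()
print(pivot)
```

Each (lotid, waferid, slot) group becomes exactly one row, with Recipe and Probe side by side instead of stacked.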
Mala, January 11, 2005 - 5:46 pm UTC
Tom,
The only columns I have are 'measurement_desc' and the 'reading' in the external table.
January 11, 2005 - 7:24 pm UTC
oh, oh oh i see.... tricky tricky....
got it, not a problem, i took your data and duped it for 3 groups (2cd and 3rd observations)
ops$tkyte@ORA9IR2> drop table et;
Table dropped.
ops$tkyte@ORA9IR2> create table et
2 (text varchar2(255))
3 ORGANIZATION EXTERNAL
4 ( type oracle_loader
5 default directory data_dir
6 access parameters
7 ( fields terminated by ',' )
8 location ('et.dat')
9 )
10 /
Table created.
ops$tkyte@ORA9IR2> column label format a10
ops$tkyte@ORA9IR2> column data format a40
ops$tkyte@ORA9IR2> column lotid format a10
ops$tkyte@ORA9IR2> column waferid format a10
ops$tkyte@ORA9IR2> column slot format a10
ops$tkyte@ORA9IR2> column probe format a10
ops$tkyte@ORA9IR2> column tcr_name format a10
ops$tkyte@ORA9IR2> column units format a10
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select grp,
2 max(decode(label,'LotID', data)) LotID,
3 max(decode(label,'WaferID', data)) WaferID,
4 max(decode(label,'Slot', data)) Slot,
5 max(decode(label,'Probe', data)) Probe,
6 max(decode(label,'Units', data)) Units,
7 max(decode(label,'TCR Name', data)) TCR_name
8 from (
9 select r, label, data,
10 max(grp) over (order by r) grp
11 from (
12 select r, label, data,
13 case when lead(label) over (order by r) = 'LotID'
14 then r
15 end grp
16 from (
17 select rownum r,
18 trim(substr(text,1,instr(text,':')-1)) label,
19 trim(substr(text,instr(text,':')+1 )) data
20 from et
21 )
22 )
23 )
24 group by grp
25 /
GRP LOTID WAFERID SLOT PROBE UNITS TCR_NAME
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 s4012926 1 1 1.6 mil Ti Ang <none>
p - Type A
26 s4012926 2 1 2cd obse 1 2cd obse 1.6 mil Ti Ang 2cd ob <none> 2cd
cd observa rvation rvation p - Type A servation observati
tion 2cd obser on
vation
51 s4012926 3 1 3rd obse 1 3rd obse 1.6 mil Ti Ang 3rd ob <none> 3rd
rd observa rvation rvation p - Type A servation observati
tion 3rd obser on
vation
obviously.... :)
a)
17 select rownum r,
18 trim(substr(text,1,instr(text,':')-1)) label,
19 trim(substr(text,instr(text,':')+1 )) data
20 from et
read the data, parse it into "label", "data" and assign rownum so we can order later...
b)
12 select r, label, data,
13 case when lead(label) over (order by r) = 'LotID'
14 then r
15 end grp
16 from (
17 select rownum r,
18 trim(substr(text,1,instr(text,':')-1)) label,
19 trim(substr(text,instr(text,':')+1 )) data
20 from et
21 )
when the next row is LotID -- we know this row is the beginning of a "group" -- output a marker so we can...
c)
carry it down (the carry-down technique outlined in "Analytics to the Rescue":
https://www.oracle.com/technetwork/issue-archive/2014/14-mar/o24asktom-2147206.html
9 select r, label, data,
10 max(grp) over (order by r) grp
11 from (
12 select r, label, data,
13 case when lead(label) over (order by r) = 'LotID'
14 then r
15 end grp
16 from (
17 select rownum r,
18 trim(substr(text,1,instr(text,':')-1)) label,
19 trim(substr(text,instr(text,':')+1 )) data
20 from et
21 )
22 )
d) and then simply pivot each group....
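Steps a) through d) can be replayed end to end outside SQL*Plus. Here is a condensed sketch in Python with sqlite3 (a tiny two-group list stands in for et.dat; sqlite provides the same LEAD and running-MAX analytics, and CASE replaces DECODE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table et (r integer primary key, text text)")
# Two groups of "label: value" lines, as in the external table; each
# group starts with Recipe, immediately followed by LotID.
con.executemany("insert into et(text) values (?)", [(t,) for t in [
    "Recipe: Diam 5mm exc", "LotID: s4012926", "WaferID: 1", "Slot: 1",
    "Recipe: Diam 5mm exc", "LotID: s4012927", "WaferID: 2", "Slot: 5",
]])

rows = con.execute("""
    select grp,
           max(case when label = 'Recipe'  then data end) recipe,
           max(case when label = 'LotID'   then data end) lotid,
           max(case when label = 'WaferID' then data end) waferid,
           max(case when label = 'Slot'    then data end) slot
    from (  -- c) carry the group marker down to every row of the group
      select r, label, data, max(grp) over (order by r) grp
      from (  -- b) mark the row just before each LotID as a group start
        select r, label, data,
               case when lead(label) over (order by r) = 'LotID'
                    then r end grp
        from (  -- a) parse each line into label and data, keep the order
          select r,
                 trim(substr(text, 1, instr(text, ':') - 1)) label,
                 trim(substr(text, instr(text, ':') + 1))    data
          from et)))
    group by grp       -- d) pivot each group
    order by grp
""").fetchall()
print(rows)
```

The synthetic grp column is what gives the pivot something to group on when no real key columns exist.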
Most Brilliant!!!
Mala, January 12, 2005 - 1:35 pm UTC
Tom,
That was amazing! You are a genius -- it works just like I wanted. I am still trying to grasp the analytic functions; your explanations were very helpful.
Thank you so much for your time and this forum.. Great work
With lots of Regards
ROWS INTO COLUMNS
Fei, January 24, 2005 - 10:21 am UTC
Hi, it was a really good answer and I tried to use it, but I'm working in SQL Server, and it seems it doesn't support the DECODE function or CREATE OR REPLACE packages, so I'm stuck.
I tried to understand the MAX(DECODE(...)) logic and syntax for transforming rows into columns, since I've never used those functions. I think I've just figured out how it works, but as I said, it doesn't seem to work in SQL Server 2000. Just to be sure, and to make your work easier, this is what I have.
I have a table like this
ID, Family, Freq
1 f1 23
1 f2 4
1 f20 2
2 f3 7
2 f5 4
2 f2 3
2 f7 3
2 f8 3
.... and so on
and I want the same as the first question, i.e.:
ID Family1 Freq Family2 Freq Family3 Freq Family4 Freq...
1 f1 23 f2 4 f20 2 - -
2 f3 7 f5 4 f2 3 f7 3 ...
And I tried to use your code with simple modifications to the column names, but it didn't let me use DECODE.
I appreciate your answer.
January 24, 2005 - 11:27 am UTC
oh my gosh.
laughing totally out loud.
I would imagine many things you see on this site "won't actually work in sql server which is as different from Oracle as dirt is from orange juice"
guess you'll have to ask a sql server programmer "so, what is the functional equivalent of decode in sql server"
perhaps they have the ANSI case statement like we do as well.
decode( x, y, z )
would be
(case when x = y then z end)
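With DECODE rewritten as CASE, the pivot carries over essentially unchanged. A sketch of the questioner's family/freq table, run here against Python's sqlite3 (not SQL Server, but the CASE form is ANSI SQL and should translate):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (id int, family text, freq int)")
con.executemany("insert into t values (?,?,?)",
                [(1, "f1", 23), (1, "f2", 4), (1, "f20", 2), (2, "f3", 7)])

# DECODE(family, 'f1', freq) becomes CASE WHEN family = 'f1' THEN freq END;
# wrapped in MAX() and grouped by id, it pivots rows into columns.
rows = con.execute("""
    select id,
           max(case when family = 'f1' then freq end) f1,
           max(case when family = 'f2' then freq end) f2,
           max(case when family = 'f3' then freq end) f3
      from t
     group by id
     order by id
""").fetchall()
print(rows)
```

Families a given id does not have simply come back as NULL in that column.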
Brilliant!
VA, January 24, 2005 - 2:35 pm UTC
I am confused :)
A reader, January 24, 2005 - 10:11 pm UTC
Tom, can you also handle SQL Server disability questions?
Maybe MS SQL Server can use an Oracle wrapper behind the scenes; then the solution will definitely work.
Just kidding. Really funny -- SQL Server guys asking for Oracle solutions to SQL Server problems.
I am confused :)
So for your solution to work:
1) Crucial step: migrate the DB from SQL Server to Oracle
2) Run Tom's query.
Maybe that's what he had in mind. See, people can go to great lengths to solve problems; maybe that's the best thing to do here :)
Fei, January 26, 2005 - 2:38 pm UTC
I'm not a SQL SERVER guy!!!
In fact I really hate it... but I have to do this, and while googling I found this answer and asked the question; sorry, I didn't notice the name of the URL.
Well, thanks anyway. And you are right, CASE is the equivalent.
Not a pivot
bakunian, March 04, 2005 - 12:47 pm UTC
Tom: I was trying to come up with a query that returns a result set like this.
GRANTEE GRANTEE_ROLE
DBA SELECT_CATALOG_ROLE, DELETE_CATALOG_ROLE, ....
I was asked to create a query where all of the roles granted to a user would be comma-delimited and on the same line as the user's name.
I was trying pivot and other kinds of queries, but without success.
We have an application table that has only three application-specific roles; however, its structure is very similar to DBA_ROLE_PRIVS. Any suggestions are greatly appreciated.
March 04, 2005 - 1:56 pm UTC
stragg
search for it on this site.
It is a pivot after all
bakunian, March 04, 2005 - 1:17 pm UTC
Sorry Tom: Your site helped me figure it out.
select grantee, max(decode(granted_role, 'SELECT_CATALOG_ROLE', granted_role, null)) ||', '||
max(decode(granted_role, 'DELETE_CATALOG_ROLE', granted_role, null)) ||', '||
max(decode(granted_role, 'EXECUTE_CATALOG_ROLE', granted_role, null))
from dba_role_privs
group by grantee
/
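Hard-coding one MAX(DECODE(...)) branch per role only works while the role list is fixed; a string aggregate handles any number of roles, which is what Tom's stragg pointer is about (stragg is a user-defined aggregate from this site, and later Oracle releases added LISTAGG). A sketch of the shape using sqlite's built-in group_concat from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table role_privs (grantee text, granted_role text)")
con.executemany("insert into role_privs values (?,?)", [
    ("DBA",   "SELECT_CATALOG_ROLE"),
    ("DBA",   "DELETE_CATALOG_ROLE"),
    ("SCOTT", "CONNECT"),
])

# One row per grantee, with the roles collapsed into a comma-delimited
# string -- no per-role DECODE branches needed.
rows = con.execute("""
    select grantee, group_concat(granted_role, ', ') roles
      from role_privs
     group by grantee
     order by grantee
""").fetchall()
print(rows)
```

Note that group_concat makes no promise about the order of the concatenated values; in Oracle 11gR2 and later the equivalent would be LISTAGG(granted_role, ', ') WITHIN GROUP (ORDER BY granted_role), which does.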
Merging two rows into one
A reader, March 04, 2005 - 4:20 pm UTC
Tom,
I have a result set obtained by joining two tables. Here are the results (from joining the Emp and Dept tables; Emp has an extra column called status_cd):
Deptno Empno Status_cd
10 1234 5
10 1234 6
20 3456 5
30 1345 5
Is it possible to get something like this from that result:
Deptno Empno Status_desc
10 1234 in both statuses
20 3456 only in 5
30 1345 only in 5
I am not sure if this question belongs in this thread, but I couldn't find a related thread for it.
Thanks for your help,
March 04, 2005 - 6:21 pm UTC
take the query you have already (pretend it is "Q") and:
ops$tkyte@ORA9IR2> select deptno, empno,
2 case when count(*) = 1
3 then 'In status ' || max(status_cd)
4 else 'In ' || count(*) || ' statuses'
5 end status
6 from ("Q") --- "q" is your current query...
7 group by deptno, empno
8 /
DEPTNO EMPNO STATUS
---------- ---------- ----------------------------------------------------
10 1234 In 2 statuses
30 1345 In status 5
20 3456 In status 5
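The CASE-over-aggregate trick runs anywhere; here is a Python/sqlite3 sketch using the sample rows from the question in place of "Q":

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table q (deptno int, empno int, status_cd int)")
con.executemany("insert into q values (?,?,?)",
                [(10, 1234, 5), (10, 1234, 6), (20, 3456, 5), (30, 1345, 5)])

# One group per (deptno, empno); the CASE inspects the group's row count
# to decide which description to emit.
rows = con.execute("""
    select deptno, empno,
           case when count(*) = 1
                then 'In status ' || max(status_cd)
                else 'In ' || count(*) || ' statuses'
           end status
      from q
     group by deptno, empno
     order by deptno
""").fetchall()
print(rows)
```

The aggregate collapses the duplicate (10, 1234) rows into one, and the CASE turns the count into the wording the questioner asked for.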
Merging two rows into one
A reader, March 04, 2005 - 11:33 pm UTC
Thanks Tom, that's exactly what I was looking for. The solution is really interesting (I've never seen that before).
Thanks a bunch.
Rows to columns
A reader, April 12, 2005 - 10:12 pm UTC
create table c
(
c_pk number primary key,
c_data varchar2(10)
);
insert into c values (1,'one');
insert into c values (2,'two');
insert into c values (3,'three');
insert into c values (4,'four');
insert into c values (5,'five');
create table e
(
e_pk number primary key,
e_data varchar2(10)
);
insert into e values (1,'e_one');
insert into e values (2,'e_two');
insert into e values (3,'e_three');
insert into e values (4,'e_four');
create table c_e
(
c_pk number references c,
e_pk number references e,
c_e_data varchar2(10),
constraint c1 primary key (c_pk,e_pk)
);
insert into c_e values (1,1,'one_one');
insert into c_e values (1,2,'one_two');
insert into c_e values (2,1,'two_one');
2 tables and a many-many resolver table.
Given an input of c_pk in (1,2,3,4) and e_pk in (1,2,3), I want the following output:
1 one_one one_two null
2 two_one null null
3 null null null
4 null null null
The first column above represents the c_pks provided. The other 3 columns represent the data corresponding to the 3 e_pk's provided. If only e_pk in (1,2) were provided, the output should have 1+2=3 columns. Hope I have explained this clearly.
Help? Thanks
April 13, 2005 - 9:01 am UTC
what if e_pk was in (1,3,4) instead.
A reader, April 12, 2005 - 11:02 pm UTC
SQL> SELECT
2 oj.c_pk,
3 max(decode(oj.e_pk,1,c_e.c_e_data)) c1,
4 max(decode(oj.e_pk,2,c_e.c_e_data)) c2,
5 max(decode(oj.e_pk,3,c_e.c_e_data)) c3
6 FROM c_e,(SELECT c_pk,e_pk FROM c,e
7 WHERE c.c_pk IN (1,2,3,4)
8 AND e.e_pk IN (1,2,3)) oj
9 WHERE 1=1
10 AND oj.c_pk=c_e.c_pk(+)
11 AND oj.e_pk=c_e.e_pk(+)
12 GROUP BY oj.c_pk
13 /
C_PK C1 C2 C3
---------- ---------- ---------- ----------
1 one_one one_two
2 two_one
3
4
seems to do it. Is this how you would do it?
Thanks
A reader, April 13, 2005 - 9:33 am UTC
"what if e_pk was in (1,3,4) instead"
Well, the output would still contain 1+3=4 columns and it would look like
1 one_one null null
2 two_one null null
3 null null null
4 null null null
Basically, the e_pk's determine which rows from e (and c_e of course) to show in the output
April 13, 2005 - 9:39 am UTC
what if it was 1,2,3,4 then? (make the problem generic -- is it always "3x4" or "nxm"?)
3xm -- OK
nxm -- you need to write a dynamic query for each value of N
A reader, April 13, 2005 - 9:41 am UTC
Any number of c_pk's are provided and up to 3 (1 or 2 or 3) e_pk's are provided.
Output will be NxM (where N is number of e_pk's provided and M is 1 or 2 or 3)
A reader, April 13, 2005 - 9:42 am UTC
Output will be NxM (where N is number of e_pk's provided and M is 1 or 2 or 3)
I meant c_pk's above
April 13, 2005 - 9:55 am UTC
then M will always be 3, else you need 3 queries.
ops$tkyte@ORA9IR2> variable bv1 number
ops$tkyte@ORA9IR2> variable bv2 number
ops$tkyte@ORA9IR2> variable bv3 number
ops$tkyte@ORA9IR2> variable c varchar2(20)
ops$tkyte@ORA9IR2> exec :bv1 := 1; :bv2 := 2; :bv3 := 3; :c := '1,2,3,4';
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select c_pk,
2 max( decode( e_pk, :bv1, c_e_data ) ) c1,
3 max( decode( e_pk, :bv2, c_e_data ) ) c2,
4 max( decode( e_pk, :bv3, c_e_data ) ) c3
5 from (select c.c_pk, c_e.e_pk, c_e.c_e_data
6 from c left join c_e on ( c.c_pk = c_e.c_pk )
7 )
8 where c_pk in ( select * from TABLE(cast( str2tbl(:c) as str2tblType) ) )
9 group by c_pk
10 /
C_PK C1 C2 C3
---------- ---------- ---------- ----------
1 one_one one_two
2 two_one
3
4
ops$tkyte@ORA9IR2> select c_pk,
2 (select c_e_data from c_e where c_pk = c.c_pk and e_pk = :bv1) c1,
3 (select c_e_data from c_e where c_pk = c.c_pk and e_pk = :bv2) c2,
4 (select c_e_data from c_e where c_pk = c.c_pk and e_pk = :bv3) c3
5 from c
6 where c_pk in ( select * from TABLE(cast( str2tbl(:c) as str2tblType) ) )
7 /
C_PK C1 C2 C3
---------- ---------- ---------- ----------
1 one_one one_two
2 two_one
3
4
see the query plans with temporary tables article
https://www.oracle.com/technetwork/issue-archive/2014/14-sep/o54asktom-2279184.html for a hint on a hint that might also be useful (cardinality) and search this site for str2tbl to see that.
HTML DB
A reader, April 16, 2005 - 12:08 pm UTC
April 16, 2005 - 2:19 pm UTC
doesn't look like a sql problem to me.
Can I make a row always appear? sure, easy.
Can htmldb "know" to insert a row versus try to update a row that was just made up? Doubt it (that would be some procedural logic there...)
HTML DB
A reader, April 16, 2005 - 7:00 pm UTC
Hm, I agree. I hope someone on the HTML DB forum has an idea. But you know HTML DB quite well, can you think of a way to do this? Thanks
April 16, 2005 - 7:03 pm UTC
like I said, you would have to program it. you would have your own custom "do the update" routine that would have to try to update the record and if not found insert it, or look at some attribute of the record and know "this is to be inserted"
SQL%ROWCOUNT
A reader, April 21, 2005 - 3:53 pm UTC
Quick followup:
create table t
(
pk int primary key,
flag varchar2(1) check (flag in ('Y','N'))
);
I have a list of pks and flags. I want to update the flag to Y only if it isn't already Y. But if the record isn't there at all, I want to insert it.
update t set flag='Y'
where pk=p_pk and flag!='Y';
if (sql%rowcount=0) then
insert into t values (p_pk,'Y');
end if;
The update can return sql%rowcount=0 either because
a) the record doesn't exist, or
b) the flag is already Y
If it is a), the insert is fine. But if it is b), the insert would fail with a PK violation.
How would you handle this?
Thanks
April 22, 2005 - 10:02 am UTC
remove the flag != 'Y'?
A reader, April 22, 2005 - 11:31 am UTC
But I want to update the flag to Y only if it isnt already Y.
April 22, 2005 - 12:30 pm UTC
your choice then. path of least resistance = update it.
either:
a)
look for the record
if exists update
else insert
b)
update
if rowcount = 0
then
insert
exception to ignore dup_val error
end if
c)
just update it (it is a single character after all, no big deal)
if rowcount = 0
then
insert
end if
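Option (c) can be sketched outside Oracle too -- here in Python with sqlite3, purely to make the update-then-insert flow concrete (in real Oracle PL/SQL you would test sql%rowcount exactly as the question shows; the table matches the one created above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (pk integer primary key,"
            " flag text check (flag in ('Y','N')))")
con.execute("insert into t values (1, 'N')")

def set_flag_y(con, p_pk):
    # option (c): just update it; if no row was touched, insert
    cur = con.execute("update t set flag = 'Y' where pk = ?", (p_pk,))
    if cur.rowcount == 0:
        con.execute("insert into t values (?, 'Y')", (p_pk,))

set_flag_y(con, 1)   # row exists -> updated
set_flag_y(con, 2)   # row missing -> inserted
rows = con.execute("select pk, flag from t order by pk").fetchall()
print(rows)   # [(1, 'Y'), (2, 'Y')]
```

Because the update has no `flag != 'Y'` predicate, rowcount = 0 can only mean "no such row", so the insert never hits a PK violation.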
Rows to columns when values are text
Jairo Ojeda, April 22, 2005 - 2:42 pm UTC
Hi Tom, I need your help to write a query like these,
DROP TABLE T1;
CREATE TABLE T1
( CLIENT NUMBER(5), IDPHONE NUMBER(1), PHONE VARCHAR2(10) );
INSERT INTO T1(CLIENT, IDPHONE, PHONE) VALUES(20588, 1, '249-3501');
INSERT INTO T1(CLIENT, IDPHONE, PHONE) VALUES(20588, 2, '296-5301');
INSERT INTO T1(CLIENT, IDPHONE, PHONE) VALUES(20588, 3, '382-4411');
INSERT INTO T1(CLIENT, IDPHONE, PHONE) VALUES(20588, 3, '384-6371');
INSERT INTO T1(CLIENT, IDPHONE, PHONE) VALUES(20613, 1, '551-43-97');
INSERT INTO T1(CLIENT, IDPHONE, PHONE) VALUES(20613, 4, '233-33-33');
COMMIT;
SELECT client,
DECODE(idphone, 1, phone) home,
DECODE(idphone, 2, phone) office,
DECODE(idphone, 3, phone) selfphone,
DECODE(idphone, 4, phone) fax
FROM t1;
but I don't know how to hide the NULLs. It would look like,
SELECT client,
SUM(DECODE(idphone, 1, REPLACE(phone, '-'))) home,
SUM(DECODE(idphone, 2, REPLACE(phone, '-'))) office,
SUM(DECODE(idphone, 3, REPLACE(phone, '-'))) selfphone,
SUM(DECODE(idphone, 4, REPLACE(phone, '-'))) fax
FROM t1
GROUP BY client;
April 22, 2005 - 3:42 pm UTC
max, not sum
with all rows
Jairo Ojeda, April 22, 2005 - 3:55 pm UTC
But I need to see ALL phone values; I need an output like,
CLIENT HOME OFFICE SELFPHONE FAX
20588 249-3501 296-5301 384-6371
20588 382-4411
20613 551-43-97 233-33-33
April 22, 2005 - 4:27 pm UTC
ops$tkyte@ORA9IR2> select client, rn,
2 max(decode(idphone,1,phone)) home,
3 max(decode(idphone,2,phone)) office,
4 max(decode(idphone,3,phone)) selfphone,
5 max(decode(idphone,4,phone)) fax
6 from (
7 select client, phone, idphone, row_number() over (partition by client, idphone order by phone) rn
8 from t1
9 )
10 group by client, rn
11 /
CLIENT RN HOME OFFICE SELFPHONE FAX
---------- ---------- ---------- ---------- ---------- ----------
20588 1 249-3501 296-5301 382-4411
20588 2 384-6371
20613 1 551-43-97 233-33-33
analytics rock, they roll, they are the coolest thing to happen to SQL since the keyword SELECT
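The row_number-then-group trick runs anywhere with window functions. A self-contained sketch in Python's sqlite3 (not Oracle -- DECODE is replaced by the equivalent CASE, and the data mirrors the T1 inserts above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t1 (client integer, idphone integer, phone text)")
con.executemany("insert into t1 values (?,?,?)", [
    (20588, 1, '249-3501'), (20588, 2, '296-5301'),
    (20588, 3, '382-4411'), (20588, 3, '384-6371'),
    (20613, 1, '551-43-97'), (20613, 4, '233-33-33'),
])

# row_number() turns duplicate (client, idphone) pairs into extra output rows
sql = """
select client, rn,
       max(case idphone when 1 then phone end) home,
       max(case idphone when 2 then phone end) office,
       max(case idphone when 3 then phone end) selfphone,
       max(case idphone when 4 then phone end) fax
from (select client, idphone, phone,
             row_number() over (partition by client, idphone order by phone) rn
      from t1)
group by client, rn
order by client, rn
"""
rows = con.execute(sql).fetchall()
for r in rows:
    print(r)
```

The second selfphone for client 20588 spills onto its own rn=2 row, exactly as in the SQL*Plus output above.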
Cool !!!
Jairo Ojeda, April 22, 2005 - 5:07 pm UTC
Thanks a lot. I had read about analytic functions but I didn't understand them, and I never thought they would be my solution today.
Thanks Tom,
can not do using this....(PIVOT)
Peru, May 19, 2005 - 9:31 am UTC
Hi Tom,
We are working on an Access to Oracle conversion. There is a query which needs to be converted to Oracle.
Though this is big, I am including the inserts.
------ DATA ----
CREATE TABLE STAFF (
STAFF_ID NUMBER (11),
STAFFLASTNAME VARCHAR2 (4000),
STAFFFIRSTNAME VARCHAR2 (4000),
STAFFDEPT_FK VARCHAR2 (4000),
STAFFTITLE VARCHAR2 (4000),
STAFFUSERID VARCHAR2 (4000))
/
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2110, 'Meskin, Ilana', 'Ilana', '4103', 'Sr Dir Human Resources', 'imeskin');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2112, 'Romero, Leo', 'Leo', '4910', 'Validation Eng I', 'lromero');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2113, 'Marian, Mihaela', 'Mihaela', '4640', 'Assoc Mgr Opns Reg Compl', 'mmarian');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2116, 'Baclaski, Jeff', 'Jeffrey', '3732', 'Principal Engineer', 'jbaclask');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2117, 'Miller, Janis', 'Janis', '3760', 'Scientist', 'jfuller');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2144, 'Yoshimoto, Marlo', 'Marlo', '3730', 'Sr Associate Scientist', 'marloy');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2153, 'Montes, Kiko', 'Cuitlahuatl', '3732', 'Supv II Pilot Plant', 'kmontes');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2154, 'Mead, Carol', 'Carol', '4830', 'QAL Associate III', 'cmead');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2159, 'Porras, Michael', 'Michael', '4632', 'Constr Engineer III', 'mporras');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2168, 'Ma, Yu-Heng', 'Yu-Heng', '3760', 'Scientist', 'yma');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2170, 'Ysais, Robert', 'Robert', '3304', 'Assoc Mgr Mfg', 'rysais');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2171, 'Santelli, Jayne', 'Jayne', '4823', 'Mgr Corp Quality Systems', 'jaynes');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2172, 'Meng, Tina', 'Shi-Yuan', '3720', 'Sr Principal Scientist', 'tmeng');
INSERT INTO STAFF ( STAFF_ID, STAFFLASTNAME, STAFFFIRSTNAME, STAFFDEPT_FK, STAFFTITLE,
STAFFUSERID ) VALUES (
2173, 'Pantess, Vincent', 'Vincent', '3304', 'Mfg Specialist II', 'vpantess');
CREATE TABLE STAFFROLE (
SR_ID NUMBER (11) NOT NULL,
SR_SPONSORCODE VARCHAR2 (4000),
SR_DESC VARCHAR2 (4000))
/
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
-2010208894, 'CV', 'Validation Lead');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
-832089475, 'BP', 'Business Process Owner');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
-488754689, 'VL', 'Validation Lead');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
-367269764, 'na', 'Team Member');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
1, 'TL', 'Team Lead');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
2, 'CL', 'Client');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
3, 'CH', 'Champion');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
4, 'BB', 'Black Belt');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
7, 'SME', 'Subject Matter Expert');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
8, 'GB', 'Green Belt');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
9, 'EC', 'Executive Champion');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
10, 'STH', 'Stakeholder');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
13, 'SC', 'Steering Committee');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
176541178, 'TR', 'Training Lead');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
1090007649, 'PM', 'Project Manager');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
1545110821, 'FL', 'Financial Lead');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
1636171103, 'QS', 'Quality System Owner');
INSERT INTO STAFFROLE ( SR_ID, SR_SPONSORCODE, SR_DESC ) VALUES (
2030030930, 'IS', 'IS Lead');
CREATE TABLE PROJECTRESOURCE (
PR_ID NUMBER (11) NOT NULL,
PR_PS_FK NUMBER (11),
PR_STAFF_FK NUMBER (11),
PR_TBD_FLAG NUMBER (1),
PR_ROLE_FK NUMBER (11),
PR_DEPT_FK VARCHAR2 (4000),
PR_RESOURCETYPE_FK NUMBER (11),
PR_SPONSORFLAG NUMBER (1),
PR_CHANGEFLAG VARCHAR2 (4000))
/
INSERT INTO PROJECTRESOURCE ( PR_ID, PR_PS_FK, PR_STAFF_FK, PR_TBD_FLAG, PR_ROLE_FK, PR_DEPT_FK,
PR_RESOURCETYPE_FK, PR_SPONSORFLAG, PR_CHANGEFLAG ) VALUES (
-2083323726, 1112142177, 2144, 0, 7, '3730', -1571072680, 0, NULL);
INSERT INTO PROJECTRESOURCE ( PR_ID, PR_PS_FK, PR_STAFF_FK, PR_TBD_FLAG, PR_ROLE_FK, PR_DEPT_FK,
PR_RESOURCETYPE_FK, PR_SPONSORFLAG, PR_CHANGEFLAG ) VALUES (
-1595702725, 1112142177, 2116, 0, 7, '3730', -1571072680, 0, NULL);
INSERT INTO PROJECTRESOURCE ( PR_ID, PR_PS_FK, PR_STAFF_FK, PR_TBD_FLAG, PR_ROLE_FK, PR_DEPT_FK,
PR_RESOURCETYPE_FK, PR_SPONSORFLAG, PR_CHANGEFLAG ) VALUES (
-248050869, 1112142177, 2116, 0, 7, '3732', -1571072680, 0, NULL);
INSERT INTO PROJECTRESOURCE ( PR_ID, PR_PS_FK, PR_STAFF_FK, PR_TBD_FLAG, PR_ROLE_FK, PR_DEPT_FK,
PR_RESOURCETYPE_FK, PR_SPONSORFLAG, PR_CHANGEFLAG ) VALUES (
-24777811, 1112202605, 2173, 0, 7, '3304', -764152196, 0, 'P');
INSERT INTO PROJECTRESOURCE ( PR_ID, PR_PS_FK, PR_STAFF_FK, PR_TBD_FLAG, PR_ROLE_FK, PR_DEPT_FK,
PR_RESOURCETYPE_FK, PR_SPONSORFLAG, PR_CHANGEFLAG ) VALUES (
853550083, 1106778468, 2116, 0, 7, '3730', 93489626, 0, 'P');
------ACCESS QUERY--------
TRANSFORM Min(Staff.StaffLastName) AS MinOfStaffLastName
SELECT ProjectResource.PR_PS_FK
FROM StaffRole INNER JOIN (Staff INNER JOIN ProjectResource ON Staff.Staff_Id = ProjectResource.PR_Staff_FK) ON StaffRole.SR_Id = ProjectResource.PR_Role_FK
WHERE (((StaffRole.SR_SponsorCode)<>"na"))
GROUP BY ProjectResource.PR_PS_FK
PIVOT StaffRole.SR_SponsorCode In ("EC","CH","PM","TL","BB","GB","TR","IS");
OUT PUT format
PR_PS_FK EC CH PM TL BB GB TR IS
-------- -- -- ---- --- --- --- --- ---
Kindly help us to achieve this. I have tried your method but could not make it work.
May 19, 2005 - 10:46 am UTC
how about an example of the output you expect from this?
for you see, I've no idea what "transform" means.
One more solution
A reader, May 19, 2005 - 12:23 pm UTC
May 19, 2005 - 1:51 pm UTC
Ouch -- not a bind variable in place. Ouch Ouch Ouch.
Avoid that approach please.
stragg
search for that on this site.
Good
Mahmood Lebbai., June 16, 2006 - 2:51 pm UTC
The examples given could have been simpler for quicker understanding. Most people come up with very big questions. It would be easier if the questions asked were precise, specific and to-the-point, instead of beating around the bush. It would be a great help in understanding for budding Oracle DBAs and developers.
June 16, 2006 - 7:20 pm UTC
the examples or the questions.
You see, I control the examples - YOU guys control the questions :)
In a single query
Anvar, August 24, 2006 - 9:51 am UTC
Tom,
How can we get the rows into columns using a single query?
For e.g., the requirement is to have the following output.
PRESIDENT MANAGER ANALYST CLERK SALESMAN
------------------------------------------------
KING BLAKE FORD ADAMS ALLEN
CLARK SCOTT JAMES MARTIN
JONES MILLER TURNER
SMITH WARD
August 27, 2006 - 7:52 pm UTC
scott%ORA10GR2> select rn,
2 max( decode( job, 'PRESIDENT', ename )) pres,
3 max( decode( job, 'MANAGER', ename )) mgr,
4 max( decode( job, 'ANALYST', ename )) analyst,
5 max( decode( job, 'CLERK', ename )) clerk,
6 max( decode( job, 'SALESMAN', ename )) sales
7 from (
8 select ename,
9 job,
10 row_number() over (partition by job order by ename) rn
11 from emp
12 )
13 group by rn
14 order by rn
15 /
RN PRES MGR ANALYST CLERK SALES
---------- ---------- ---------- ---------- ---------- ----------
1 KING BLAKE FORD ADAMS ALLEN
2 CLARK SCOTT JAMES MARTIN
3 JONES MILLER TURNER
4 SMITH WARD
tbl2str, for an arbitrary number of rows?
Duke Ganote, September 21, 2006 - 12:57 pm UTC
Just for practice (and to handle an arbitrary number of columns), I reversed STR2TBL into TBL2STR:
create or replace type myTableType
as table of number
/
create or replace
function tbl2str( p_in IN myTableType )
return varchar2
as
l_str VARCHAR2(4000);
begin
FOR l_n IN 1..p_in.count loop
l_str := l_str
|| CASE WHEN l_str is not null
THEN ',' END
|| p_in( l_n );
end loop;
return l_str;
end;
/
select tbl2str(
cast(
multiset( select level as lvl
from dual
connect by level < 10
) as myTableType
)
) "one long string"
from dual
/
one long string
------------------
1,2,3,4,5,6,7,8,9
TBL2STR ~~ STRAGG
Duke Ganote, October 02, 2006 - 4:05 pm UTC
How do you fetch the refcursor if you don't know how many columns you have?
Michael Hanson, October 05, 2006 - 7:52 pm UTC
Near the beginning of the thread you gave a perfect example of how I need to display rows as columns. The only problem is: how do you fetch the ref cursor into something when you don't know exactly how many columns you are going to return? In your example you printed the contents of the ref cursor by doing the following:
"variable x refcursor
set autoprint on
exec pivot.data( :x );"
That's great for displaying in SQL*Plus, but what about a stored procedure? I need to fetch the ref cursor in a procedure and do a lot more processing. I did read another one of your threads where you said if you don't know how many columns you're returning, you can't use a ref cursor. You suggested using DBMS_SQL. I looked that up as well and it's still not clear how to fetch the data into something when you don't know how many columns you're returning. Is there any way to make your rows-into-columns example work within a procedure? Can you give me an example?
October 06, 2006 - 8:37 am UTC
if you need to procedurally process a result set in plsql, you use dbms_sql. All other 3gls have the ability to retrieve a ref cursor and "describe" it (so the program can figure out "hey, I got 15 columns this time").
So, if you need to do this in plsql, you'll need to rewrite it using dbms_sql. There you can ask "how many columns" and get the "i'th column".
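For illustration only, here is that "describe" idea in a 3GL -- Python's sqlite3 standing in for any DB-API client: the program discovers the column count and names from the cursor metadata at run time, analogous to what dbms_sql.describe_columns gives you in PL/SQL (the query is made up):

```python
import sqlite3

# any DB-API client can "describe" a result set at run time
con = sqlite3.connect(":memory:")
cur = con.execute("select 1 as empno, 'KING' as ename, 5000 as sal")

# discovered after parse, not hard-coded in the program
cols = [d[0] for d in cur.description]
print(cols)             # ['empno', 'ename', 'sal']
print(cur.fetchone())   # (1, 'KING', 5000)
```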
The solutions helped me a lot.
vijay, November 28, 2006 - 5:51 am UTC
The ideas, tricks given by Tom really helps a lot. Thanks very much !
Also: look out for the resource consumption.....
Jens, November 29, 2006 - 8:23 am UTC
We had a similar rows-to-columns problem, with a fixed number of columns (actually not fixed, just maximum 7), and came up with the 2 following solutions (in that order):
set timing on;
set autotrace traceonly statistics;
--/*
select v1.td_info_id, v1.value val1, v2.value val2, v3.value val3, v4.value val4, v5.value val5, v6.value val6, v7.value val7
from labdb_td_results v1
left join labdb_td_results v2 on ((nvl(v1.result_count,-1)+1) = v2.result_count and v1.td_info_id = v2.td_info_id and v2.field_id=2)
left join labdb_td_results v3 on ((nvl(v1.result_count,-1)+2) = v3.result_count and v1.td_info_id = v3.td_info_id and v3.field_id=3)
left join labdb_td_results v4 on ((nvl(v1.result_count,-1)+3) = v4.result_count and v1.td_info_id = v4.td_info_id and v4.field_id=4)
left join labdb_td_results v5 on ((nvl(v1.result_count,-1)+4) = v5.result_count and v1.td_info_id = v5.td_info_id and v5.field_id=5)
left join labdb_td_results v6 on ((nvl(v1.result_count,-1)+5) = v6.result_count and v1.td_info_id = v6.td_info_id and v6.field_id=6)
left join labdb_td_results v7 on ((nvl(v1.result_count,-1)+6) = v7.result_count and v1.td_info_id = v7.td_info_id and v7.field_id=7)
where v1.field_id = 1
and v1.td_info_id=235;
892 rows selected.
Elapsed: 00:00:00.42
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
519 consistent gets
55 physical reads
0 redo size
23409 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
892 rows processed
versus the analytics:
set timing on;
set autotrace traceonly statistics;
select td_info_id, rn
, max(decode(field_id, 1, value)) val1
, max(decode(field_id, 2, value)) val2
, max(decode(field_id, 3, value)) val3
, max(decode(field_id, 4, value)) val4
, max(decode(field_id, 5, value)) val5
, max(decode(field_id, 6, value)) val6
, max(decode(field_id, 7, value)) val7
from (
select td_info_id, field_id, value, row_number() over (partition by td_info_id, field_id order by result_id) rn
from labdb_td_results
where td_info_id=235
)
group by td_info_id, rn
order by td_info_id, rn;
892 rows selected.
Elapsed: 00:00:00.15
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
57 consistent gets
0 physical reads
0 redo size
24170 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
892 rows processed
So half the elapsed time, 10% of the consistent gets in the analytics case. This is in version 10.2.0.1.0.
I haven't bothered to post the entire test-case (and yes, we do use bind-variables in the application query..), my point just being:
Make sure to measure performance and resource consumption of the various alternatives.
Jens
A reader, March 04, 2007 - 11:30 pm UTC
Hi Tom,
I have a table with a huge volume of data and want to split each single row into multiple rows -- I mean split based on (acc1, id1) and (acc2, id2).
ex:
SELECT acc1,id1,acc2, id2 FROM TGDW_ACCT
acc1 id1 acc2 id2
6000161114 002 6000251684 001
6000161115 001 6000251687 004
Expecting o/p
------------
acc1 id1
6000161114 002
6000251684 001
6000161115 001
6000251687 004
Thanks and Regards,
MR
March 05, 2007 - 12:40 pm UTC
with data
as
(select 1 r from dual union all select 2 r from dual)
select decode( r, 1, id1, 2, id2 ) id,
decode( r, 1, acc1, 2, acc2 ) acc
from tgdw_acct, data
/
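Tom's row-doubling trick is portable. A minimal sketch in Python's sqlite3 (not Oracle -- CASE stands in for DECODE, and the sample rows come from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table tgdw_acct (acc1 text, id1 text, acc2 text, id2 text)")
con.executemany("insert into tgdw_acct values (?,?,?,?)", [
    ('6000161114', '002', '6000251684', '001'),
    ('6000161115', '001', '6000251687', '004'),
])

# cross join each source row with a two-row generator; copy r=1 picks
# (acc1, id1), copy r=2 picks (acc2, id2) -- same idea as the decode above
sql = """
with data(r) as (select 1 union all select 2)
select case r when 1 then acc1 else acc2 end acc,
       case r when 1 then id1  else id2  end id
from tgdw_acct, data
order by acc
"""
rows = con.execute(sql).fetchall()
print(rows)
```

Each source row becomes two output rows, so 2 rows in gives 4 rows out.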
If new rows are added every now and then
A reader, June 28, 2007 - 10:18 am UTC
I have this table
Class Teacher
A Margaritta Gonson
A Eric Billy
A Faruq Omar
B Antouan Schensez
B Michael Gabriella
C Haluk Gumuskaya
D Gokhan Yavuz
D Banu Diri
I get the result below after running this:
SELECT c.class_name,
MAX(decode(r, 1, c.teacher_name, NULL)) TEACHER_1,
MAX(decode(r, 2, c.teacher_name, NULL)) TEACHER_2,
MAX(decode(r, 3, c.teacher_name, NULL)) TEACHER_3
FROM
(SELECT row_number() over(PARTITION BY class_name ORDER BY teacher_name) r,class_name,teacher_name FROM classes) c
GROUP BY c.class_name
ORDER BY c.class_name;
Class Teacher 1 Teacher2 Teacher 3
A Eric Billy Faruq Omar Margaritta Gonson
B Antouan Schensez Michael Gabriella
C Haluk Gumuskaya
D Banu Diri Gokhan Yavuz
If I want this for N teachers, how do I do it?
That is, the number of teachers can increase and decrease. How do I handle that case once and for all?
I need dynamic code for this, without using PL/SQL code/functions.
Thanks for your help
July 02, 2007 - 10:29 am UTC
you need to know the number of columns a query will return - when you parse the query.
think about it....
if you can deal with varrays/collections - we can return an "array", but not an unknown number of columns.
index-by-table rows to columns
Madhuri, June 23, 2008 - 2:03 am UTC
Tom,
Your website is such a huge help to many of us out here.
Thanks for all the help we get here.
Coming to my question , similar to converting rows to columns but what I am trying to do is :
Convert the elements of an index by table to be displayed as follows :
The index should be displayed as the column title.
The value should be displayed as the value in the row for that column.
eg:
Index Value
14227 X
14229 B
14232 Y
14236 A
The output ( which will be used in a report ) should look like :
14227 14229 14232 14236 ...
X B Y A
The number of elements is not known ahead of time. The index by table is built in a function. This function could probably return a refcursor to my actual query which also selects 4 other columns from a join on a few tables.
I will actually be replacing these numbers 14227 etc., to a corresponding "Code/Name" which will be the actual column title.
Thank you so much!
June 23, 2008 - 8:06 am UTC
you would have to
a) run a query to find the number of columns you'll have
b) dynamically construct such a query
that is the only way - it is not worth it. I cannot imagine how this would be useful in real life - data that stretches for miles left and right is useless to humans - we just cannot look at data that way - it is not normal.
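Steps (a) and (b) above can be sketched generically -- here in Python with sqlite3, purely as an illustration (the table and the index/value sample come from the question, not from any real report):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (idx integer, val text)")
con.executemany("insert into t values (?,?)",
                [(14227, 'X'), (14229, 'B'), (14232, 'Y'), (14236, 'A')])

# step (a): run a query to find the columns you'll have
keys = [r[0] for r in con.execute("select distinct idx from t order by idx")]

# step (b): dynamically construct the max/decode-style pivot from that list
cols = ", ".join('max(case when idx = %d then val end) "%d"' % (k, k)
                 for k in keys)
row = con.execute("select " + cols + " from t").fetchone()
print(keys)   # [14227, 14229, 14232, 14236]
print(row)    # ('X', 'B', 'Y', 'A')
```

The client still has to discover what it just selected, which is exactly Tom's objection.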
It is useful
A reader, August 05, 2008 - 12:45 pm UTC
It's useful to be able to convert from rows to columns when you don't know the number of columns being created, or their exact names.
I suspected that the only way to do that, currently, is to write a query to determine the number of columns you would have, then dynamically construct the query.
That makes it not worth it, but it would be a great idea, for the simple fact that when you are told to pivot the data, you must do it. It doesn't really matter exactly what the pivot columns are named, nor exactly how many there are.
That doesn't make it infinite. Whether it's 8 or 12, I don't think that's important; it's just important to have clean code that does the job. This MAX and DECODE seems like a kind of hack in this modern world.
My two cents.
Great resource, very good to find the 'answer' even if I didn't like it, better than not knowing.
August 05, 2008 - 1:18 pm UTC
... I suspected that the only way to do that, currently, is to write a Query to
determined the number of columns you would have, then dynamically construct the
query.
....
correct - and then the client needs to know what they just selected anyway.... so, they sort of need to know this.
it absolutely matters - the number and datatypes of the pivoted data - the CLIENT (you are the client in this case, the thing executing the query) sort of needs to know this.
If you just want an "array of data", you can use the COLLECT aggregate - else you do care and need to know the number of columns and their types.
There is a PIVOT syntax nowadays, so yes, max/decode is "old", but you still (ANSI says so too) need to know their names and the number of them.
Need Your Help " TOM "
Keerthi C, September 03, 2009 - 5:03 am UTC
Hi Tom,
I have a table like this :
Project Date Hours
-------- ----- ------
01 - Holiday 08/29/2009 08
01 - Holiday 08/28/2009 08
02 - ABC 08/27/2009 08
02 - ABC 08/26/2009 08
02 - ABC 08/25/2009 08
I want output as Follows ..
01 - Holiday 08/29/2009 08/28/2009
------------- ---------- ---------
08 08
02 - ABC 08/27/2009 08/26/2009 08/25/2009
------------- ---------- --------- ----------
08 08 08
Hope you respond to this ASAP :)
Thanx,
Keerthi C
September 03, 2009 - 7:54 am UTC
search this site for pivot
and unless you know how many columns a query will have at parse time, it won't work in sql.
meaning, I don't think SQL is going to do anything for you here, you don't know what column names you will have, you don't know how many you will have - for each ROW even.
You'll be doing this in code in whatever formats the output.
Need your help @ this also :)
Keerthi C, September 03, 2009 - 5:13 am UTC
Hi Tom,
I have a table like this :
Project Date Hours
-------- ----- ------
01 - Holiday 08/29/2009 08
01 - Holiday 08/28/2009 08
02 - ABC 08/27/2009 08
02 - ABC 08/26/2009 08
02 - ABC 08/25/2009 08
I want output as Follows ..
08/29/2009 08/28/2009 08/27/2009 08/26/2009 08/25/2009
---------- --------- ---------
01 - Holiday 08 08
02 - ABC 08 08 08
Keerthi C, September 03, 2009 - 8:28 am UTC
Hi Tom,
I have a table like this ..
Project Date Hours
----- ---- -----
A 1 8
A 2 8
B 3 8
C 4 8
D 5 8
If the table is queried, I need it like:
Date 1 2 3 4 5
Project A B C D E
Hours 8 8 8 8 8
Can anyone help me on this issue ?
Thanx,
Abaya
Monika, September 27, 2009 - 8:22 am UTC
Hi Tom,
I have a similar issue: I have to use row values as column names. I am working with HRMS tables; from the per_elements_type_f table I have to select element names (e.g. Cost To Company, HRA, LTA etc.), and for each employee I have to find the values for those elements, so my output has to look as shown below:
Employee Number Emp Name CTC HRA
1 ABC 1000000 5000
So CTC, HRA etc. are column values from one table, and their amounts are column values from another table. Can anybody suggest how to write a select query for this case?
Best Regards,
Monika
September 30, 2009 - 7:05 am UTC
no create table
no inserts
no real example
no look
you seem to assume I work with the same applications you do :)
but if all you are asking to do is pivot some data, please re-read this page - it shows the technique.
if you have a table (assumption on my part) that looks like:
empno attribute value
------- ---------- ---------
1 name abc
1 ctc 100000
1 hra 5000
2 name def
......
In 10g and before you would
select empno,
max( decode( attribute, 'name', value ) ),
max( decode( attribute, 'ctc', value ) ),
max( decode( attribute, 'hra', value ) )
from t
group by empno
In 11g, you can use pivot instead (new sql construct)
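Here is a runnable version of the 10g technique just shown, using Python's sqlite3 purely as a stand-in for Oracle (CASE replaces DECODE; the sample data mirrors Tom's assumed table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (empno integer, attribute text, value text)")
con.executemany("insert into t values (?,?,?)", [
    (1, 'name', 'abc'), (1, 'ctc', '100000'), (1, 'hra', '5000'),
    (2, 'name', 'def'),
])

# max(case ...) is the portable spelling of Oracle's max(decode(...))
sql = """
select empno,
       max(case attribute when 'name' then value end) name,
       max(case attribute when 'ctc'  then value end) ctc,
       max(case attribute when 'hra'  then value end) hra
from t
group by empno
order by empno
"""
rows = con.execute(sql).fetchall()
print(rows)   # [(1, 'abc', '100000', '5000'), (2, 'def', None, None)]
```

Attributes that are missing for an employee simply come back NULL, as for empno 2 here.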
pivot
A reader, October 02, 2009 - 9:21 am UTC
please tune this query with similar problem
A reader, December 21, 2009 - 3:39 pm UTC
Select /*+ FIRST_ROWS */
Rcta.Org_id Id_org, Rcus.Customer_id Customer_id, Rcus.Customer_name Nombre_cliente, Rcus.Customer_number Nro_cliente,
Decode(Rcus.Customer_type, 'I', 'Nacional', 'Extranjero') Tipo_cliente, Rcta.Customer_trx_id Id_documento, Rcta.Trx_number Nro_transaccion,
Rcta.Type Categoria, Rbso.Name Serie, Rcta.Name Tipo_transaccion, Rcta.Invoice_currency_code Moneda, Rcta.Exchange_rate Rate_cambio,
Rcta.Trx_date Fecha_transac, To_date(Decode(Rcta.Type, 'CM', Null, Psch.Due_date)) Fecha_vencimiento, Rcta.Ct_reference Referencia,
Cta_rec Cta_x_cob, Cta_rev Cta_ingresos, Decode(Rcta.Type, 'BR', Null, Cta_tax) Cta_iva, Decode(
Rcta.Type,
'BR', Null,
Cta_ret) Cta_retencion,
Xxcn_contratos_util_pkg.Get_cps(Rcta.Customer_trx_id) No_contrato, Xxcn_contratos_util_pkg.Get_cps_type(Rcta.Customer_trx_id) Tipo_contrato,
(Select User_name
From Apps.Fnd_user
Where User_id = Rcta.Last_updated_by) Usuario
From (Select Max( Decode(
Lgd.Account_class,
'REC', Gcco.Segment1
|| '-'
|| Gcco.Segment2
|| '-'
|| Gcco.Segment3
|| '-'
|| Gcco.Segment4
|| '-'
|| Gcco.Segment5
|| '-'
|| Gcco.Segment6
|| '-'
|| Gcco.Segment7,
Null)) Cta_rec,
Max( Decode(
Lgd.Account_class,
'REV', Gcco.Segment1
|| '-'
|| Gcco.Segment2
|| '-'
|| Gcco.Segment3
|| '-'
|| Gcco.Segment4
|| '-'
|| Gcco.Segment5
|| '-'
|| Gcco.Segment6
|| '-'
|| Gcco.Segment7,
Null)) Cta_rev,
Max( Case
When(Lgd.Account_class = 'TAX' And Gcco.Segment3 In('204', '015', '021', '036', '018')) Then Gcco.Segment1
|| '-'
|| Gcco.Segment2
|| '-'
|| Gcco.Segment3
|| '-'
|| Gcco.Segment4
|| '-'
|| Gcco.Segment5
|| '-'
|| Gcco.Segment6
|| '-'
|| Gcco.Segment7
End) Cta_tax,
Max( Case
When(Lgd.Account_class = 'TAX' And Gcco.Segment3 = '033') Then Gcco.Segment1
|| '-'
|| Gcco.Segment2
|| '-'
|| Gcco.Segment3
|| '-'
|| Gcco.Segment4
|| '-'
|| Gcco.Segment5
|| '-'
|| Gcco.Segment6
|| '-'
|| Gcco.Segment7
End) Cta_ret,
Lgd.Customer_trx_id
From Apps.Gl_code_combinations Gcco, Apps.Ra_cust_trx_line_gl_dist Lgd
Where Lgd.Code_combination_id = Gcco.Code_combination_id(+)
Group By Lgd.Customer_trx_id) Rglg,
Apps.Ra_batch_sources_all Rbso,
Apps.Ra_customers Rcus,
Apps.Ar_payment_schedules_all Psch,
(Select /*+ INDEX( Rctt,RA_CUST_TRX_TYPES_U1) */
Rct.Customer_trx_id, Rct.Trx_date, Rct.Trx_number, Rct.Ct_reference, Rct.Invoice_currency_code, Rct.Term_id,
Rct.Exchange_rate_type, Rct.Exchange_date, Rct.Exchange_rate, Rct.Cust_trx_type_id, Rct.Org_id, Rct.Batch_source_id, Rctt.Type,
Rctt.Name, Rct.Bill_to_customer_id, Rct.Last_updated_by,
Decode(
Rctt.Type,
'BR', (Select Trxf.Customer_trx_id
From Apps.Ra_customer_trx_all Trxf, Apps.Ra_customer_trx_all Rct_br, Apps.Ra_customer_trx_lines_all Rctl_br
Where Rctl_br.Customer_trx_id = Rct_br.Customer_trx_id
And Rctl_br.Br_ref_customer_trx_id = Trxf.Customer_trx_id
And Rct_br.Customer_trx_id = Rct.Customer_trx_id
And Rownum = 1),
Rct.Customer_trx_id) Customer_trx_id_cta,
Rct.Complete_flag
From Apps.Ra_customer_trx_all Rct, Apps.Ra_cust_trx_types_all Rctt
Where Rctt.Cust_trx_type_id = Rct.Cust_trx_type_id) Rcta
Where Rcta.Customer_trx_id = Psch.Customer_trx_id(+)
And Rcta.Batch_source_id = Rbso.Batch_source_id
And Rcta.Bill_to_customer_id = Rcus.Customer_id(+)
And Rcta.Customer_trx_id_cta = Rglg.Customer_trx_id(+)
And Rcta.Complete_flag = 'N'
The above query takes more than 5 hours, and the "Ra_cust_trx_line_gl_dist" table has more than 22 million records. Please advise me on some tuning.
December 21, 2009 - 4:16 pm UTC
hah, my favorite kind of question (not - no one can answer these).
remove the first rows hint; it would seem you are interested in throughput, not in using indexes.
If you have 10g or later and you have the tuning and diagnostic packs, let the SQL Tuning Advisor have at it and see what it says.
A reader, December 22, 2009 - 8:51 am UTC
Respected Sir,
In our DWH environment we are bringing in a new feed that has 400 columns; our data block is 8K. Our team is thinking of splitting the table horizontally into 4 tables and joining them for reporting purposes.
My question is
(1) if a row is too big to fit in a single block, will row chaining affect performance?
(2) what is your opinion: split the table into 4 tables, or place all columns in a single table?
Please kindly advise.
Many thanks
December 31, 2009 - 7:49 am UTC
Why do you think that? Please give us your "reasoning" behind it. I'm not saying "you are wrong"; rather, I'd like to hear your scientific, fact-based, measured reasoning for doing something
... that is probably wrong.
That's all.
(1) we'll chain every row with more than 254 columns anyway - these rows will be stored in two row pieces regardless (you want 4??? really???). The row pieces WE store will be more efficient to put together than the 4 you store as we'll have a direct pointer from row piece to row piece - whereas you would have to use an index (3 to 5 extra IO's for each row) OR you'll have to hash one of the tables into memory and hash join to it. I'd rather have a chained row myself.
(2) probably NOT, but I'd have to hear YOUR reasoning for doing so first to evaluate - the answer, as always, is "IT DEPENDS". So, give me something to consider.
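Before deciding anything, it is worth measuring how many rows actually chain. A minimal sketch, assuming a hypothetical already-loaded 400-column table named T400: the CHAIN_CNT column of USER_TABLES is populated by ANALYZE (not by DBMS_STATS), so a quick ANALYZE gives you the fact-based number Tom is asking for.

```sql
-- T400 is a placeholder name; ANALYZE (not DBMS_STATS) fills in CHAIN_CNT
analyze table t400 compute statistics;

select num_rows, chain_cnt,
       round(100 * chain_cnt / num_rows, 1) pct_chained
  from user_tables
 where table_name = 'T400';
```

If pct_chained turns out to be high but the "tail" columns are rarely queried, that measurement - not a guess - is what should drive any redesign.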
How 'bout
Stephan Uzzell, December 28, 2009 - 3:40 pm UTC
Is there some variant pivot technique that could take a column and split it into columns based on the contents of a different column?
Maybe a test bed will be easier than the description:
SQL> create table testdata ( code varchar2(30), id number );
Table created.
SQL> insert into testdata (select object_name, mod(rownum,2) from user_objects where rownum<30);
29 rows created.
SQL> commit;
Commit complete.
SQL> select * from testdata;
CODE ID
------------------------------ ----------
APS_VALIDATE 1
XYZABC 0
J 1
SYS_C00228674 0
PLSQL_PROFILER_RUNS 1
PLSQL_PROFILER_UNITS 0
Now, the desired output would be something like:
ID_0 ID_1
---------------------------- ------------------------------
XYZABC APS_VALIDATE
SYS_C00228674 J
PLSQL_PROFILER_UNITS PLSQL_PROFILER_RUNS
I've tried getting this with a max( decode( )) trick, but that seems to get me the (single) max CODE for each ID...
Thanks, Tom!
January 04, 2010 - 8:15 am UTC
since rows in a table have NO ORDER whatsoever, how did you know - what "logic" did you use - to put XYZABC and APS_VALIDATE together?
There is no reason the result set output would not be:
SQL> select * from testdata;
CODE ID
------------------------------ ----------
APS_VALIDATE 1
J 1
SYS_C00228674 0
PLSQL_PROFILER_RUNS 1
PLSQL_PROFILER_UNITS 0
XYZABC 0
what would your answer then be - you are missing a lot of stuff here.
Sunil, December 31, 2009 - 3:17 am UTC
Try following query:
SELECT *
FROM (SELECT LEAD (id0) OVER (ORDER BY rnum) id_0, id1 id_1
FROM ( SELECT DECODE (id, 0, code, NULL) id0,
id,
DECODE (id, 1, code, NULL) id1,
ROWNUM rnum
FROM testdata
ORDER BY id))
WHERE id_1 IS NOT NULL
January 04, 2010 - 10:11 am UTC
it presumes rows have some order.
It fails - the original poster is missing crucial information - information as to how to control the order of the rows.
once we have that, using analytics will be trivial.
until then, it is NOT POSSIBLE in a predictable fashion, not at all possible.
Groups are always the same size?
Matt McPeak, December 31, 2009 - 9:14 am UTC
Neat! But it assumes that the groups always have the same number of rows. What do you think of this variation (below)?
SELECT id_0, id_1
FROM (SELECT code id_0,
ID,
LEAD (code) OVER (PARTITION BY rn ORDER BY ID) id_1
FROM (SELECT code,
ID,
ROW_NUMBER () OVER (PARTITION BY ID ORDER BY code)
rn
FROM TESTDATA)
ORDER BY rn, ID)
WHERE ID = 0
January 04, 2010 - 10:12 am UTC
see above, they all fail - until there is a manner in which to ORDER the data.
or until the specification is changed.
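To show what "fixing the specification" could look like: if the poster added some ordering column - say a hypothetical SEQ column populated by a sequence at insert time (an assumption; TESTDATA as posted has no such column) - the pairing becomes deterministic and the pivot is indeed trivial with analytics:

```sql
-- SEQ is a hypothetical column giving each row a stable position within its ID
select max(decode(id, 0, code)) id_0,
       max(decode(id, 1, code)) id_1
  from (select code, id,
               row_number() over (partition by id order by seq) rn
          from testdata)
 group by rn
 order by rn;
```

The nth row with id=0 is then always paired with the nth row with id=1, by SEQ order rather than by accident of storage.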
Thank you Sunil and Matt!
Stephan Uzzell, December 31, 2009 - 2:52 pm UTC
Very cool, and I thank you both! Just to prove to myself that I understood what you were doing, I worked it up to do three columns:
SELECT id_0
, id_1
, id_2
FROM (
SELECT code id_1
, ID
, LEAD( code ) OVER ( PARTITION BY rn ORDER BY ID ) id_2
, LAG( code ) OVER ( PARTITION BY rn ORDER BY ID ) id_0
FROM (
SELECT code
, ID
, ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY code ) rn
FROM TESTDATA
)
ORDER BY rn
, ID
)
WHERE ID = 1
Thanks to both of you, and to Tom for hosting this great site!
Need help to convert rows of 1 table into columns
dev, March 04, 2010 - 6:12 am UTC
Hi Tom,
I have a question to ask you.
I have two tables t1 and t2, the structures of both are as follows:
t1(emp_id number, emp_name varchar2)
t2(emp_id references emp_id(t1), emp_location_access)
sample data both the tables is as follows:
select * from t1;
emp_id emp_name
123 john
456 michael
select * from t2;
emp_id emp_location_access
123 Australia
123 Indonesia
123 Malaysia
456 USA
456 UK
I want the output of the sql statement to be as follows:
emp_id emp_name emp_location_access1 emp_location_access2 emp_location_access3
123 john Australia Malaysia Indonesia
456 michael USA UK
The number of records in the table t2 for any emp_id is not fixed.
Is this possible to do using SQL only? If not then could you please tell us of any other way to achieve the above goal.
Thanks a lot in advance for your help.
Rgds,
dev
March 04, 2010 - 9:58 am UTC
no create table, no inserts, no look
you will have to tell us if there is a maximum number of rows in t2 for each row in t1 (if there isn't, then the answer is "this cannot be done using columns").
and if there is, I suggest you search this site for the word "pivot" and see if you cannot get it going yourself first.
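Assuming the maximum really is three locations per employee (an assumption read from the sample output; the column names are dev's), the row_number() + max(decode()) technique from the original answer would sketch out like this:

```sql
-- rn gives each employee's locations a stable 1..3 position (alphabetical,
-- since no other ordering was specified)
select t1.emp_id, t1.emp_name,
       max(decode(rn, 1, loc)) emp_location_access1,
       max(decode(rn, 2, loc)) emp_location_access2,
       max(decode(rn, 3, loc)) emp_location_access3
  from t1,
       (select emp_id, emp_location_access loc,
               row_number() over (partition by emp_id
                                  order by emp_location_access) rn
          from t2) x
 where t1.emp_id = x.emp_id
 group by t1.emp_id, t1.emp_name;
```

If the maximum is not fixed, the decode list has to be generated with dynamic SQL, as demonstrated in the original answer above.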
missing row data
prithviraj, April 01, 2010 - 5:42 am UTC
Hi Tom,
I have a question similar to this topic.
I have a table like
id name
1 aaa
2 bbb
3 ccc
6 ddd
7 fff
9 hhh
Id column is the primary key. Now how do I get the data from 1 to 9 (id values)?
Will I get all the records, or will the query get stuck in the middle after 3, because after 3 there is no 4 (id)?
What will be the query to get all the data in situations like this, where the PK is intermittent, not continuous?
April 05, 2010 - 1:00 pm UTC
are you serious? I am seriously asking that.
if you query "select * from t where id between 1 and 9", do you really think that because the number "4" is missing - something bad would happen??
we are not only missing 4 in that set, we are missing 4.1, 4.01, 4.001, 4.0001, and so on - we are missing basically an infinite set of values in that range from 1..9.
But if you ask for everything between 1 and 9, you'll get everything that exists between 1 and 9.
Rows to Column
Prasanth, April 09, 2010 - 8:52 am UTC
Hi Tom,
For the given account no, I require the total amount for the account corresponding to month, product and rptlevel combination as a single column value.
CREATE TABLE product
(
ACCNO NUMBER(5),
MONTH NUMBER(1),
PRODUCT VARCHAR2(50),
RPTLEVEL VARCHAR2(50),
AMOUNT NUMBER(10,2)
);
INSERT INTO product ( ACCNO, MONTH, PRODUCT, RPTLEVEL, AMOUNT )
VALUES ( 100, 5, 'P1', 'R1', 10.1);
INSERT INTO product ( ACCNO, MONTH, PRODUCT, RPTLEVEL, AMOUNT )
VALUES ( 100, 5, 'P1', 'R1', 5.2);
INSERT INTO product ( ACCNO, MONTH, PRODUCT, RPTLEVEL, AMOUNT )
VALUES ( 100, 5, 'P1', 'R2', 40);
INSERT INTO product ( ACCNO, MONTH, PRODUCT, RPTLEVEL, AMOUNT )
VALUES ( 200, 5, 'P1', 'R2', 11.1);
COMMIT;
SELECT * FROM product;
ACCNO MONTH PRODUCT RPTLEVEL AMOUNT
100 5 P1 R1 10.10
100 5 P1 R1 5.20
100 5 P1 R2 40.00
200 5 P1 R2 11.10
The total rows (combinations of ACCNO, PRODUCT, RPTLEVEL) in an account to be pivoted will be known only at runtime. Hence I have created a procedure; in it I first take the max(count) of the rows to be pivoted.
Based on the count, I iterate and create the decode expressions for pivoting. For this example the count is 2.
SELECT accno,
max(decode(rn, 1, amount, null)) AS amount_1,
max(decode(rn, 2, amount, null)) AS amount_2
FROM (SELECT accno,product || rptlevel || month as grprec,sum(amount) amount,
row_number() over(partition by accno order by product, rptlevel,month) rn
FROM product
GROUP BY accno, month, product, rptlevel) GROUP BY accno;
ACCNO AMOUNT_1 AMOUNT_2
100 15.3 40
200 11.1
But I want the output as
ACCNO AMOUNT_1 AMOUNT_2
100 15.3 40
200 11.1
The amount for the combination 'P1R15' needs to appear under the column AMOUNT_1 and 'P1R25' under AMOUNT_2.
Thank you
April 13, 2010 - 9:00 am UTC
you'd need to case on the report level then, it looks like you want R1 in column 1, R2 in column 2.
It seems to me you are summing up over accno and pivoting rptlevel - REGARDLESS of product or month (you group by accno)
ops$tkyte%ORA10GR2> select accno,
2 sum( decode( rptlevel, 'R1', amount ) ) amt1,
3 sum( decode( rptlevel, 'R2', amount ) ) amt2
4 from product
5 group by accno
6 order by accno;
ACCNO AMT1 AMT2
---------- ---------- ----------
100 15.3 40
200 11.1
please use code button in the future to format.
But your example and your explanation of what you say you want are in 100% direct conflict with each other. If the query I provided "is not right", it is only because of that.
Rows to Column
Prasanth, April 16, 2010 - 9:24 am UTC
Hi Tom,
Thank you for the reply.
If another month is present for the account, the amount pivoted only based on rptlevel won’t be correct.
Sorry for providing incomplete data.
INSERT INTO product VALUES (200, 6, 'P1', 'R2', 10.1);
Commit;
I concatenated the month also with the rptlevel.
select accno,
sum( decode( rptlevel || month, 'R15', amount ) ) amt1,
sum( decode( rptlevel || month, 'R25', amount ) ) amt2,
sum( decode( rptlevel || month, 'R26', amount ) ) amt3
from product
group by accno
order by accno;
ACCNO AMT1 AMT2 AMT3
---------- ---------- ---------- ----------
100 15.3 40
200 11.1 35.1
Is this the correct approach?
Thank you
April 16, 2010 - 11:30 am UTC
beware implicit conversions, use a format on that data to make sure it always converts in the way you expect.
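One way to sidestep the implicit conversion entirely is to not concatenate at all and instead compare the typed columns directly; a sketch against the PRODUCT table above (same three buckets as Prasanth's query):

```sql
-- compare RPTLEVEL and MONTH as themselves; no number-to-string
-- concatenation, so no conversion surprises
select accno,
       sum( case when rptlevel = 'R1' and month = 5 then amount end ) amt1,
       sum( case when rptlevel = 'R2' and month = 5 then amount end ) amt2,
       sum( case when rptlevel = 'R2' and month = 6 then amount end ) amt3
  from product
 group by accno
 order by accno;
```

Concatenation like rptlevel || month also risks ambiguity once MONTH can exceed one digit ('R1' || 12 and 'R11' || 2 both give 'R112'); comparing the columns separately avoids that class of bug as well.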
rows into columns
rajanikumar, June 10, 2010 - 7:10 am UTC
hello this is rajanikumar,
please help me with how to "convert rows into columns" in an ASP page reading from an MS Access database
i.e
id name class section
1 a 10 x
2 b 10 y
3 c 10 z
i want to this way
id 1 2 3
name a b c
class 10 10 10
section x y z
please help me
June 10, 2010 - 2:31 pm UTC
do you always know there will be "three rows"? Are you able to do dynamic sql? If the answer to #1 is "no", can you deal with a 10,000 column result set (I couldn't personally, it would be "not smart")?
New Question
pavani, November 05, 2010 - 4:20 pm UTC
Tom,
I have started visiting your site. It is really excellent and I have learnt a lot. I am gaining a lot of confidence too.
I have a question.
I have 2 tables and both tables have a single common column. I have to pick only 1 Table A and most of the Columns from Table B, and by using the CASE or DECODE, the First table's single column data will become the Columns of the TABLE C (it is nothing but PIVOT). Right now I have only 180 CASE or DECODE Statements, and in Oracle 10g I don't have any PIVOT function. I need to add 300 more columns (CASE or DECODE). Can you please let me know how to make it Dynamic? Please provide me some sample code for this.
November 06, 2010 - 7:27 pm UTC
this makes no sense.
"i have to pick only 1 table a" - what does that mean?
"most of the columns from table b" - ok with that.
"and by using the CASE
or DECODE, the First table's single column data will become the Columns of the
TABLE C" - kaboom - totally lost, you started with "I have 2 tables (a and b)" and here we are with C - and furthermore how can a SINGLE column from what appears to be 1 row (that is the only way I can interpret "pick only 1 table a") become "the columnS of table C"
if you have three hundred rows in table A that join to a single row in table B and you need to make the rows of A - into columns - you will have to incorporate 300 decode statements - no other way really. To make it "dynamic" - rethink what you are thinking - dynamic should be used if and only if static cannot be used.
With that in mind, imagine how you might write a small program to read the data dictionary to print out a sql statement you could statically embed in your application - think along those lines.
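A sketch of that generator idea (the names PIVOT_SRC, PIVOT_KEY and PIVOT_VAL are placeholders for whatever the real tables and columns are): run it once in sqlplus and paste its output into the static statement you embed in the application.

```sql
-- emits one max(decode(...)) select-list line per distinct key value;
-- the trailing comma on each line is trimmed by hand on the last one
select 'max(decode(pivot_key, ''' || pivot_key ||
       ''', pivot_val)) as "' || pivot_key || '",' stmt_line
  from (select distinct pivot_key from pivot_src)
 order by stmt_line;
```

The point of Tom's suggestion is that the SQL stays static at runtime; the generation step is a one-time (or rare) development activity, re-run only when the set of key values actually changes.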
Good One!
Kamesh Akundi, November 11, 2010 - 6:06 am UTC
The answer works fine for more than one row and one column.
However, I have a doubt: will this solution work for a single column with more rows? I mean, is it possible to turn more than 1 row into a single column?
The Data is as follows:
Column A
=========
ROW1: 20
ROW2: 30
ROW3: 40
Expected Output should be like this:
Column A
=========
ROW1: 20 ROW2:30 ROW3:40
This can be achieved using cursor loops also, but I don't want that.
November 11, 2010 - 2:46 pm UTC
ops$tkyte%ORA10GR2> select substr( max( sys_connect_by_path( data, '-' ) ), 2 ) new_data
2 from (
3 select data, row_number() over (order by data) rn
4 from t
5 )
6 start with rn = 1
7 connect by prior rn = rn-1
8 /
NEW_DATA
-------------------------------------------------------------------------------
row1: 10-row2: 20-row3: 30-row4: 40
ops$tkyte%ORA11GR2> select listagg(data) within group (order by data)
2 from t
3 /
LISTAGG(DATA)WITHINGROUP(ORDERBYDATA)
-------------------------------------------------------------------------------
row1: 10row2: 20row3: 30row4: 40
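A small note on the 11g version: LISTAGG takes an optional separator as its second argument, so to match the delimited output of the connect-by version you could write (same table T as above):

```sql
-- second argument of LISTAGG is the separator between aggregated values
select listagg(data, '-') within group (order by data) new_data
  from t;
```

With no separator argument, as in the example above, the values are simply concatenated end to end.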
kumar, March 31, 2011 - 5:31 am UTC
Hi tom,
First of all many many thanks for your invaluable information.
Actually I am trying to extract data from a table into a text file using the utl_file package, but I am getting the error "invalid directory path". I don't know how to add the directory in the init.ora file. Can you please tell me the actual path where the file goes in the Oracle directory?
thanks
April 12, 2011 - 10:04 am UTC
create or replace directory MY_DIR as 'some directory on the server';
and then use 'MY_DIR' in the utl_file.fopen call.
regarding "Rows into columns", version 10.2/10.1
Vinuth L, July 01, 2011 - 6:34 am UTC
This is regarding rows-to-columns conversion (similar to PIVOT); I am new to Oracle. Could you guide me?
create table bookings
(site varchar2(50),req_date varchar2(50),
total_amount decimal(18,2));
insert into bookings(site,req_date, total_amount) values ('abc','2011-01', '20.22');
insert into bookings(site,req_date, total_amount) values ('abc','2011-01', '21.22');
insert into bookings(site,req_date, total_amount) values ('abc','2011-02', '22.22');
insert into bookings(site,req_date, total_amount) values ('sas','2011-02', '23.22');
insert into bookings(site,req_date, total_amount) values ('sas','2011-02', '24.22');
Note: the required result is shown below, and req_date is a dynamic input (req_date is based on the user's selection of From and To dates)
Result:
site 2011-01 2011-02
Abc 41.44 22.22
sas null 47.44
July 01, 2011 - 9:31 am UTC
see above - max(decode()) is how to pivot in 10g and before.
I don't really know what the user is inputting here - are they inputting the date 2011-1 and you just know to do that month and the next month? If so, you can use a static sql statement.
If not, if they input a range of dates and the number of columns changes from execution to execution - you need to use dynamic sql (which is totally demonstrated in the original answer above)...
@Kumar: Dumping data into file
A reader, July 04, 2011 - 7:26 am UTC
Hi Kumar,
In case you are on windows:
1. Connect as Sysdba e.g.
connect / as sysdba
2. Grant execute on UTL_FILE to the user in case it does not have the privilege:
grant execute on UTL_FILE to public;
3. Make sure you manually create a folder in Windows (right click and new folder) and then, with your SYS account, logically map it in Oracle with the following commands, to create the directory and grant permissions on it:
create directory filesdir as 'c:\tempora';
grant read on directory filesdir to public;
grant write on directory filesdir to public;
4. Write into file:
DECLARE
EMPLOYEESFILE UTL_FILE.FILE_TYPE;
CURSOR EMPLOYEESCUR IS
SELECT ENAME FROM EMPLOYEES ORDER BY ENAME;
BEGIN
-- OPEN FILE FOR WRITING
EMPLOYEESFILE := UTL_FILE.FOPEN('FILESDIR','EMPLOYEES.TXT','W');
FOR REC IN EMPLOYEESCUR
LOOP
UTL_FILE.PUT_LINE(EMPLOYEESFILE, REC.ENAME);
END LOOP;
UTL_FILE.FCLOSE(EMPLOYEESFILE);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE( 'ERROR:' || SQLERRM);
-- close the file on error so the handle is not leaked
IF UTL_FILE.IS_OPEN(EMPLOYEESFILE) THEN
UTL_FILE.FCLOSE(EMPLOYEESFILE);
END IF;
RAISE;
END;
Once you run this code, you can look into c:\tempora for text file, 'EMPLOYEES.TXT'
You will find many examples/ways on this site. Maybe you can look into this post:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:88212348059 Hope it helps.
Rows to columns
devang, July 08, 2011 - 6:29 am UTC
Hi Tom,
I have a query to build in relation to the rows-to-columns mechanism as follows:
Suppose I have a table as Cust_Cost with data as follows:
Cust_id, Cust_name,Cust_short_name Cust_Inv_amt
123, ABN Amro Bank, ABNAMRO, 100
345, Bank of America, BOA, 200
788, Microsoft Corp, MICS, 1000
999, Ford Motor Company, FMC, 900
and so on.....
Now, the requirement is that we have to prepare a wire transfer for the bank such that the customer short name & invoice amount are concatenated. The total number of wire transfers will vary depending on the month of billing.
The wire transfer has a limit of 40 characters
Sample wire transfer message:
'May-11 Export Billing-ABNAMRO$100;MICS$1000;BOA$200;FMC$900' <<1>>
This string can have maximum of 40 characters, once this limit is crossed, there should be a separate record for the new wire transfer, so above string will fail, query should give me o/p as follows:
May-11 Export Billing-ABNAMRO$100 # Length 33
May-11 Export Billing-MICS$1000;BOA$200 # Length 39
May-11 Export Billing-FMC$900 # Length 29
Your help would really be appreciated...
July 08, 2011 - 3:11 pm UTC
no create
no inserts
no look
not sure this can be done simply. It might be easily "hacked together" using the fact that you really have only 18 characters - the other 22 are 'fixed'. Since you can assume at least 7 characters (3 for name, 3 for amount, one for $), there can be AT MOST two rows on a line.
Actually, after typing that, I would say that I'd want to analyze the data first to see how many transfer strings I would avoid making by doing this - it doesn't seem like it would even be worth it if many transfers exceed 10k and the names exceed 3 characters.
Another query
Hash, August 30, 2011 - 2:30 pm UTC
Hello Sir
My question is similar to the original post, but maybe mine is a little simpler. Could you please solve it?
create table t (
group_no number,
job_code number,
total number);
insert into t values (11,0,3);
insert into t values (11,1,8);
insert into t values (11,2,178);
insert into t values (11,3,1157);
insert into t values (11,4,2132);
insert into t values (11,5,31);
insert into t values (11,6,22);
insert into t values (12,1,239);
insert into t values (12,2,59);
insert into t values (12,3,1039);
insert into t values (12,4,207);
insert into t values (12,5,32);
group_no 0 1 2 3 4 5 6 7 8 9
11 3 8 178 1157 2132 31 22 0 0 0
12 0 239 59 1039 207 32 0 0 0 0
August 31, 2011 - 8:31 am UTC
seriously? You couldn't get from A to B here? That really tells me you don't understand at all how this works. Which means.... You probably shouldn't use it, how will you ever be able to maintain the code???
Your question is virtually IDENTICAL (only easier) than the original post. I had to basically remove a single line of code (they needed two columns, you need one) and change some column names.
Please do make sure you actually understand how and why this works - you really should have been able to do this yourself in a few minutes.
ops$tkyte%ORA11GR2> create or replace package body pivot
2 as
3
4
5 procedure data( p_cursor in out rc )
6
7
8 is
9 l_stmt long;
10 begin
11
12
13 l_stmt := 'select group_no';
14 for x in ( select distinct job_code from t order by 1 )
15 loop
16 l_stmt := l_stmt ||
17 ', nvl( max(decode(job_code,' || x.job_code ||
18 ', total )), 0) job_' || x.job_code;
19 end loop;
20 l_stmt := l_stmt || ' from t group by group_no order by group_no';
21
22
23 open p_cursor for l_stmt;
24 end;
25
26
27 end;
28 /
Package body created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> variable x refcursor
ops$tkyte%ORA11GR2> set autoprint on
ops$tkyte%ORA11GR2> exec pivot.data( :x );
PL/SQL procedure successfully completed.
GROUP_NO JOB_0 JOB_1 JOB_2 JOB_3 JOB_4 JOB_5 JOB_6
---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
11 3 8 178 1157 2132 31 22
12 0 239 59 1039 207 32 0
Now, this probably still isn't what you wanted - since your example output goes to 9 for some inexplicable reason - perhaps you have a table of possible job codes somewhere. If so, you can figure out what to fix to 'correct' that (hint: it is one line of code again, the fix is trivial if you have a table of job codes available)
Hash, August 30, 2011 - 2:34 pm UTC
Sorry, I forgot to mention my question
The output should look like
group_no 0 1 2 3 4 5 6 7 8 9
11 3 8 178 1157 2132 31 22 0 0 0
12 0 239 59 1039 207 32 0 0 0 0
job_code becomes the column and total becomes the intersection of group_no and job_code
thanks
Job Code Table
Hash, August 31, 2011 - 10:17 am UTC
Sir, thank you for a quick reply. Yes I do have a job code table like this
create table j (
job_code number,
job varchar2(30));
insert into j values(0, 'MANAGER');
insert into j values(1, 'ASST MANAGER');
insert into j values(2, 'JR MANAGER');
insert into j values(3, 'ACCOUNTANT');
insert into j values(4, 'CLERK');
The job codes are fixed from 0 to 9
Another question: can you do it in SQL only?
thanks again
August 31, 2011 - 2:05 pm UTC
if the job codes are fixed from 0 to 9 forever - and will not change - then *of course* you can do it in sql only.
Think about it. What does this procedure generate? A SQL statement. What drives the construction of this sql statement? The set of job codes. If you run this procedure 1,000,000 times and do not change the job codes - it would generate the same exact sql 1,000,000 times. If and only if you change the job codes would the generated sql change.
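To make that concrete, the statement the procedure generates - extended by hand to the fixed codes 0 through 9 (codes with no rows, like 7 to 9 here, simply produce all-zero columns, matching the desired output) - is just:

```sql
-- static version of the generated pivot: one nvl(max(decode(...)),0)
-- column per fixed job code
select group_no,
       nvl( max(decode(job_code, 0, total)), 0) job_0,
       nvl( max(decode(job_code, 1, total)), 0) job_1,
       nvl( max(decode(job_code, 2, total)), 0) job_2,
       nvl( max(decode(job_code, 3, total)), 0) job_3,
       nvl( max(decode(job_code, 4, total)), 0) job_4,
       nvl( max(decode(job_code, 5, total)), 0) job_5,
       nvl( max(decode(job_code, 6, total)), 0) job_6,
       nvl( max(decode(job_code, 7, total)), 0) job_7,
       nvl( max(decode(job_code, 8, total)), 0) job_8,
       nvl( max(decode(job_code, 9, total)), 0) job_9
  from t
 group by group_no
 order by group_no;
```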
Thanks
hash, August 31, 2011 - 3:56 pm UTC
Sir Thanks a lot. I extracted the generated sql from your code & it works like a charm. Thank you for your help
Converting rows into Columns
chandar, March 06, 2012 - 3:32 am UTC
Hi Tom,
I have two tables
1)PS_EMPLOYEES
Emplid Deptid
40001 67566
40002 67566
40003 78987
2)PS_HW_DEPT_ROLES
Setid Deptid Effdt Effseq HW_ROLE HW_ROLE_ID
CHSID 67566 10/1/2011 0 PRI 54768
CHSID 67566 10/1/2011 1 PRI 67676
CHSID 67566 10/1/2011 2 HRA 34544
CHSID 67566 10/1/2011 3 HRA 42344
CHSID 67566 10/1/2011 4 HRA 23423
CHSID 67566 10/1/2011 0 PRI 43345
CHSID 78987 10/1/2011 1 HRA 11232
CHSID 78987 10/1/2011 2 HRA 89789
CHSID 78987 10/1/2011 3 HRA 08978
CHSID 78987 10/1/2011 4 HRA 54456
CHSID 78987 10/1/2011 5 HRA 54545
and so on - the table contains more values like these
I want to list the values like
Emplid Deptid PRI PRI HRA HRA HRA HRA HRA
40001 67566 54768 67676 34544 42344 23423
40002 67566 54768 67676 34544 42344 23423
40003 78987 43345 11232 89789 08978 54456 54545
There can be more than 5 HRA and PRI values.
Please Help me
Regards,
Chandar
March 06, 2012 - 7:08 am UTC
no create
no inserts
no look
Converting rows into columns
chandar, March 09, 2012 - 4:48 am UTC
Hi Tom,
I want rows into columns
Create table PS_EMPLOYEES(
EMPLID VARCHAR2(20) primary key,
deptid VARCHAR2(20));
another table PS_HW_DEPT_ROLES
Create table PS_HW_DEPT_ROLES(
setid VARCHAR2(20),
deptid VARCHAR2(20),
effdt date,
effseq number,
HW_ROLE VARCHAR2(20),
HW_ROLE_ID VARCHAR2(20),
primary key (setid, deptid, effdt, effseq));
insert into PS_EMPLOYEES values ('40001','67566');
insert into PS_EMPLOYEES values ('40002','67566');
insert into PS_EMPLOYEES values ('40003','78987');
insert into PS_HW_DEPT_ROLES values ('CHSID','67566',to_date('10/1/2011','MM/DD/YYYY'),0,'PRI','54768');
insert into PS_HW_DEPT_ROLES values ('CHSID','67566',to_date('10/1/2011','MM/DD/YYYY'),1,'HRA','67676');
insert into PS_HW_DEPT_ROLES values ('CHSID','67566',to_date('10/1/2011','MM/DD/YYYY'),2,'HRA','34544');
insert into PS_HW_DEPT_ROLES values ('CHSID','67566',to_date('10/1/2011','MM/DD/YYYY'),3,'HRA','42344');
insert into PS_HW_DEPT_ROLES values ('CHSID','67566',to_date('10/1/2011','MM/DD/YYYY'),4,'HRA','23423');
insert into PS_HW_DEPT_ROLES values ('CHSID','78987',to_date('10/1/2011','MM/DD/YYYY'),0,'PRI','43345');
insert into PS_HW_DEPT_ROLES values ('CHSID','78987',to_date('10/1/2011','MM/DD/YYYY'),1,'HRA','11232');
insert into PS_HW_DEPT_ROLES values ('CHSID','78987',to_date('10/1/2011','MM/DD/YYYY'),2,'HRA','89789');
insert into PS_HW_DEPT_ROLES values ('CHSID','78987',to_date('10/1/2011','MM/DD/YYYY'),3,'HRA','08978');
insert into PS_HW_DEPT_ROLES values ('CHSID','78987',to_date('10/1/2011','MM/DD/YYYY'),4,'HRA','54456');
insert into PS_HW_DEPT_ROLES values ('CHSID','78987',to_date('10/1/2011','MM/DD/YYYY'),5,'HRA','54545');
I want to list the values like
Emplid Deptid PRI HRA HRA HRA HRA HRA
40001 67566 54768 67676 34544 42344 23423
40002 67566 54768 67676 34544 42344 23423
40003 78987 43345 11232 89789 08978 54456 54545
Sometimes there are more than five HRA values.
Please Help Me
Regards,
Chandar.
March 09, 2012 - 9:24 am UTC
then you found the right page? read away.
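For the archives, a sketch of how the techniques on this page apply to chandar's tables (the ordering of PRI before HRA and the cap of six columns are assumptions read from the sample output; extend the max(decode()) list, or generate it dynamically as in the original answer, if a department can have more roles):

```sql
-- rn numbers each department's roles: PRI first, then HRA, by effseq
select e.emplid, e.deptid,
       max(decode(r.rn, 1, r.hw_role_id)) role_1,
       max(decode(r.rn, 2, r.hw_role_id)) role_2,
       max(decode(r.rn, 3, r.hw_role_id)) role_3,
       max(decode(r.rn, 4, r.hw_role_id)) role_4,
       max(decode(r.rn, 5, r.hw_role_id)) role_5,
       max(decode(r.rn, 6, r.hw_role_id)) role_6
  from ps_employees e,
       (select deptid, hw_role_id,
               row_number() over (partition by deptid
                                  order by decode(hw_role, 'PRI', 1, 2),
                                           effseq) rn
          from ps_hw_dept_roles) r
 where e.deptid = r.deptid
 group by e.emplid, e.deptid;
```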
Rows to columns
aman, January 15, 2013 - 1:23 pm UTC
Below is my data in the table.
UDFID NAME UDFID VALUE FUNDKEY
1 LegalName 1 citi 6709
2 VintageYear 2 1999 6709
1 LegalName 1 a0000 6710
2 VintageYear 2 2500 6710
Now i want to create the below report in the info view
FundKey Legal Name Vintage Year
6709 citi 1999
6710 a0000 2500
can u please help
January 15, 2013 - 2:40 pm UTC
no create
no inserts
no look
changing rows to coloumns
kiran, August 14, 2013 - 10:10 am UTC
Hi,
I have a table Employee_details having columns
create table employee_details(
employee_no number,
info_type varchar2(25),
Passport_no number,
visa_no number,
working_time_directive number,
FA_Formation number)
/
having values like
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'passport_details',234567,null,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'visa_details',null,3456,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'passport_details',45678,null,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'worktime_details',null,null,7689,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'fa_formation_details',null,null,null,4567)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'visa_details',null,5678,null,null)
/
insert into employee_details(employee_no,info_type,passport_no,visa_no,working_time_directive,fa_formation)
values(345618,'passport_details',null,56745,null,null)
and my result is to display the data as follows. I want output:
employee_no passport_no visa_no working_time_no fa_formation
345618 234567 3456 7689 4567
345618 45678 5678
345618 5674
The output I require is, for example: for an employee I have 5 passport numbers, 2 visa numbers, and 1 fa_formation. Then I should get 5 rows in the output (i.e. the maximum number of values present - here passport has the maximum), so in the first column I should get all 5 passport numbers, in the second column the 2 visa numbers, and likewise 1 value in the fa_formation column. The remaining should be null.
August 14, 2013 - 3:22 pm UTC
so, why do you feel compelled to post the same exact question in multiple locations?
look at the place where you posted this first.