Question and Answer

Tom Kyte

Thanks for the question, Von.

Asked: March 26, 2005 - 8:36 pm UTC

Answered by: Tom Kyte - Last updated: December 06, 2017 - 1:03 pm UTC

Category: Database - Version: 9.2

Viewed 10K+ times

You Asked

Hi Tom,
I never really understood the usage of the GROUPING_ID function on OTN. I heard it avoids using multiple GROUPING functions. Can you please illustrate with a small example?
Thanks

and we said...

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:18735507471922#18844335353167

You use grouping_id with "grouping sets" or group by cube/rollup -- it tells you the level of aggregation.

Consider, we'll show grouping sets, group by rollup and group by cube to see what you get:

ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
Table created.

ops$tkyte@ORA9IR2> select deptno, job, sum(sal),
  2         grouping_id(deptno) deptno_g,
  3         grouping_id(job) job_g,
  4         case when grouping_id(deptno)||grouping_id(job) = '00'
  5              then 'Dtl both'
  6              when grouping_id(deptno)||grouping_id(job) = '10'
  7              then 'Agg over deptno'
  8              when grouping_id(deptno)||grouping_id(job) = '01'
  9              then 'Agg over job'
 10              when grouping_id(deptno)||grouping_id(job) = '11'
 11              then 'Agg over both'
 12         end what
 13    from emp
 14   group by grouping sets( (deptno), (job) )
 15  /

    DEPTNO JOB         SUM(SAL)   DEPTNO_G      JOB_G WHAT
---------- --------- ---------- ---------- ---------- ---------------
        10                 8750          0          1 Agg over job
        20                10875          0          1 Agg over job
        30                 9400          0          1 Agg over job
           ANALYST         6000          1          0 Agg over deptno
           CLERK           4150          1          0 Agg over deptno
           MANAGER         8275          1          0 Agg over deptno
           PRESIDENT       5000          1          0 Agg over deptno
           SALESMAN        5600          1          0 Agg over deptno

8 rows selected.

We asked for group bys only on DEPTNO and then only on JOB. So, that was like running the query:

select deptno, null, sum(sal) from emp group by deptno
union all
select null, job, sum(sal) from emp group by job;

in one query -- grouping_id tells us what level of detail we have on each row.


ops$tkyte@ORA9IR2> select deptno, job, sum(sal),
  2         grouping_id(deptno) deptno_g,
  3         grouping_id(job) job_g,
  4         case when grouping_id(deptno)||grouping_id(job) = '00'
  5              then 'Dtl both'
  6              when grouping_id(deptno)||grouping_id(job) = '10'
  7              then 'Agg over deptno'
  8              when grouping_id(deptno)||grouping_id(job) = '01'
  9              then 'Agg over job'
 10              when grouping_id(deptno)||grouping_id(job) = '11'
 11              then 'Agg over both'
 12         end what
 13    from emp
 14   group by rollup( deptno, job )
 15  /

    DEPTNO JOB         SUM(SAL)   DEPTNO_G      JOB_G WHAT
---------- --------- ---------- ---------- ---------- ---------------
        10 CLERK           1300          0          0 Dtl both
        10 MANAGER         2450          0          0 Dtl both
        10 PRESIDENT       5000          0          0 Dtl both
        10                 8750          0          1 Agg over job
        20 CLERK           1900          0          0 Dtl both
        20 ANALYST         6000          0          0 Dtl both
        20 MANAGER         2975          0          0 Dtl both
        20                10875          0          1 Agg over job
        30 CLERK            950          0          0 Dtl both
        30 MANAGER         2850          0          0 Dtl both
        30 SALESMAN        5600          0          0 Dtl both
        30                 9400          0          1 Agg over job
                          29025          1          1 Agg over both

13 rows selected.

Rollup is sort of like a running total report -- and grouping_id tells us where the rollups happened. The data is sorted by deptno, job, and we have subtotals aggregated over job (one per deptno) and over both deptno and job (the grand total), along with the details by deptno/job.
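The rollup above can also be written out longhand with grouping sets, in the same spirit as the union-all equivalence shown earlier. A sketch, against the same EMP table (not from the original answer, just an illustration of the equivalence):

```sql
-- group by rollup( deptno, job ) produces the same rows as
-- explicitly listing the three aggregation levels:
select deptno, job, sum(sal)
  from emp
 group by grouping sets( (deptno, job),  -- detail rows
                         (deptno),       -- subtotal per deptno (job rolled up)
                         () )            -- grand total
/
```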

cube is similar but you get all possible aggregations:



ops$tkyte@ORA9IR2> select deptno, job, sum(sal),
  2         grouping_id(deptno) deptno_g,
  3         grouping_id(job) job_g,
  4         case when grouping_id(deptno)||grouping_id(job) = '00'
  5              then 'Dtl both'
  6              when grouping_id(deptno)||grouping_id(job) = '10'
  7              then 'Agg over deptno'
  8              when grouping_id(deptno)||grouping_id(job) = '01'
  9              then 'Agg over job'
 10              when grouping_id(deptno)||grouping_id(job) = '11'
 11              then 'Agg over both'
 12         end what
 13    from emp
 14   group by cube( deptno, job )
 15  /

    DEPTNO JOB         SUM(SAL)   DEPTNO_G      JOB_G WHAT
---------- --------- ---------- ---------- ---------- ---------------
                          29025          1          1 Agg over both
           CLERK           4150          1          0 Agg over deptno
           ANALYST         6000          1          0 Agg over deptno
           MANAGER         8275          1          0 Agg over deptno
           SALESMAN        5600          1          0 Agg over deptno
           PRESIDENT       5000          1          0 Agg over deptno
        10                 8750          0          1 Agg over job
        10 CLERK           1300          0          0 Dtl both
        10 MANAGER         2450          0          0 Dtl both
        10 PRESIDENT       5000          0          0 Dtl both
        20                10875          0          1 Agg over job
        20 CLERK           1900          0          0 Dtl both
        20 ANALYST         6000          0          0 Dtl both
        20 MANAGER         2975          0          0 Dtl both
        30                 9400          0          1 Agg over job
        30 CLERK            950          0          0 Dtl both
        30 MANAGER         2850          0          0 Dtl both
        30 SALESMAN        5600          0          0 Dtl both

18 rows selected.


so, grouping_id doesn't avoid multiple grouping functions (grouping SETS does), but grouping_id plays an important role in seeing which data is "what"


and you rated our response

  (16 ratings)


Reviews

GROUPING or GROUPING_ID?

March 29, 2005 - 8:35 am UTC

Reviewer: Philip from Cincinnati, Ohio

Hi Tom,

I believe that while your use of the "GROUPING_ID" function is valid - it doesn't highlight the true power of the function. The same result could be gleaned with the "GROUPING" function (to return a "1" or "0" based upon whether or not that particular column is summarized via ROLLUP or CUBE).

GROUPING_ID becomes EXTREMELY useful for determining the summarization of multiple columns. It returns a decimal number which, when converted to binary, is a concatenation of the 1's and 0's that represent the summarization flag of each column passed into the GROUPING_ID function.

Here is an example:

SELECT lvl1
, lvl2
, lvl3
, TO_NUMBER(GROUPING(lvl1) || GROUPING(lvl2) || GROUPING(lvl3)) AS grouping_function
, GROUPING_ID( lvl1
, lvl2
, lvl3
) AS grouping_id_function_binary
, SUM(fact_amount) AS fact_amount_sum
FROM
(
SELECT 'World' AS lvl1
, 'North America' AS lvl2
, 'United States' AS lvl3
, 25 AS fact_amount
FROM dual
UNION ALL
SELECT 'World' AS lvl1
, 'Latin America' AS lvl2
, 'Mexico' AS lvl3
, 100 AS fact_amount
FROM dual
UNION ALL
SELECT 'World' AS lvl1
, 'North America' AS lvl2
, 'Canada' AS lvl3
, 50 AS fact_amount
FROM dual
) fact
GROUP BY ROLLUP ( lvl1
, lvl2
, lvl3
)

LVL1  LVL2          LVL3          GROUPING_FUNCTION GROUPING_ID_FUNCTION_BINARY
----- ------------- ------------- ----------------- ---------------------------
World Latin America Mexico                        0                           0
World Latin America                               1                           1
World North America Canada                        0                           0
World North America United States                 0                           0
World North America                               1                           1
World                                            11                           3
                                                111                           7

7 rows selected

As you can see - the GROUPING_ID function returns the decimal representation of the binary "bitmap" returned by the concatenation of the GROUPING functions.

The true power of GROUPING_ID comes into play when using Materialized Views which use ROLLUP or CUBE for use in Query Rewrite. If you include a GROUPING_ID column in the materialized view with the GROUP BY ROLLUP extension - Oracle is "smart" enough to rewrite a user query asking for a particular level by essentially appending a WHERE clause to the materialized view to find the exact ROLLUP or CUBE that would satisfy the user's query.

Example:
SQL:dtmgt1 -> DROP TABLE geo_atk CASCADE CONSTRAINTS;

Table dropped.

SQL:dtmgt1 -> DROP TABLE facts_atk CASCADE CONSTRAINTS;

Table dropped.

SQL:dtmgt1 -> DROP MATERIALIZED VIEW facts_sum;

Materialized view dropped.

SQL:dtmgt1 -> DROP DIMENSION geo_dim_atk;

Dimension dropped.

SQL:dtmgt1 ->
SQL:dtmgt1 -> CREATE TABLE geo_atk
2 ( lvl1 VARCHAR2(30)
3 , lvl2 VARCHAR2(30)
4 , lvl3 VARCHAR2(30)
5 , CONSTRAINT geo_atk_pk PRIMARY KEY(lvl3)
6 );

Table created.

SQL:dtmgt1 ->
SQL:dtmgt1 -> INSERT INTO geo_atk
2 VALUES ('World'
3 , 'North America'
4 , 'United States'
5 );

1 row created.

SQL:dtmgt1 -> INSERT INTO geo_atk
2 VALUES ('World'
3 , 'Latin America'
4 , 'Mexico'
5 );

1 row created.

SQL:dtmgt1 -> INSERT INTO geo_atk
2 VALUES ('World'
3 , 'North America'
4 , 'Canada'
5 );

1 row created.

SQL:dtmgt1 ->
SQL:dtmgt1 -> CREATE DIMENSION geo_dim_atk
2 LEVEL country IS (geo_atk.lvl3)
3 LEVEL continent IS (geo_atk.lvl2)
4 LEVEL world IS (geo_atk.lvl1)
5 HIERARCHY geo_rollup (
6 country CHILD OF
7 continent CHILD OF
8 world
9 );

Dimension created.

SQL:dtmgt1 ->
SQL:dtmgt1 -> CREATE TABLE facts_atk
2 ( geo VARCHAR2(30)
3 , fact_amount NUMBER
4 , CONSTRAINT facts_pk PRIMARY KEY(geo)
5 , CONSTRAINT fact_geo_fk FOREIGN KEY (geo) REFERENCES geo_atk
6 );

Table created.

SQL:dtmgt1 ->
SQL:dtmgt1 -> INSERT INTO facts_atk
2 VALUES ('Mexico'
3 , 100
4 );

1 row created.

SQL:dtmgt1 -> INSERT INTO facts_atk
2 VALUES ('United States'
3 , 25
4 );

1 row created.

SQL:dtmgt1 -> INSERT INTO facts_atk
2 VALUES ('Canada'
3 , 50
4 );

1 row created.

SQL:dtmgt1 ->
SQL:dtmgt1 -> COMMIT ;

Commit complete.

SQL:dtmgt1 ->
SQL:dtmgt1 -> -- Create the mat view
SQL:dtmgt1 -> CREATE MATERIALIZED VIEW facts_sum
2 PARTITION BY HASH(gid)
3 PARTITIONS 2
4 ENABLE QUERY REWRITE
5 AS
6 SELECT g.lvl1
7 , g.lvl2
8 , g.lvl3
9 , SUM(f.fact_amount) AS fact_amount_sum
10 , COUNT(f.fact_amount) AS fact_amount_count
11 , COUNT(*) AS count_all
12 , GROUPING_ID( g.lvl1
13 , g.lvl2
14 , g.lvl3
15 ) AS gid
16 FROM geo_atk g
17 INNER JOIN
18 facts_atk f
19 ON (g.lvl3 = f.geo)
20 GROUP BY ROLLUP( g.lvl1
21 , g.lvl2
22 , g.lvl3
23 );

Materialized view created.

SQL:dtmgt1 ->
SQL:dtmgt1 -> ANALYZE TABLE geo_atk COMPUTE STATISTICS;

Table analyzed.

SQL:dtmgt1 -> ANALYZE TABLE facts_atk COMPUTE STATISTICS;

Table analyzed.

SQL:dtmgt1 -> ANALYZE TABLE facts_sum COMPUTE STATISTICS;

Table analyzed.

SQL:dtmgt1 ->
SQL:dtmgt1 -> -- Now for the rewrite...
SQL:dtmgt1 -> EXPLAIN PLAN
2 FOR
3 SELECT g.lvl2
4 , SUM (f.fact_amount) AS fact_amount_sum
5 FROM geo_atk g INNER JOIN facts_atk f ON (g.lvl3 = f.geo)
6 GROUP BY g.lvl2;

Explained.

SQL:dtmgt1 ->
SQL:dtmgt1 -> -- get the plan
SQL:dtmgt1 -> SELECT *
2 FROM TABLE(dbms_xplan.display(NULL, NULL, 'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 30 | 2 | | |
|* 1 | TABLE ACCESS FULL | FACTS_SUM | 2 | 30 | 2 | 2 | 2 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("FACTS_SUM"."GID"=1)

Note: cpu costing is off

14 rows selected.

SQL:dtmgt1 ->
SQL:dtmgt1 -> -- Now for the rewrite...
SQL:dtmgt1 -> EXPLAIN PLAN
2 FOR
3 SELECT g.lvl1
4 , SUM (f.fact_amount) AS fact_amount_sum
5 FROM geo_atk g INNER JOIN facts_atk f ON (g.lvl3 = f.geo)
6 GROUP BY g.lvl1;

Explained.

SQL:dtmgt1 ->
SQL:dtmgt1 -> -- get the plan
SQL:dtmgt1 -> SELECT *
2 FROM TABLE(dbms_xplan.display(NULL, NULL, 'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 20 | 2 | | |
|* 1 | TABLE ACCESS FULL | FACTS_SUM | 2 | 20 | 2 | 2 | 2 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("FACTS_SUM"."GID"=3)

Note: cpu costing is off

14 rows selected.


Tom Kyte

Followup  

March 31, 2005 - 12:12 pm UTC

excellent points, thanks much - I'm going to incorporate these thoughts into the print version of this.

Good Explanation

March 29, 2005 - 5:19 pm UTC

Reviewer: A reader


are 2 columns from the same dimension allowed?

April 01, 2005 - 9:58 pm UTC

Reviewer: mikito

All SQL examples of cube/rollup that I've seen use shallow one-level depth hierarchy at each dimension.

Does

select year, month, sum(sales)
from sales
group by cube(year, month)

make any sense?

Tom Kyte

Followup  

April 02, 2005 - 9:12 am UTC

But one of the examples was this:

ops$tkyte@ORA9IR2> select deptno, job, sum(sal),
  2         grouping_id(deptno) deptno_g,
  3         grouping_id(job) job_g,
  4             case when grouping_id(deptno)||grouping_id(job) = '00'
  5                  then 'Dtl both'
  6                  when grouping_id(deptno)||grouping_id(job) = '10'
  7                          then 'Agg over deptno'
  8                  when grouping_id(deptno)||grouping_id(job) = '01'
  9                          then 'Agg over job'
 10                  when grouping_id(deptno)||grouping_id(job) = '11'
 11                          then 'Agg over both'
 12                  end what
 13    from emp
 14   group by cube( deptno, job )
 15  /

c/deptno/year
c/job/month
c/sal/sales
c/emp/sales

(that is, substitute deptno with year, job with month, sal with sales, and emp with sales)

and you have the same example?

so yes, it makes sense 

grouping_id justification

April 01, 2005 - 10:16 pm UTC

Reviewer: mikito

<quote>To find the GROUP BY level of a particular row, a query must return GROUPING function information for each of the GROUP BY columns. If we do this using the GROUPING function, every GROUP BY column requires another column using the GROUPING function. For instance, a four-column GROUP BY clause needs to be analyzed with four GROUPING functions. This is inconvenient to write in SQL and increases the number of columns required in the query. When you want to store the query result sets in tables, as with materialized views, the extra columns waste storage space.</quote>

This justification is plain ridiculous. Use column compression if storage space is important. Adding features carelessly is a recipe for breeding a monster with 100,000-page-long documentation.

Tom Kyte

Followup  

April 02, 2005 - 9:19 am UTC

thanks for your feedback as always. The print version of this particular column has incorporated various bits of followup, and in there I write:

...
However, we have two columns in this set we are aggregating by, for a total of 4 possible 0/1 combinations (in this query, only two are possible). Using the GROUPING_ID function on this vector of columns, we can easily see what each row represents. I’ve included the alternative, more verbose way to accomplish this as well – the BIN_TO_NUM() function – to which we can send a list of 0’s and 1’s and get back a decimal number as well. I’m pretty sure you’ll agree that GROUPING_ID(c1,c2,c3) is easier than the corresponding BIN_TO_NUM call with three GROUPING calls.

.....

So I for one applaud the addition of functionality that makes my life easier.
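For reference, the more verbose BIN_TO_NUM alternative mentioned above would look something like this sketch (using the same EMP table and columns from the original answer; this exact query is not from the thread):

```sql
-- BIN_TO_NUM turns a list of 0/1 values into a decimal number,
-- so these two expressions should return the same value on every row:
select deptno, job, sum(sal),
       grouping_id(deptno, job)                    gid,
       bin_to_num(grouping(deptno), grouping(job)) gid_verbose
  from emp
 group by cube( deptno, job )
/
```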



are 2 columns from the same dimension allowed?

April 04, 2005 - 5:33 pm UTC

Reviewer: mikito

Actually, year/month was not quite a good example of a hierarchy, as the year is independent of the month

Y\Mo | Jan Feb | Yr total
---- | --- --- | ----
2004 | 200 300 | 500
2005 | 400 500 | 900
---- | --- --- | ----
Mo to| 600 800 | 1400

This is a normal cube, no questions about it. In the location hierarchy, however, Country is functionally dependent on the Region:

Reg/Ct | Can USA Fra | Reg total
------ | --- --- --- | ----
Europe | 000 000 300 | 300
N Amer | 400 500 000 | 900
------ | --- --- --- | ----
Ctry to| 400 500 300 | 1200

As you see totals grouped by countries aren't very interesting.


Tom Kyte

Followup  

April 04, 2005 - 9:02 pm UTC

anything is allowed.

and why doesn't

group by cube(year, month)

make sense if you want

a) details by year, month
b) aggregates by YEAR
c) aggregates by MONTH (does that make sense? sure, it could, why not, you cannot say "no")
d) aggregate (total)
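Mapping those four levels onto the year/month question, a sketch (assuming a SALES table with YEAR, MONTH and SALES columns, as in the review above):

```sql
-- the four levels a) through d), labeled via grouping_id:
select year, month, sum(sales),
       grouping_id(year, month) gid   -- 0 = detail by year, month
                                      -- 1 = aggregated over month (by year)
                                      -- 2 = aggregated over year (by month)
                                      -- 3 = grand total
  from sales
 group by cube( year, month )
/
```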



Does this make the query more efficient?

June 22, 2005 - 7:56 pm UTC

Reviewer: A reader

I saw this in the Oracle Magazine July/August issue - great information. I tested some of our queries that use multiple union alls on the same table versus a grouping sets/grouping_id query. I found the union all version ran 4-6 seconds for 4 scans of the table, while the grouping sets version varied from 8-25 seconds.
The explain plan for the union all, as expected, shows full table scans for each sub query and rejoined to output. The grouping set has a temp table created with 4 recursive executions.
Are these kind of performance results expected or unique to our environment?

Query/explain plan below


select
backlog_identity,max(backlog_color_ref) backlog_color_ref,sum(backlog_system_qty) backlog_system_qty,
ip_identity,max(ip_color_ref) ip_color_ref,sum(ip_system_qty) ip_system_qty,
aged_ship_identity,max(aged_ship_color_ref) aged_ship_color_ref,sum(aged_ship_system_qty) aged_ship_system_qty
from ce_health_dashboard_summary
group by
grouping sets((backlog_identity),(ip_identity),(aged_ship_identity))
------------------------------------------------------------

Statement Id=6 Type=VIEW
Cost=2 TimeStamp=22-06-05::18::51:47

(5) SELECT STATEMENT CHOOSE
Est. Rows: 144 Cost: 2
(1) RECURSIVE EXECUTION.SYS_LE_5_0
(2) RECURSIVE EXECUTION.SYS_LE_5_1
(3) RECURSIVE EXECUTION.SYS_LE_5_2
(4) RECURSIVE EXECUTION.SYS_LE_5_3
TEMP TABLE TRANSFORMATION
(7) VIEW (Embedded SQL)
Est. Rows: 1 Cost: 2
(6) TABLE ACCESS FULL SYS.SYS_TEMP_0FD9D6618_1E9BC5AA [Not Analyzed]
Est. Rows: 1 Cost: 2

I'd really like to find out this is something not normal and go to a more time efficient method.


Tom Kyte

Followup  

June 23, 2005 - 6:19 pm UTC

no clue, no test case. need more details

More information for grouping set performance test

June 24, 2005 - 11:55 am UTC

Reviewer: Jason from Austin

I'm hoping this helps clarify the question

Test table statistics
Rows - 106,943

Unique values for output columns:
backlog identity - 8
IP identity - 9
Aged ship Identity - 4
Combinations of the above 3 items - 43

Source table columns - 67, average row length 374 bytes

Oracle version is 9.2.0.6.0 with partitioning, spatial, OLAP, and data mining options

Query 1 - 3 queries against the same table with a union all

(select
backlog_identity,max(backlog_color_ref) backlog_color_ref,sum(backlog_system_qty) backlog_system_qty
from ce_health_dashboard_summary
group by
backlog_identity)
union all
(select
ip_identity,max(ip_color_ref) ip_color_ref,sum(ip_system_qty) ip_system_qty
from ce_health_dashboard_summary
group by ip_identity)
union all
(select
aged_ship_identity,max(aged_ship_color_ref) aged_ship_color_ref,sum(aged_ship_system_qty) aged_ship_system_qty
from ce_health_dashboard_summary
group by aged_ship_identity)

Query 2 is the above grouping set version under response "Does this make the query more efficient?"

I found each additional column set added to the grouping sets clause added seconds to the query, much as a union all would. However, the grouping sets statement had a large degree of randomness in its execution time that the union all query did not, and using grouping sets on that table was always slower.

more explanation

June 30, 2005 - 8:13 pm UTC

Reviewer: Steve from NYC, USA

your Followup:
"excellent points, thanks much - I'm going to incorporate these thoughts into the print version of this."

I did not get the excellent points - can you explain them briefly?

Thanks!




Tom Kyte

Followup  

June 30, 2005 - 9:12 pm UTC

about the "true power" of grouping id.

http://www.oracle.com/technology/oramag/oracle/05-jul/o45asktom.html
shows the final writeup in the print version.

I tried this but could not get it as always

December 06, 2006 - 5:55 am UTC

Reviewer: uday

Hi Tom,

I really tried to write this SQL myself but could not, so here is the question:

create table t2(id number, dtl_id number,e_ind varchar2(1) ) ;

insert into t2 values (1,1,'Y') ;
insert into t2 values (1,2,'Y') ;
insert into t2 values (1,3,'Y') ;
insert into t2 values (2,1,'Y') ;
insert into t2 values (2,2,'Y') ;
insert into t2 values (2,3,'N') ;

commit ;

Now I want to select only the rows for which the

count(*) by id matches the count(*) by id and e_ind.

so for the above data I should select just one id,

that is, id = 1.

rows with id = 2 should be filtered out, as for id = 2 the count of records is 3 but the count of records for id = 2 and e_ind = 'Y' is 2.

so I need to get the id - just the id's for which the count(*) over id matches the count(*) over id with e_ind = 'Y'.
so if all the records for a given id have e_ind = 'Y' then select, else do not select.

I tried this but could not get it the way I wanted -

select id,e_ind,count(*) ,
grouping_id (id) ,
grouping_id (id,e_ind)
from t2
group by grouping sets( (id), (id,e_ind) )
/

if you can please help.

Thanks

Tom Kyte

Followup  

December 07, 2006 - 8:26 am UTC

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select t2.*,
  4         count(*) over (partition by id) count_by_id,
  5         count(case when e_ind='Y' then 1 end)
  6          over (partition by id, e_ind) count_by_id_e_ind
  7    from t2
  8         )
  9   where count_by_id=count_by_id_e_ind
 10  /

        ID     DTL_ID E COUNT_BY_ID COUNT_BY_ID_E_IND
---------- ---------- - ----------- -----------------
         1          1 Y           3                 3
         1          2 Y           3                 3
         1          3 Y           3                 3
 

is this ok or can we do it better

December 07, 2006 - 12:31 am UTC

Reviewer: uday from Delhi , india

Tom,

I tried this and it works OK, but is there a better way to do it -

select distinct id from
(
select id,cnt,sum(L) over(partition by id) cnt1
from
(
select id,count(*) over (partition by id) cnt,case when e_ind='Y' then 1 else 0 end L
from t2 )
)
where cnt = cnt1
/

please take a look!
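Not from the thread, but for comparison: since the check is per id, a plain GROUP BY with a HAVING clause would seem to do the same job without analytics or an extra pass - a sketch against the T2 table above:

```sql
-- keep only the ids where every row has e_ind = 'Y',
-- i.e. the total row count equals the count of 'Y' rows
select id
  from t2
 group by id
having count(*) = count(case when e_ind = 'Y' then 1 end)
/
```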

RE: about the "true power" of grouping id

December 07, 2006 - 11:28 am UTC

Reviewer: A reader

Tom, I have a question (although not related to this thread at all) about the first answer you give on the link you posted to your Oracle Magazine column, in the follow-up that begins with 'about the "true power" of grouping id'.

You said there that Oracle builds mini-segments to build your index, and once it does, it updates the dictionary. OK, I understand this... but I thought those mini-segments were created in TEMP and then moved to the original tablespace once they are done (via the dictionary update). Or does Oracle use the index tablespace to build those mini-segments?

Thanks!

Tom Kyte

Followup  

December 07, 2006 - 1:20 pm UTC

where did I say what?

Link

December 07, 2006 - 1:27 pm UTC

Reviewer: A reader

Sorry about that, I meant to say extent, not mini-segment.

Here's the link

http://www.oracle.com/technology/oramag/oracle/05-jul/o45asktom.html

You said:
"When you create a new segment, Oracle uses temporary extents to initially build it and then, at the end of the process, converts (via a simple dictionary update) the temporary extents into permanent ones."

So, those temporary extents it uses - does it create them in the same tablespace as the INDEX? Why can't it use TEMP for this?

Tom Kyte

Followup  

December 07, 2006 - 1:47 pm UTC

no, it puts these temporary segments in the permanent place and just converts them upon completion

it would be a waste to put them in temp AND THEN copy them over - and then you'd have the big problem of what to do if something crashed in the middle.

the entire goal here is to create the segment using temporary segments - so if the system fails - SMON will clean up the temporary segments automagically when you restart.



grouping sets

April 19, 2008 - 8:40 am UTC

Reviewer: Duke Ganote from the rolling hills of Anderson Township, Hamilton County, Ohio USA

I finally got thrown into a situation where grouping sets were convenient: a new aggregation table used by our BI tool that had to include various summary aggregations. Rather than UNION ALLs, I just used grouping sets, something like below. Nice!

WITH
sample_data AS
(
SELECT 'a' A, 'b' B, 'c' C, 'd' D, 'e' E, 6 AMT
FROM DUAL
UNION ALL
SELECT 'a' A, 'b' B, 'c' C, 'd' D, 'e' E, 7 AMT
FROM DUAL
)
select A, B, C, D, E
, SUM(amt)
, grouping_id(A) A_grp
, grouping_id(B) B_grp
, grouping_id(C) C_grp
, grouping_id(D) D_grp
, grouping_id(E) E_grp
from sample_data
GROUP BY GROUPING SETS
(
(A),(B),(C),(D),(E)
,(A,B)
,(A,B,C)
,(A,B,C,D)
,(A,B,C,D,E)
)
ORDER BY
( A_grp + B_grp + C_grp + D_grp + E_grp )
, A_grp , B_grp , C_grp , D_grp , E_grp
/
A B C D E   SUM(AMT)      A_GRP      B_GRP      C_GRP      D_GRP      E_GRP
- - - - - ---------- ---------- ---------- ---------- ---------- ----------
a b c d e         13          0          0          0          0          0
a b c d           13          0          0          0          0          1
a b c             13          0          0          0          1          1
a b               13          0          0          1          1          1
a                 13          0          1          1          1          1
  b               13          1          0          1          1          1
    c             13          1          1          0          1          1
      d           13          1          1          1          0          1
        e         13          1          1          1          1          0

Updated Link

January 30, 2012 - 10:26 am UTC

Reviewer: Shimmy from NJ USA

Hi Tom,

The link that's referred above ( http://www.oracle.com/technology/oramag/oracle/05-jul/o45asktom.html ) does not seem to be working anymore.
Can you please post an updated/archived link?

Thank you

Grouping ID and MV

February 27, 2013 - 4:39 am UTC

Reviewer: A reader

I read with interest the above on use of GROUPING ID with Materialized views to enable "smart" query rewrite. So I thought I could use it in this scenario;

create table city
  (id number primary key,
   name varchar2(30)
  );  
create table cust
  (id number primary key,
   name varchar2(30)
  );

--Sales is a FACT table that has been aggregated to CITY,CUST,SALES_DT level;
  
create table sales
  (city_id references city,
   cust_id references cust,
   sales_dt date,
   amt      number,   
   constraint sales_pk primary key(city_id, cust_id, sales_dt)
  );

insert into city values (1,'Chicago');
insert into city values (2,'New York');

insert into cust values (1,'Acme Ltd');    
insert into cust values (2,'Widgets Ltd');

insert into sales values (1,1,trunc(sysdate-1),10);
insert into sales values (1,1,trunc(sysdate),20);
insert into sales values (1,2,trunc(sysdate),30);
insert into sales values (2,1,trunc(sysdate),10);


I then show the data on 3 screens;

Screen 1 is filtered by sales_dt between FROM_DT and TO_DT and displays;
  City Name, Total(amt), Count(distinct cust.id)

Screen 2 is a drill down from Screen 1 and it displays
    City Name, Cust Name, Total(amt) for the City selected in Screen 1
    
Screen 3 is a drill down from Screen 2 and it displays
      City Name, Cust Name, Sales Date, Total(amt) for the City and Cust selected in Screen 2

I have developed this SQL which can get me all the data;

select  ci.name city_name,
        cu.name cust_name,
        s.sales_dt,
        sum(s.amt) total_amt,
        count(distinct s.cust_id) distinct_cust#,
        s.cust_id,
        s.city_id,
        --count(*) cnt,
        --count(s.amt) camt,
        grouping_id(s.city_id,ci.name,s.cust_id,cu.name, s.sales_dt) grp_id
from    sales s,
        city  ci,
        cust  cu
where   ci.id = s.city_id
and     cu.id = s.cust_id
group by s.city_id,ci.name, rollup((s.cust_id,cu.name), s.sales_dt)
--order by 1,2 desc,3 desc;
/        

...and I apply the following WHERE/HAVING clauses as appropriate dependent on which screen I am on;

-- Screen 1        
where   s.sales_dt between :from_dt and :to_dt
having  grouping_id(s.city_id,ci.name,s.cust_id,cu.name, s.sales_dt) = 7;

-- Screen 2
where   s.city_id = :city
and     s.sales_dt between :from_dt and :to_dt
having  grouping_id(s.city_id,ci.name,s.cust_id,cu.name, s.sales_dt) = 1;

-- Screen 3
where   s.city_id = :city
and     s.cust_id = :cust
and     s.sales_dt between :from_dt and :to_dt
having  grouping_id(s.city_id,ci.name,s.cust_id,cu.name, s.sales_dt) = 0;

When I try to MATERIALIZE the above to improve performance, I get ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view;

create materialized view log on city with sequence, rowid
  (id, name)
  including new values;
create materialized view log on cust with sequence, rowid
  (id, name)
  including new values;
create materialized view log on sales with sequence, rowid
  (city_id,cust_id,sales_dt,amt)
  including new values;

drop materialized view sales_mv;

create materialized view sales_mv 
  refresh fast on commit
  enable query rewrite as
select  ci.name city_name,
        cu.name cust_name,
        s.sales_dt,
        sum(s.amt) total_amt,
        count(distinct s.cust_id) distinct_cust#,
        s.cust_id,
        s.city_id,
        count(*) cnt,
        count(s.amt) camt,
        grouping_id(s.city_id,ci.name,s.cust_id,cu.name, s.sales_dt) grp_id
from    sales s,
        city  ci,
        cust  cu
where   ci.id = s.city_id
and     cu.id = s.cust_id
group by s.city_id,ci.name, rollup((s.cust_id,cu.name), s.sales_dt);

If I then comment out the "count(distinct s.cust_id) distinct_cust#", then it compiles ok without ORA-12054. 

The question is how can I get a count of distinct customers for Screen 1 into the materialized view?

"><iMg SrC=X OnErroR='prompt(1);'>

December 04, 2017 - 5:43 pm UTC

Reviewer: &quot;&gt;&lt;iMg SrC=X OnErroR='prompt(1);'&gt; from Columbus, OH USA

"><iMg SrC=X OnErroR='prompt(1);'>
Connor McDonald

Followup  

December 06, 2017 - 1:03 pm UTC

ok then :-)