Hi there,
I am facing an issue where I need more than 1,000 values in the IN operator of a query.
Please suggest a workaround. Here is the code snippet:
Note: RMID IN can have 1,000+ values. I also tried using OR RMID IN () for more than 999 values, but I'm not sure of the performance. Kindly help.
select count ( case when attandingtype = 1 then '1' end ) as existing_hp_count,
count ( case when attandingtype = 2 then '1' end ) as new_hp_count,
count ( case when attandingtype = 3 then '1' end ) as policy_count
from worktracker
where trunc (calldate) between add_months ( trunc (sysdate), -3 ) and trunc (sysdate)
and rmid in ('1001', '1212');
Thanks!
Your basic options are:
Use a (temporary) table

Load the values into another table first, then use the temp table in your IN condition:
insert into gtt ...
select * from worktracker
where rmid in ( select id from gtt );
The upside is you can handle any number of values. The downside is you have an extra insert to do, slowing the process down.
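A minimal sketch of the whole flow, assuming a global temporary table named gtt with a single id column (matching the snippet above); the table name and column are illustrative:

```sql
-- Hypothetical GTT: rows are private to the session and
-- cleared automatically at commit
create global temporary table gtt (
  id varchar2(10)
) on commit delete rows;

-- Load the search values first (typically a batch insert from the app)
insert into gtt ( id ) values ( '1001' );
insert into gtt ( id ) values ( '1212' );

-- Then use the temp table in the IN condition; no 1,000-item limit applies
select count ( case when attandingtype = 1 then '1' end ) as existing_hp_count
from   worktracker
where  rmid in ( select id from gtt );
```

With "on commit delete rows" the insert and select must happen in the same transaction; use "on commit preserve rows" if you want the values to survive a commit.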
Split the list into groups < 1,000

select * from worktracker
where rmid in ( 1, ..., 999 )
or rmid in ( 1001, ..., 1999 )
or ...
You can now handle any number of values, though you have to manage the value splitting yourself.
The big drawback is it's tough for the optimizer to get a good plan with large OR lists; remember IN is shorthand for many ORs.
Plus - assuming you change how many values you search for each time - this could lead to lots of parsing (a problem with large IN lists in general) which is A Bad Thing. And as the text of the SQL will be relatively long, the parsing itself could take a while.
Use a multi-value IN list

You can compare pairs of values with an IN list; multi-value IN lists allow you to compare more than 1,000 items.
To do this, choose a constant value and put parentheses around it and the column. Then check whether this pair is in a series of:
( const, value )
e.g.:
select * from worktracker
where ( 1, rmid ) in (
( 1, '1001' ),
( 1, '1212' ),
...
)
This has similar downsides to splitting the values into chunks < 1,000; you could have lots of slow parses and may have suboptimal plans.
Pass the values as one string and split it in SQL

Here you pass the values as one string, then use your favourite csv-to-rows method to split it up.
For example:
with vals as (
select '1001,1212' search_values from dual
), rws as (
select regexp_substr ( search_values, '[^,]+', 1, l ) v
from vals, lateral (
select level l
from dual
connect by level <= length ( search_values ) -
length ( replace ( search_values, ',' ) ) + 1
)
)
select * from worktracker
where rmid in ( select * from rws );
The advantage of this is you can make search_values a bind variable, so the SQL text stays the same and is parsed once. The downside is it's easy to exceed the varchar2 limit (32,767 bytes) if you're searching for > 1,000 values, so you may hit errors.
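To illustrate the bind-variable point, here is a sketch of the same split using a hypothetical bind named :search_values in place of the literal string:

```sql
-- :search_values is a bind holding the csv, e.g. '1001,1212'
-- Count the commas to work out how many values there are,
-- then extract each one with regexp_substr
with rws as (
  select regexp_substr ( :search_values, '[^,]+', 1, level ) v
  from   dual
  connect by level <= length ( :search_values )
                    - length ( replace ( :search_values, ',' ) ) + 1
)
select * from worktracker
where  rmid in ( select v from rws );
```

Because the statement text never changes, you avoid the hard-parse-per-list problem that plagues the literal IN-list approaches above.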
PS - it looks to me like you're converting rows to columns - aka a pivot. You may want to use the pivot operator over the convoluted case expressions:
https://blogs.oracle.com/sql/how-to-convert-rows-to-columns-and-back-again-with-sql-aka-pivot-and-unpivot
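For reference, a sketch of the same counts written with the pivot operator; this keeps the date filter from your original query (add the rmid condition in the inner query as needed):

```sql
-- Pivot attandingtype values 1/2/3 into three count columns
select *
from (
  select attandingtype
  from   worktracker
  where  trunc ( calldate ) between add_months ( trunc ( sysdate ), -3 )
                            and     trunc ( sysdate )
)
pivot (
  count ( * )
  for attandingtype in (
    1 as existing_hp_count,
    2 as new_hp_count,
    3 as policy_count
  )
);
```

Note the inner query should only project the columns you pivot on plus any grouping columns; anything else becomes an implicit group by.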