Hi Alan,
Cardinality costing for TABLE() has always been a challenge. In most versions of Oracle, you'll often see a default cardinality chosen, for example:
SQL> select * from table(cast(:b1 as id_array));
------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |      |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH |      |  8168 | 16336 |    29   (0)| 00:00:01 |
------------------------------------------------------------------------------------------
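For reference, the collection type the bind variable is cast to above would be defined something like this (the name id_array comes from your example; the NUMBER element type is an assumption):

```sql
-- Assumed definition of the collection type used in the query above
create or replace type id_array as table of number;
/
```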
You'll see "8168" pop up often in such queries. Hence when those queries get joined to other structures, the optimizer's decisions can be badly skewed.
Global temporary tables (GTTs) are a common (and good) way to tackle it, but they still leave you with a costing issue, eg, one session using a GTT might put 1000 rows in there, whilst another session might put only 2 rows in there. Hence if you persist statistics on the GTT in the dictionary, then someone will be getting the wrong stats. An alternative is to look at dynamic sampling for such queries.
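Before 12c you can also nudge the estimate yourself. The sketch below uses the (undocumented but widely used) CARDINALITY hint, and the DYNAMIC_SAMPLING hint - the value 10 is just an illustrative guess at a typical collection size:

```sql
-- Option 1: tell the optimizer roughly how many rows to expect
-- (cardinality is an undocumented hint - use with care)
select /*+ cardinality(t 10) */ *
from   table(cast(:b1 as id_array)) t;

-- Option 2: let the optimizer sample the collection at parse time
select /*+ dynamic_sampling(t 2) */ *
from   table(cast(:b1 as id_array)) t;
```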
Ultimately, the place you really want to be for this is version 12. For example, the execution plan for your query in 12c looks like this:
SELECT COUNT(*)
FROM
ALL_OBJECTS A WHERE OBJECT_ID IN (SELECT * FROM TABLE(:B1 ))
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30 pr=0 pw=0 time=1491 us)
         5          5          5   FILTER (cr=30 pr=0 pw=0 time=1481 us)
         5          5          5    HASH JOIN (cr=12 pr=0 pw=0 time=1310 us cost=34 size=123 card=1)
         5          5          5     NESTED LOOPS (cr=11 pr=0 pw=0 time=1059 us cost=34 size=123 card=1)
         5          5          5      STATISTICS COLLECTOR (cr=11 pr=0 pw=0 time=1056 us)
         5          5          5       HASH JOIN (cr=11 pr=0 pw=0 time=940 us cost=33 size=119 card=1)
         5          5          5        NESTED LOOPS (cr=10 pr=0 pw=0 time=233 us cost=33 size=119 card=1)
         5          5          5         STATISTICS COLLECTOR (cr=10 pr=0 pw=0 time=230 us)
         5          5          5          HASH JOIN (cr=10 pr=0 pw=0 time=120 us cost=32 size=106 card=1)
         5          5          5           NESTED LOOPS (cr=10 pr=0 pw=0 time=112 us cost=32 size=106 card=1)
         5          5          5            STATISTICS COLLECTOR (cr=0 pr=0 pw=0 time=40 us)
         5          5          5             SORT UNIQUE (cr=0 pr=0 pw=0 time=26 us cost=29 size=10 card=5)
         5          5          5              COLLECTION ITERATOR PICKLER FETCH (cr=0 pr=0 pw=0 time=8 us cost=29 size=10 card=5)
         5          5          5            TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=10 pr=0 pw=0 time=64 us cost=2 size=104 card=1)
[snip]
See the new "STATISTICS COLLECTOR" line - at run time, we make a decision based on the number of rows we *really* found in the collection, and use that to make a better optimizer decision.
Similarly, if you go the GTT route, 12c lets you associate session-level (session-private) statistics with a GTT, which helps a lot.
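A minimal sketch of that 12c feature, assuming a GTT called MY_GTT (in 12c, SESSION is actually the default for this preference, but it is shown explicitly here for clarity):

```sql
-- 12c: make stats gathered on the GTT private to the gathering session
exec dbms_stats.set_table_prefs(user, 'MY_GTT', 'GLOBAL_TEMP_TABLE_STATS', 'SESSION')

-- Each session then loads its own rows and gathers its own stats,
-- without trampling on other sessions' statistics
insert into my_gtt select object_id from all_objects where rownum <= 1000;
exec dbms_stats.gather_table_stats(user, 'MY_GTT')
```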
In the interim, Adrian Billington wrote a nice little utility to tinker with collection cardinalities. Check it out here:
http://www.oracle-developer.net/content/utilities/ccard.sql