Question and Answer

Tom Kyte

Thanks for the question, Dawar.

Asked: July 09, 2007 - 2:35 pm UTC

Last updated: September 04, 2013 - 5:38 pm UTC

Version: 11

Viewed 1000+ times

You Asked

Tom,

Oracle has introduced a beta version of Oracle 11g.

What are the new features of Oracle 11g?

Please compare 11g with the previous versions. Thanks.


Regards,
Dawar Naqvi



and Tom said...

will do....


after 11g is out there for all to use.

We call it the new features guide, saves me a lot of time....


Rating

  (90 ratings)


Comments

Oracle 11g Express

Wolfgang, July 10, 2007 - 4:31 pm UTC

Hi Tom,

do you know if there will also be an 11g Express Edition at the same time the "big" 11g is out?

Regards
Wolfgang
Tom Kyte
July 10, 2007 - 8:15 pm UTC

same time - no.

soon after - sure.

soon being an absolutely relative term

One for Tom

AJA, July 11, 2007 - 12:01 pm UTC

I read the other day about a new feature that Tom will probably appreciate.
The PL/SQL compiler will report a warning when it encounters a WHEN OTHERS exception handler that is not followed by a RAISE or RAISE_APPLICATION_ERROR statement.
-- At least according to Don Burleson's web site.
Tom Kyte
July 11, 2007 - 1:10 pm UTC

that is part of my next oracle magazine column....

yes, I was a proponent of that one :)
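As a sketch of the behavior being discussed (the warning text below is paraphrased from memory; the exact wording may differ by release), compiling a procedure that silently swallows all exceptions, with PL/SQL warnings enabled, looks roughly like this:

SQL> alter session set plsql_warnings = 'ENABLE:ALL';

Session altered.

SQL> create or replace procedure p
  2  as
  3  begin
  4      null;
  5  exception
  6      when others then
  7          null;   -- swallowed: no RAISE or RAISE_APPLICATION_ERROR follows
  8  end;
  9  /

SP2-0804: Procedure created with compilation warnings

SQL> show errors
Errors for PROCEDURE P:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/10     PLW-06009: procedure "P" OTHERS handler does not end in
         RAISE or RAISE_APPLICATION_ERROR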

grants

A reader, July 11, 2007 - 4:58 pm UTC

I still hope there will be a way to avoid repeating grants from one schema to another.
It is not enough to do grant x on y to z; several times you also have to regrant as grant x on y to z with grant option; so the grantee can pass the privilege along.
This is one requirement that discourages the use of several schemas to organize your info.
:]

11g and db links

pat b, July 11, 2007 - 5:08 pm UTC

It would be nice to be able to create a private db link owned by some other user while logged in as a DBA.

wizard based backup and recovery

Yusan, July 11, 2007 - 11:19 pm UTC

Tools to do wizard-based backup and recovery would be very useful in 11g!

To do a backup, users would just enter the name (path) of a networked drive or any kind of device (USB hard disk, tape, etc.), and Oracle would do the rest (saving all the info needed to do a recovery).

To do a recovery, users would just enter the drive where all the backup info is maintained, and Oracle would do the rest (possibly asking the user for the date and time to which the database should be restored).
Recovery would also include restoring on another machine:
install the Oracle software, start the wizard, enter the path where Oracle keeps all the backups, and Oracle does the rest.

No more entering any command line!

yusan:

A reader, July 12, 2007 - 9:37 am UTC

Wizard-based backup is already available in Oracle 10g Enterprise Manager.

Don't rush to 11g

Dinesh, July 16, 2007 - 2:43 pm UTC

My experience is that version x-1 is better than version x. Wait for the follow-on releases of 11g (11gR2, patch sets, etc.) before jumping on the bandwagon!

Another expected improvement

A reader, July 17, 2007 - 8:59 am UTC

Another "bug" I hope gets "fixed": when you are connected as X you can't grant from Y to X. I think that if you have the role, you should be able to do it. This causes problems when you have long scripts that synchronize databases with schemas. Obviously this can be worked around, but I'm not sure it is a necessary restriction.

[1]: (Error): ORA-01749: you may not GRANT/REVOKE privileges to/from yourself

Granting to herself

Michel Cadot, July 17, 2007 - 3:05 pm UTC


Tom, I think what "A reader" means is:
SQL> set role dba;

Role set.

SQL> grant select on scott.emp to michel;
grant select on scott.emp to michel
                             *
ERROR at line 1:
ORA-01749: you may not GRANT/REVOKE privileges to/from yourself


SQL> show user
USER is "MICHEL"
SQL> @v

Version Oracle : 10.2.0.3.0

You can't grant an object privilege to yourself even if you have GRANT ANY OBJECT PRIVILEGE system privilege.

Regards
Michel

Download Oracle Database 11g - for Linux x86

Jignesh, August 13, 2007 - 5:45 am UTC

PL/SQL function result cache

Mark, August 16, 2007 - 5:05 pm UTC

I haven't tried 11g yet, but the PL/SQL function result cache sounds like a really nice feature. For those who haven't read about what's new, this feature allows you to write nondeterministic functions that cache the return values. Subsequent calls to the function having the same parameter values will return the cached return value rather than running the function again. Whenever tables in a specified list (presumably the tables used in the function for calculations) are modified, the cache is cleared out.

My initial thought was this feature would only be useful for functions that query read-only or mostly-read tables...But I doubt purging the cache requires much overhead even for moderately modified tables. Again I haven't tested this feature so I may be blowing air. :)

Does anyone else have comments on the new features?

Mark
Tom Kyte
August 20, 2007 - 10:15 pm UTC

but "moderately modified" are "mostly-read tables"

it is "cool" - this feature. it does what it purports to do.

Pasko, August 21, 2007 - 8:32 am UTC

Arup Nanda has started new series for 11G Features:

http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/index.html

I have read somewhere that almost all the new cool features are extra-cost Options.
Tom Kyte
August 22, 2007 - 11:32 am UTC

you would have read wrong then, there are some 400 new things.

resultset caching

Mark, August 21, 2007 - 10:10 am UTC

In my own ambiguous, unscientific language, "moderately modified" means more frequently modified than "mostly read". :)

I'm also looking forward to the server-side and client-side resultset caches, one reason being that these features will help eliminate those middle-tier caches. (I've more than once had to talk programmers out of adding "last_updated" timestamps to check if their middle tier cache should refresh...Yuck.)

FBI Vs Index on Virtual Column

Zahir M, September 07, 2007 - 3:38 pm UTC

What is the difference between an FBI and an index on a virtual column?
Is the FBI made obsolete by the index on a virtual column?

Tom Kyte
September 12, 2007 - 10:17 am UTC

an index on a virtual column would be a function based index.

think about it :) what is a virtual column - a function, an index on it is therefore....
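To illustrate the point (table and column names here are made up), the two spellings of the same idea:

-- pre-11g: a function-based index
create index emp_upper_ename_idx on emp ( upper(ename) );

-- 11g: a virtual column, plus a plain index on it
alter table emp add
  ( ename_upper varchar2(30) generated always as ( upper(ename) ) virtual );
create index emp_ename_upper_idx on emp ( ename_upper );

Either way the optimizer ends up with an index on UPPER(ENAME); the virtual column simply gives the expression a name you can also query and gather statistics on.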

A reader, September 09, 2007 - 4:18 am UTC

Read your article on
https://asktom.oracle.com/Misc/oramag/on-oracle-database-11g.html

I don't have 11g on my PC, but can I ask a few theoretical questions:

1) The query result cache using the hint "/*+ result_cache */" - does it work for multiple tables (joins)?

2) Is there a little bit of parsing for subsequent callers to cross-check the query anyway, and are the cached rowids then checked against the current SCN to decide whether to refresh it?

3) If 2) above is true, is it only a change to a stored rowid - and not the rest of the data in the table being queried - that causes a refresh of the query?

4) The PL/SQL bit - is it possible to rely on multiple tables, relies_on(table1, table2)?

5) Does the PL/SQL function cache just the name of the function (including schema name), input parameters, and output parameters - and not the query or queries it runs?

6) If the above is correct, what happens with PIPELINED functions? Considering the results are piped, are they still cached?

7) What happens if the PL/SQL function has a return type of a collection - will it be refreshed itself, or should the output always be a scalar datatype?

Thanks

Ravi
Tom Kyte
September 12, 2007 - 10:41 am UTC

1) yes.
2) it will be similar to a materialized view rewrite - a technology in the database for a long long time. I call these things "just in time materialized views".

3) it is currently segment based, not all of the way down to the individual row, it is like a materialized view - they are "stale" and need to be refreshed as soon as a row in the underlying table(s) change

4) yes

5) just the signature + inputs and outputs. You have to hard wire the dependencies for it

6) not sure, will have to play with it when I get a bit of time :)

7) not sure what you mean "will they be refreshed by itself"

Result set caching and pagination

Stew Ashton, September 12, 2007 - 1:01 pm UTC


Tom, can the /*+ result_cache */ hint be put in a subquery? If so, it would seem to be a very promising tool for result set pagination:
select *
  from (select a.*, rownum RNUM
          from (select /*+ result_cache */ <MAIN QUERY WITH ORDER BY>
               ) a
         where rownum <= :lastline)
 where RNUM >= :firstline;
The "main query" would then be in the cache for any subsequent pages.

Does it work this way? (I suspect not.)
Tom Kyte
September 15, 2007 - 6:47 pm UTC

query or query fragment...

http://docs.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elements006.htm#SQLRF20004

but, in a very dynamic environment, with lots of varying inputs to the search, I'm doubtful that it would have too much of an effect.

Oracle Magazine (Vol. XXI / Number 5)

Richard, September 13, 2007 - 5:31 am UTC

I saw your example of PL/SQL caching, with the 1-second sleep, in the latest Oracle Magazine, and a thought crossed my mind: one would have to exercise care when using sleep (or similar) in such a way, since, if caching is enabled, it is ignored. So it would sleep the first time around (or the next time the "relies_on" table was changed and the PL/SQL code subsequently run), but not the next.

A caveat, I suppose.

Can we migrate old partitioning to new partitioning?

Charlie B., September 28, 2007 - 2:35 pm UTC

Excellent information as always!

We have a lot of large tables (250+ GB) which are partitioned by date, so of course we keep going in and creating new partitions as time goes by. Will there be a way to migrate these to interval partitioning based on date, without rebuilding the table from scratch?

Since we're currently keeping one month per partition, I'd love to have a magic wand I could wave to apply, say,

partition by range (date_created)
interval (numtoyminterval(1,'MONTH'))

so that all future partitions were built automatically as needed. But you can't always get what you want...
Tom Kyte
October 03, 2007 - 12:55 pm UTC

ops$tkyte%ORA11GR1> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 10    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION junk VALUES LESS THAN (MAXVALUE)
 12  )
 13  /

Table created.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> alter table t set interval( numtodsinterval(1,'day') );
alter table t set interval( numtodsinterval(1,'day') )
*
ERROR at line 1:
ORA-14759: SET INTERVAL is not legal on this table.


ops$tkyte%ORA11GR1> alter table t drop partition junk;

Table altered.

ops$tkyte%ORA11GR1> alter table t set interval( numtodsinterval(1,'day') );

Table altered.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select partition_name, high_value from user_tab_partitions where table_name = 'T' order by partition_position;

PARTITION_ HIGH_VALUE
---------- -------------------------------------------------------------------------------------
PART1      TO_DATE(' 2003-03-13 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
PART2      TO_DATE(' 2003-03-14 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

ops$tkyte%ORA11GR1> insert into t values ( sysdate, 1, 'x' );

1 row created.

ops$tkyte%ORA11GR1> select partition_name, high_value from user_tab_partitions where table_name = 'T' order by partition_position;

PARTITION_ HIGH_VALUE
---------- -------------------------------------------------------------------------------------
PART1      TO_DATE(' 2003-03-13 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
PART2      TO_DATE(' 2003-03-14 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P89    TO_DATE(' 2007-10-04 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

ops$tkyte%ORA11GR1> insert into t values ( sysdate-30, 1, 'x' );

1 row created.

ops$tkyte%ORA11GR1> select partition_name, high_value from user_tab_partitions where table_name = 'T' order by partition_position;

PARTITION_ HIGH_VALUE
---------- -------------------------------------------------------------------------------------
PART1      TO_DATE(' 2003-03-13 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
PART2      TO_DATE(' 2003-03-14 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P90    TO_DATE(' 2007-09-04 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P89    TO_DATE(' 2007-10-04 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

Sweet! You just made my day!

Charlie B., October 05, 2007 - 8:49 am UTC


total recall

Piotr, November 09, 2007 - 3:14 am UTC

What are the real differences between 'total recall' and historical record management through Workspace Manager? Some dude at the Oracle 11g launch conference in Poland said that Workspace Manager's solution was based on triggers and that was why it was slower, but I'm not sure whether I should believe that.
Tom Kyte
November 09, 2007 - 12:18 pm UTC

"Some dude" :)

Anyway - workspace manager is trigger based, you can see the triggers. Workspace manager saves everything in a single table. Workspace manager piggybacks on your transaction - making it larger.

Flashback data archive does not use triggers.
Flashback data archive stores the history in separate segments - so the base table with current data is kept small.
Flashback data archive happens in the background by mining UNDO information - so your existing transactions are not impacted by the use of this feature.
Flashback data archive supports data retention and purging; workspace manager does not.
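As a rough sketch of the syntax involved (the archive name, tablespace, quota and retention here are arbitrary examples), a flashback data archive is created once and then assigned to tables, after which history is queried through ordinary flashback SQL:

create flashback archive fda_7yr
  tablespace fda_ts
  quota 10g
  retention 7 year;

alter table emp flashback archive fda_7yr;

-- later:
select * from emp as of timestamp ( systimestamp - interval '30' day );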

function result cache

Giridhar, November 28, 2007 - 10:08 am UTC

Tom,
i am reading the following article and trying to reproduce same set of statements to learn about function result cache.

https://asktom.oracle.com/Misc/oramag/on-oracle-database-11g.html

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for Solaris: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production

Elapsed: 00:00:00.00


Details about my output:


--------------------------------------------------------------------------
SQL> create or replace
function not_cached
( p_owner in varchar2 )
return number
as
l_cnt number;
begin
select count(*)
into l_cnt
from t
where owner = p_owner;
sys.dbms_lock.sleep(1);
return l_cnt;
end;
/

Function created.

Elapsed: 00:00:00.13
SQL> create or replace
function cached
( p_owner in varchar2 )
return number
result_cache
relies_on(T)
as
l_cnt number;
begin
select count(*)
into l_cnt
from t
where owner = p_owner;
dbms_lock.sleep(1);
return l_cnt;
end;
/

Function created.

Elapsed: 00:00:00.08
SQL> exec dbms_output.put_line( not_cached( 'SCOTT' ) );

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.06
SQL> exec dbms_output.put_line( not_cached( 'SCOTT' ) );


PL/SQL procedure successfully completed.

Elapsed: 00:00:01.01
SQL> exec dbms_output.put_line( not_cached( 'SCOTT' ) );


PL/SQL procedure successfully completed.

Elapsed: 00:00:01.02
SQL> set serveroutput on
SQL> exec dbms_output.put_line( not_cached( 'SCOTT' ) );

0

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.02
SQL> exec dbms_output.put_line( cached( 'SCOTT' ) );

0

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.02
SQL> exec dbms_output.put_line( cached( 'SCOTT' ) );

0

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.02
SQL> exec dbms_output.put_line( cached( 'SCOTT' ) );

0

PL/SQL procedure successfully completed.

SQL> exec dbms_output.put_line( cached( 'SCOTT' ) );
0

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.01
--------------------------------------------------------------------------




I am supposed to get the results quickly for the "cached" call. However, I still don't see any change in the response time. May I know what I am missing here?

Thank you

Giridhar

Tom Kyte
November 28, 2007 - 11:09 pm UTC

what is the output of

show parameter result_cache



Output of "show parameter result_cache"

Giridhar, November 28, 2007 - 11:51 pm UTC

Thank you Tom. Output is given below:


SQL> show parameter result_cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
client_result_cache_lag              big integer 3000
client_result_cache_size             big integer 0
result_cache_max_result              integer     5
result_cache_max_size                big integer 0
result_cache_mode                    string      MANUAL
result_cache_remote_expiration       integer     0


Thanks
Giridhar
Tom Kyte
November 29, 2007 - 8:36 am UTC

your DBA disabled the result cache by setting result_cache_max_size to zero.
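A sketch of turning it back on (the 32M size is an arbitrary example; note that if the cache was disabled at instance startup, changing the parameter dynamically may not take effect until a restart, so scope=spfile plus a bounce is the safe route):

SQL> alter system set result_cache_max_size = 32M scope = spfile;

System altered.

After a restart, the first cached('SCOTT') call should still take about a second (it populates the cache), and subsequent calls should return in roughly no time.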


Real-time Query capability of physical standby

Sokrates, November 29, 2007 - 10:29 am UTC

In my eyes that's really a great feature!
( http://download.oracle.com/docs/cd/B28359_01/server.111/b28294/manage_ps.htm#i1017111 )

So our physical standby not only protects us and helps us take backups without touching production - we can also run reports against it!

What I do not understand about it is: what was so difficult about implementing it? Can you give me an idea?

Performance of TOTAL RECALL

Kevin Meade, December 05, 2007 - 5:26 pm UTC

Hi Tom

My customers want to take a seven year old system (not big), and ask historical questions of it. The database is not large, but the app is pretty intelligent with regards to business rules.

I was thinking that maybe TOTAL RECALL was made for this. But I have tried other Advanced Features of Oracle (workspace manager for example) and had difficulties with them, so I'd like to know if what I am contemplating is dumb or not, before I spend time on it. Specifically I am worried about performance.

The database has:

1) tables (doh!)
2) each table has an audit table xxx_A that looks just like it but with extra control data
3) _A tables have every change made to the table plus date, ordered sequence number etc.
4) this sequence number is actually serialized for all tables in the database, so every change in every table can be ordered in its original order with respect to every other change made in the database.

As a proof of concept I was thinking about using FIXED_DATE as a way to load data into a new database, applying changes along the way, so that this new database would think it loaded data over a period of years and thus TOTAL RECALL would be able to FLASH BACK.

1) create a new 11g database
2) use FIXED_DATE to set the database to the start time
3) copy the tables and _A tables to this new database
4) create another user with empty versions of these tables
5) process my changes one at a time into this new user's tables
   a) get a change
   b) set FIXED_DATE to this change's date
   c) apply the change
   d) commit
   ** rinse and repeat till done


What do you think - does it sound dumb? Sound like it might work? Does TOTAL RECALL even perform fast enough to make historical access practical? Have you got a better idea?

Thanks, Kevin
Tom Kyte
December 10, 2007 - 8:02 am UTC

fixed_date will not work, you have to play with the actual system clock.

total recall is a way going forward.

it is not a way for existing data - it would take a lot of work to do what you describe and it would have to be done at the OS level. In theory it could work, in practice, I do not think I would recommend it.

I know it "could work" - because that is how they generated the test case - by setting the system clock back and simulating years and years of transactions.

thanks, I can use this

Kevin, December 10, 2007 - 10:07 am UTC

Thanks very much. Once again you give me direction and leave me satisfied.

Kevin

Adaptive cursor sharing

Stew Ashton, December 14, 2007 - 2:36 am UTC


Tom, thanks for the heads up on the new Oracle Optimizer blog, it's dynamite.
http://optimizermagic.blogspot.com/
However, in a followup to their December 3rd post, they wrote:
"We hope that in the future, this feature will persuade people to set cursor_sharing to force."

I'm a bit worried that such a statement may send the wrong message. Developers may start to think that Oracle doesn't need bind variables anymore: they will just submit statements with literals and Oracle will turn the literals into bind variables and assign the right plan.

If that happens, statement caches won't work and soft parse rates will go up. SQL injection will rule.

Don't you agree that no matter how well the optimizer does its job in the future, good SQL development will always require proper use of bind variables?
Tom Kyte
December 14, 2007 - 1:12 pm UTC

yes, because cursor sharing = force can only turn a massive hard parse problem into a massive soft parse problem :)

that will never change.

Difference between Result Cache and MV

sriram vrinda, March 07, 2008 - 4:47 am UTC

Hi Tom,

Thanks for giving the details on RESULT CACHE. As mentioned by you, they are "just in time materialized views". But then when should I consider using an MV and when should I consider using the result cache? Or does that mean 11g gives me simplified materialized view creation and management?

Correct me if i am wrong

Thanks again


Tom Kyte
March 10, 2008 - 9:58 am UTC

the result cache works when you don't have a materialized view. A materialized view is permanent, persistent, across reboots, just there.

the result cache is the answer - after someone made the answer - did the work.

Just think about what they are, what they do, how they 'exist', and apply them as you would *any* tool.


Export utility

bakunian, April 24, 2008 - 12:33 pm UTC

Tom,

Thank you for yesterday's great presentation in LA on 11 things about 11g. You mentioned that 11g is the last release that supports the exp utility, so what will the method be for transportable tablespaces - will it be integrated into Data Pump or made part of the DBMS_TTS package? Also, what would the impact be on transporting an encrypted tablespace? Since the target database will not have the same key as the source, will the key be exported as part of the metadata?
Tom Kyte
April 28, 2008 - 12:18 pm UTC

you have already been able to use expdp (data pump) to transport - in 10g...

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101

As for TTS with encryption...
http://docs.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007233


SQLPLUS GUI Tool in 11g?

Maverick, April 30, 2008 - 1:56 pm UTC

Tom, what happened to the SQL*Plus GUI tool in the 11g database? I just installed 11g and saw only the command-line sqlplus. What happened to the SQL*Plus Windows tool?

Thanks,
Tom Kyte
April 30, 2008 - 3:24 pm UTC

sqlplusw was deprecated.

sql developer
sqlplus command line

that is what there is - sqlplus command line works (better) just like sqlplusw did.

relies_on(package) ?

Matthias Rogel, May 09, 2008 - 3:46 am UTC

Hallo Tom

I want to post here the question which was left open yesterday at your DOAG talk (absolutely great to hear you - you did in 150 minutes more than anyone else could do in 2 days!), and also two other ones that came to mind on my way home:

I.
relies_on does not support package, according to
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/langelems.htm#i34368
only views and tables are supported.

Why ? - and what about (I don't have access to an 11g instance at the moment to play with it myself, sorry for that !)

consider
create package p is

  function version return varchar2;

end p;


create package body p is

  version_ constant varchar2(10) := 'V1.0';

  function version return varchar2 is
  begin
    return version_;
  end version;

end p;


create function version_of_p return varchar2
result_cache
is
begin
  return p.version;
end;

Now, after one call to version_of_p, the result will be cached, and a new version of the body of p would not invalidate version_of_p - but would it invalidate the cache?

create or replace package body p is

  version_ constant varchar2(10) := 'V2.0';

  function version return varchar2 is
  begin
    return version_;
  end version;

end p;


?


II. (client-side result caching)

You said the client is informed by the server whenever its client result cache becomes invalid.
So that's an asynchronous mechanism in the background, I assume.
Which server process is responsible for notifying the client?
In the case of a dedicated server, the one belonging to the client session, I assume - correct?
Then who notifies this server process of the changes which (probably) invalidated the client cache?
LGWR?



III. (Flashback Data Archive)

You said that in the implementation of the archive tables the SCN is involved very deeply.
Does this mean that an
open resetlogs
destroys the archive tables?

Many thanks
Matthias





Tom Kyte
May 12, 2008 - 10:28 am UTC

I - implementation detail at this point, it is the way it is.

If a function says "I am cache-able", that implies that all code it invokes is "cache-able". It is sort of like saying

"I am a deterministic function, as long as the TABLES I rely on are not modified"

It is exactly like a deterministic function in that sense - as long as the tables are not modified - we can cache you.

I would argue (strongly) that dropping code into a running system isn't something we'd be doing like this anyway....


II - no, it is based on time and the client touching the database (the client result cache lag setting I described)


You tell the server: I am going to cache "select count(*) from T"


Now, every time you go to the server for anything - select * from emp, fetch a row, commit, run this procedure - whatever - the server has the ability to piggy back an invalidation message. If you run procedure P; - you might find out then that T was modified and the next time you go to "select count(*) from T", you will go to the server and actually execute it to refresh your cache.

The problem would be - what if you only do "select count(*) from T" and nothing else - you'd never be going to the server - so, after the result cache lag amount of time happens and you haven't been to the server - we'll automatically stick in a "hey server - anything change" call to find out if anything was invalidated.


Or, the problem would be - you ran some sql, you have gone idle and have been idle for a while. Now you suddenly decide to execute "select count(*) from T", so we'll go to the server to ensure the cached result we have is still OK.


Normally, we'll never go to the server for this "ping", your "select count(*) from T" will always know if it is OK or not - because you are frequently going to the server for other things. It would only be if this "select count(*) from T" was the first thing you did after grabbing a connection from the connection pool that we might have to go over to the server and ask 'is it still fresh'
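For reference, the knobs behind the mechanism described above are server initialization parameters (the values here are illustrative only; they also appeared in the "show parameter" output earlier on this page):

client_result_cache_size = 32M     -- per-client-process cache; 0 disables it
client_result_cache_lag  = 3000    -- max time in milliseconds before the client
                                   -- "pings" the server to check for invalidations

with the /*+ result_cache */ hint marking which queries a client application may cache.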


III resetlogs doesn't reset an SCN.

ops$tkyte%ORA11GR1> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                54222408

ops$tkyte%ORA11GR1> connect / as sysdba;
Connected.
sys%ORA11GR1> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
sys%ORA11GR1> startup mount
ORACLE instance started.

Total System Global Area  849530880 bytes
Fixed Size                  1303216 bytes
Variable Size             608177488 bytes
Database Buffers          234881024 bytes
Redo Buffers                5169152 bytes
Database mounted.
sys%ORA11GR1> recover database until cancel;
Media recovery complete.
sys%ORA11GR1> alter database open resetlogs;

Database altered.

sys%ORA11GR1> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                54222785


thank you

Matthias Rogel, May 12, 2008 - 11:03 am UTC

for explaining

A reader, October 23, 2008 - 4:49 am UTC

Hi Tom,

SQL> DROP TABLE QRC;

Table dropped.

SQL> CREATE TABLE QRC AS SELECT * FROM ALL_OBJECTS;

Table created.

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SET TIMING ON
SQL> SELECT OBJECT_ID,OBJECT_NAME FROM QRC;
Elapsed: 00:00:00.06

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=287 Card=73537 Bytes=2206110)

   1    0   TABLE ACCESS (FULL) OF 'QRC' (TABLE) (Cost=287 Card=73537   Bytes=2206110)

SQL> SELECT /*+ RESULT_CACHE */ OBJECT_ID,OBJECT_NAME FROM QRC;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=287 Card=73537 Bytes=2206110)

   1    0   RESULT CACHE OF 'b3tfyd0mxygmcgzr5rdsvma6rh'
   2    1     TABLE ACCESS (FULL) OF 'QRC' (TABLE) (Cost=287 Card=73537 Bytes=2206110)

SQL> SELECT /*+ NO_RESULT_CACHE */ OBJECT_ID,OBJECT_NAME FROM QRC;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=287 Card=73537 Bytes=2206110)

   1    0   TABLE ACCESS (FULL) OF 'QRC' (TABLE) (Cost=287 Card=73537 Bytes=2206110)

My question is: once I have used result_cache, even if I then use no_result_cache, the timing suggests the result is still in the cache. Why?

Can you explain a bit about the query result cache?

regards,
Manoj.V

Tom Kyte
October 23, 2008 - 1:36 pm UTC

...
SQL> SELECT OBJECT_ID,OBJECT_NAME FROM QRC;
Elapsed: 00:00:00.06

...


explain to me the difference between 00.00 and 00.06 sqlplus elapsed timewise?

Because if you ask me, 0.06 = 0.00 = 0.10 probably, meaning, it happens so fast, it is not accurately measurable using a watch with only 1/100th of a second resolution.


and besides, you do know that autotrace traceonly explain doesn't run the select at all, right? You just asked for the plan; it knows "no need to actually RUN the select statement".

use tkprof and sql_trace to evaluate query response times and resource usage.
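A minimal sketch of that approach (the trace file name below is made up; yours will differ):

SQL> alter session set tracefile_identifier = 'qrc_test';
SQL> alter session set sql_trace = true;
SQL> SELECT /*+ RESULT_CACHE */ OBJECT_ID, OBJECT_NAME FROM QRC;
SQL> alter session set sql_trace = false;

$ tkprof orcl_ora_12345_qrc_test.trc qrc_test.prf sys=no

The tkprof report shows the actual parse/execute/fetch counts and elapsed times, rather than just the plan, so you can see whether the result cache really short-circuited the work.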

autotrace traceonly explain doesn't run the select at all

Sokrates, October 24, 2008 - 4:40 am UTC

- is this a new feature ?
- will it always show the same plan as if it would actually run the select ?
Tom Kyte
October 24, 2008 - 1:24 pm UTC

no, it is as old as autotrace.

autotrace traceonly explain never ran a SELECT

it does however execute the update/insert/delete/merge statements.


with autotrace traceonly explain, it knows

a) no output to be displayed
b) no execution statistics to be displayed
c) only the plan will be visible

so executing the select statement beyond getting the plan is not useful - it skips it.

will it be the plan as if it would actually run the select?

maybe - not assured. In 10g, it would be better to

a) run query
b) use dbms_xplan.display_cursor


for example:


ops$tkyte%ORA10GR2> create table t (x varchar2(30) primary key, y int );

Table created.

ops$tkyte%ORA10GR2> variable x number
ops$tkyte%ORA10GR2> exec :x := 5;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select * from t where x=:x;

Execution Plan
----------------------------------------------------------
Plan hash value: 1360331368

------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cos
------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |     1 |    30 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T            |     1 |    30 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C0030974 |     1 |       |
------------------------------------------------------------------------

<b>autotrace used explain plan under the covers, explain plan does not

a) peek at binds
b) see the DATATYPE of binds - it assumes all binds are strings
</b>

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"=:X)

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set serveroutput off
ops$tkyte%ORA10GR2> select * from t where x=:x;

no rows selected

ops$tkyte%ORA10GR2> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  020j3jtzyygd0, child number 0
-------------------------------------
select * from t where x=:x

Plan hash value: 1601196873

------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |     1 |    30 |     2   (0)| 00:00:01
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER("X")=:X)

Note
-----
   - dynamic sampling used for this statement


22 rows selected.




So, since we bound a number - the index cannot be used. But explain plan didn't know that, because explain plan cannot/does not see the datatype; it assumed :x was a string - and if it were, the index could have been used.



SQL Plan Management

Nir L, October 24, 2008 - 12:22 pm UTC

Dear Tom,
I've been reading a lot about this new feature, trying to understand the concept behind it, and I have not been able to fully understand it so far.

As I read it, SQL Plan Management offers control over execution plan changes by verifying the effect of the new plan on performance.
Two questions:
1. The plan evolve process - is it meant to be done manually by the DBA? Or does Oracle do it automatically (executing DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE), and after executing it, does Oracle decide whether the new plan is accepted?
2. When you manually want to load baselines, what does it mean that they can be loaded from a SQL Tuning Set? Is that like outlines?

I guess what I'm trying to understand in general is whether this feature is automatic, intelligent SQL plan management by Oracle alone, or whether it requires DBA intervention (even when automatic capturing is used)?

Thanks in advance

Nir
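
For reference, the manual side of the evolve process looks roughly like this - a sketch only; parameter defaults and the report format vary by patch level:

```sql
-- have the optimizer capture baselines for repeating statements
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- later, ask SPM to test-execute any non-accepted plans; the function
-- returns a CLOB report describing which plans were accepted
SELECT DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
         sql_handle => NULL,    -- NULL here means: consider all candidate baselines
         verify     => 'YES',   -- execute and compare before accepting
         commit     => 'YES')   -- accept plans that verify as better
FROM   dual;
```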

Database Replay & Performance Analyzer

plotnic, November 13, 2008 - 10:13 am UTC

Hi Tom,

Both features seem to perform some kind of replay.
I'm assuming Database Replay actually performs the DMLs (for example, replaying will cause actual inserts, updates, etc.).

What about the Performance Analyzer? Is DML statement impact assessed by actually executing the statements physically (insert, update...)?


Thanks,
plotnic
Tom Kyte
November 14, 2008 - 4:38 pm UTC

it has to have executed them in general - in order to see if they do more, less or the same amount of work...

Data warehousing in 11g - what's new?

A reader, June 22, 2009 - 2:45 pm UTC

Hi - all our databases are currently on version 10.2.0.4. We are working on the design of a new data warehouse BI project and we are considering 11g. What would you suggest - continue using 10g, or definitely go to 11g? What's new in the data warehousing features of 11g that you think is far better than 10g?

Database Replay & Performance Analyzer

Alen, June 26, 2009 - 4:55 am UTC

Hi,

just a few comments regarding the question from reader plotnic - it seems that SQL Performance Analyzer actually performs only the query part of DML statements.

This is the tkprof trace of an update statement. See, there are 5 rows actually updated:

update t
set object_id = object_id
where object_id = 2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.26        149         73          0           0
Execute      1      1.45      11.46      69307      69431          7           5
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.47      11.72      69456      69504          7           5


while this is a tkprof output of the same update executed by the dbms_sqlpa.execute_analysis_task with execution_type=TEST EXECUTE:

/* SQL Analyze(384,0) */ update t
set object_id = object_id
where object_id = 2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      1.43      12.34      69307      69431          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.43      12.34      69307      69432          0           0

See, there are 0 rows updated, but all the other metrics are almost identical.

I also checked the values of the ora_rowscn pseudo column and they didn't change after dbms_sqlpa.execute_analysis_task.

It seems that simple insert statements are not executed at all.

11g standby

A reader, September 28, 2009 - 10:04 am UTC

1. I just tried to follow this link step by step to create an 11g standby database.

http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-dataguard.html

These are the list of steps I have done

Database test11 on srv1
Database tst11sb on srv2

Plan to create standby database on srv2

a. Check existence of spfile on srv1 for test11

b. Create standby redo logs on primary database

c. Add an entry on listener.ora for the standby database on srv2

    (SID_DESC =
      (GLOBAL_DBNAME = tst11sb)
      (ORACLE_HOME = /orahome/OraHome11)
      (SID_NAME = tst11sb)
    )

d. Reload the listener

e. Add TNS entry on primary server to connect to standby

f. Create a file on srv2 at /orahome/OraHome11/dbs/inittst11sb.ora with just one line

db_name=tst11sb

g. On srv2, create an adump directory.

h. Check for existence of orapwtest11 on srv1 and copy that to orapwtst11sb on srv2

i. Startup the standby database on srv2

SQL> startup nomount
ORACLE instance started.

Total System Global Area  217219072 bytes
Fixed Size                  2087216 bytes
Variable Size             159385296 bytes
Database Buffers           50331648 bytes
Redo Buffers                5414912 bytes
SQL> 

j. Connect to rman on the primary database

RMAN> @cr_sby_11g.sql

RMAN> connect target *

RMAN> connect auxiliary *

RMAN>  
RMAN> run {
2>    allocate channel c1 type disk;
3>    allocate auxiliary channel s1 type disk;
4>  
5>    duplicate target database
6>         for standby
7>         from active database
8>         dorecover
9>         spfile
10>         parameter_value_convert 'test11','tst11sb'
11>         set 'db_unique_name'='tst11sb'
12>         set db_file_name_convert='/orahome/oradata/test11/','/orahome/oradata/tst11sb/'
13>         set log_file_name_convert='/orahome/oradata/test11/','/orahome/oradata/tst11sb/'
14>         set control_files='/oradata/tst11sb/control01.ctl'
15>         set standby_file_management='AUTO'
16>    ;
17>   sql channel c1 "alter system archive log current";
18>   sql channel s1 "alter database recover managed standby database using current logfile disconnect";
19> }
allocated channel: c1
channel c1: SID=150 device type=DISK

allocated channel: s1
channel s1: SID=98 device type=DISK

Starting Duplicate Db at 25-SEP-09

contents of Memory Script:
{
   backup as copy reuse
   file  '/orahome/OraHome11/dbs/orapwtest11' auxiliary format 
 '/orahome/OraHome11/dbs/orapwtst11sb'   file 
 '/orahome/OraHome11/dbs/spfiletest11.ora' auxiliary format 
 '/orahome/OraHome11/dbs/spfiletst11sb.ora'   ;
   sql clone "alter system set spfile= ''/orahome/OraHome11/dbs/spfiletst11sb.ora''";
}
executing Memory Script

Starting backup at 25-SEP-09
Finished backup at 25-SEP-09

sql statement: alter system set spfile= ''/orahome/OraHome11/dbs/spfiletst11sb.ora''
released channel: c1
released channel: s1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 09/25/2009 15:47:06
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on clone_default channel at 09/25/2009 15:47:06
RMAN-11003: failure during parse/execution of SQL statement: alter system set spfile= '/orahome/OraHome11/dbs/spfiletst11sb.ora'
ORA-32017: failure in updating SPFILE
ORA-32019: The parameter SPFILE cannot be updated in the server parameter file.

RMAN> 
RMAN> **end-of-file**

Can you help with this ? I tried to search on this error but I am not getting much help.

2. I have installed 11g Rel 2, what is the default CPU patch that comes with that release ?

Intelligent cursor sharing - 11G

Rajeshwaran, Jeyabal, October 21, 2009 - 12:19 am UTC

CREATE TABLE T AS
SELECT CASE WHEN ROWNUM=1 THEN 1 ELSE 99 END AS id,
  a.*
FROM ALL_OBJECTS a;

CREATE INDEX T_IND ON T(ID);
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'T',estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE);

rajesh@11G> select id,count(*)
  2  from T
  3  group by id;

        ID   COUNT(*)
---------- ----------
         1          1
        99      68501
  
rajesh@11G> alter system flush shared_pool;

System altered.

rajesh@11G>   DECLARE
  2     l_Cursor sys_refcursor;
  3     l_data   sys.ODCINumberList := sys.ODCINumberList(99,1);
  4     Type t_Typ is table of T%ROWTYPE;
  5     l_rec t_Typ;
  6    BEGIN
  7     FOR i in 1..l_data.count
  8     loop
  9      open l_cursor for 'select * from T where id = :x ' using l_data(i);
 10      Loop
 11       fetch l_cursor BULK COLLECT into l_rec limit 500;
 12       exit when l_cursor%notfound;
 13      End loop;
 14      close l_cursor;
 15     end Loop;
 16    END;
 17  /

PL/SQL procedure successfully completed.

rajesh@11G> SELECT SQL_ID
  2  FROM V$SQL
  3  WHERE UPPER(SQL_TEXT) LIKE 'SELECT * FROM T%';

SQL_ID
-------------
grgx3v25r6hc0
grgx3v25r6hc0

rajesh@11G> select child_number, 
  2  stats_row_mismatch
  3  FROM V$SQL_SHARED_CURSOR 
  4  WHERE SQL_ID = 'grgx3v25r6hc0'
  5  /

CHILD_NUMBER S
------------ -
           0 N
           1 N
     
rajesh@11G> SELECT * FROM TABLE (dbms_xplan.display_cursor('grgx3v25r6hc0','0'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
SQL_ID  grgx3v25r6hc0, child number 0
-------------------------------------
select * from T where id = :x

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   290 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    | 34251 |  3478K|   290   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=:X)


18 rows selected.

rajesh@11G> SELECT * FROM TABLE (dbms_xplan.display_cursor('grgx3v25r6hc0','1'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
SQL_ID  grgx3v25r6hc0, child number 1
-------------------------------------
select * from T where id = :x

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   290 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    | 34251 |  3478K|   290   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=:X)


18 rows selected.

rajesh@11G> select * from product_component_version;

PRODUCT                                            VERSION                        STATUS
-------------------------------------------------- ---------------------------    --------------------------- 
NLSRTL                                             11.1.0.6.0                     Production
Oracle Database 11g Enterprise Edition             11.1.0.6.0                     Production
PL/SQL                                             11.1.0.6.0                     Production
TNS for 32-bit Windows:                            11.1.0.6.0                     Production

Tom,
I am working on your example at https://asktom.oracle.com/Misc/oramag/on-tuning-by-tracing.html
I am not able to see two different plans used by the 11g optimizer. I am not able to reproduce what you did. Can you please help me?
Tom Kyte
October 23, 2009 - 5:55 am UTC

first, you have to make the mistake more than once - you only made the mistake once, you might have given adaptive cursor sharing a chance to figure out "made a mistake", but you didn't give it a chance to fix it.

second, you gathered stats in a way that did not assure histograms - based on your estimated card=34,251 - I don't think we'd see a different plan for id=1 versus id=99.

third, you estimated stats - and given our example has a single exceptional row - you'd probably want to compute to ensure consistency (we have a strange case; you normally would not have a single row)

fourth - there is an issue in plsql 11.1.0.6 whereby adaptive cursor sharing may not always be able to kick in due to plsql handling of cursors (caching and the like)

In 11.1.0.7 and above, your test would look like this:


ops$tkyte%ORA11GR1> CREATE TABLE T AS
  2  SELECT CASE WHEN ROWNUM=1 THEN 1 ELSE 99 END AS id,
  3          a.*
  4  FROM ALL_OBJECTS a;

Table created.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> CREATE INDEX T_IND ON T(ID);

Index created.

ops$tkyte%ORA11GR1> exec dbms_stats.gather_table_stats(user,'T',method_opt=>'for all indexed columns size 254',estimate_percent=>100);

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR1> alter system flush shared_pool;

System altered.

ops$tkyte%ORA11GR1> set autotrace traceonly explain
ops$tkyte%ORA11GR1> select * from t where id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 69649 |  7073K|    80   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    | 69649 |  7073K|    80   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=99)

ops$tkyte%ORA11GR1> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1376202287

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |   104 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   104 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IND |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=1)

ops$tkyte%ORA11GR1> set autotrace off
ops$tkyte%ORA11GR1>   DECLARE
  2     l_Cursor sys_refcursor;
  3     l_data   sys.ODCINumberList := sys.ODCINumberList(99,1,1,1);
  4     Type t_Typ is table of T%ROWTYPE;
  5     l_rec t_Typ;
  6    BEGIN
  7     FOR i in 1..l_data.count
  8     loop
  9      open l_cursor for 'select * from T where id = :x ' using l_data(i);
 10      Loop
 11       fetch l_cursor BULK COLLECT into l_rec limit 500;
 12       exit when l_cursor%notfound;
 13      End loop;
 14      close l_cursor;
 15     end Loop;
 16    END;
 17  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR1> column sql_id new_val sqlid
ops$tkyte%ORA11GR1> SELECT sql_text, SQL_ID
  2  FROM V$SQL
  3  WHERE UPPER(SQL_TEXT) LIKE 'SELECT * FROM T WHERE ID = :X%';

SQL_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID
-------------
select * from T where id = :x
grgx3v25r6hc0

select * from T where id = :x
grgx3v25r6hc0


ops$tkyte%ORA11GR1> SELECT * FROM TABLE (dbms_xplan.display_cursor('&sqlid','0'));
old   1: SELECT * FROM TABLE (dbms_xplan.display_cursor('&sqlid','0'))
new   1: SELECT * FROM TABLE (dbms_xplan.display_cursor('grgx3v25r6hc0','0'))

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  grgx3v25r6hc0, child number 0
-------------------------------------
select * from T where id = :x

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    80 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    | 69649 |  7073K|    80   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=:X)


18 rows selected.

ops$tkyte%ORA11GR1> SELECT * FROM TABLE (dbms_xplan.display_cursor('&sqlid','1'));
old   1: SELECT * FROM TABLE (dbms_xplan.display_cursor('&sqlid','1'))
new   1: SELECT * FROM TABLE (dbms_xplan.display_cursor('grgx3v25r6hc0','1'))

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  grgx3v25r6hc0, child number 1
-------------------------------------
select * from T where id = :x

Plan hash value: 1376202287

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   104 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IND |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=:X)


19 rows selected.

ops$tkyte%ORA11GR1> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE    11.1.0.7.0      Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production



New Features in 11g

Santosh Vijayan, October 29, 2009 - 12:30 am UTC

Greetings Tom,

I have a question regarding the RMAN 11g multisection backup of a datafile. Assume a datafile of 4GB with around 1GB of used blocks. Block change tracking is also enabled. If I take a backup of the datafile using multiple channels with a section size of 500MB, will the RMAN channel processes span the entire 4GB, or only the 1GB of used blocks?

RMAN> run {
allocate channel c1 type disk format '/backup1/%U';
allocate channel c2 type disk format '/backup2/%U';
backup section size 500m datafile 6;
}

Please let me know

Thanks

Santosh Vijayan
Tom Kyte
October 29, 2009 - 8:35 am UTC

rman nowadays only reads blocks that contain data that must be backed up. So, it would read at most 1gb - but probably less, since you have block change tracking on; it would only read the blocks that changed if you did an incremental.

New Features 11g

Santosh Vijayan, November 02, 2009 - 4:30 am UTC

Thanks a lot

Flash cache

Alexander, December 15, 2009 - 10:22 am UTC

Tom,

Is flash cache an Exadata-only thing? From what little I've found in the documentation, it looks like another additional cache in Oracle's architecture.

What's the benefit of using this? Is this something you'd recommend moving towards for new 11g installations?

Also, it does not look like this will be allocated from the new memory parameter that combines sga + pga.
Tom Kyte
December 15, 2009 - 10:32 am UTC

Exadata and Oracle Enterprise Linux (OEL)

It uses a solid state device as a secondary SGA - when you read a block into the cache and we go to flush it out later - we not only write it to disk, but to the flash cache as well.

The next time we need to read it, we read from the flash cache - not disk - into the SGA.

If the buffer cache is say 100x faster (in isolation, assume NO contention for the cache) than disk, the flash cache might be 30-40 times faster than disk - so not as good as getting a hit in the SGA, but way better than disk.

It takes a small bit of your SGA to manage the pointers to the flashcache, but the vast preponderance of the 'memory' is the flash cache itself.
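
As a concrete illustration (11g Release 2 syntax; the file path and size here are made up), the flash cache is enabled with two initialization parameters:

```sql
-- point the database at a file on the flash device and size it;
-- the settings take effect at the next instance startup
ALTER SYSTEM SET db_flash_cache_file = '/flash/ora_flash_cache.dat' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 400G SCOPE=SPFILE;
```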

Alexander, December 15, 2009 - 10:53 am UTC

Sounds like "double buffering" to me, which I thought was to be avoided. Can I assume the blocks not found in the buffer cache will likely be in the flash cache, as opposed to, say, a file system's cache? (I heard the flash storage is intended to be quite large, in the 500 GB range...)

We had an Oracle consultant on site; that's where I heard these things. He also mentioned something very interesting about the storage having some kind of intelligence to process data from queries and only send the results to the database. Instead of all the data needing to be processed being sent to the database, it does the processing, then sends the results to the client. Any idea what I'm talking about?
Tom Kyte
December 15, 2009 - 11:08 am UTC

you have databases, they measure in the hundreds to thousands of gigabytes.

you have DRAM, that typically measures in the 10's of gigabytes.

database does not fit in DRAM.


you have solid state devices which can measure in the hundreds to thousands of gigabytes.

the database can mostly - or at least a large percentage of it can - fit into the flash cache. Hence, it makes sense to use the flash cache as a backing store for the buffer cache in memory.


Double buffering should be avoided when it can be, obviously. If you have 32gb of memory on a database machine and give 16gb to the database and 16gb to the operating system, then what would happen is the database would cache say 10gb of data from the database - and the OS would try to cache another 10gb in the filesystem cache.

Now, we COULD enlarge the SGA from 16 to 30gb right - and instead of the OS buffering some and the SGA buffering some - we just have the SGA buffer it.

But with the flashcache - you don't have nearly the same situation. You have 32gb of DRAM and presumably the database is using all of it and you have say 1tb of flash cache - might as well use it as faster disk to buffer with. Then you have 100tb of disk - to store the data persistently.

It is not double buffering as the flash cache cannot be rolled into the SGA like the file system cache can be rolled into the SGA. Double buffering happens when you have two things doing the same thing - the flash cache is NOT doing the buffer cache work.


... he also mentioned something very interesting about the storage having some kind of intelligence to be able to process data from queries, and only send the results to the database. ...

that is exactly the Exadata storage array, yes. It is an Oracle SAN if you will, the database gets a query and instead of saying to disk "hey disk, send me that 10 terabytes of data", the database says to Exadata disk "hey disk, I need the sales data for 27-sep-2009 in the state of VA for blue widgets - would you get me that please".

Exadata allows us (in a nutshell, it does a lot more) to push the where clause down to the storage - instead of moving 10tb from storage to server, we move question down to storage, have the storage process the question and return to the server just the rows and columns that make up the answer - we send back maybe 100gb of data for 27-sep-2009 in VA for blue widgets - not 10tb of raw data to process.
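
One rough way to see that offload happening (11.2 v$sql columns; the sql_id is a placeholder you would substitute) is to compare how many bytes were eligible for offload with how many actually crossed the interconnect:

```sql
-- bytes eligible for cell offload vs. bytes actually shipped to the server
SELECT sql_id,
       io_cell_offload_eligible_bytes,
       io_interconnect_bytes
FROM   v$sql
WHERE  sql_id = '&sql_id';
```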


Alexander, December 15, 2009 - 11:16 am UTC

Thanks very much.

So I can't setup flash cache on RedHat? I need Oracle's O/S, which is what they took over from Sun?
Tom Kyte
December 15, 2009 - 1:07 pm UTC

OEL - Oracle Enterprise Linux - has been around for years, way before any talks with Sun. It is a Red Hat fork.

OEL or Sun Operating systems...

http://structureddata.org/2009/11/24/oracle-11gr2-database-flash-cache-patch-for-oracle-enterprise-linux/

A reader, February 01, 2010 - 12:41 pm UTC

Is Windows 2008 Server ( 32 or 64 bit) certified for 11g Release 1 ?

Per Installation Document , it is certified for Windows 2003 Server .

Per another metalink note ( id : 782450.1) , applies to Windows 2008 . Which one is right ?
Tom Kyte
February 01, 2010 - 12:45 pm UTC

please utilize support for this, you have metalink access ....

RMAN compression in 11g

A reader, February 01, 2010 - 2:48 pm UTC

Our biggest database is around 200 GB and it runs on 10.2.0.4. When we were building this database in production, we considered using RMAN compression, but we found that CPU usage goes up much higher, so we gave up on that idea.
Now we are building a new database on 11g and I want to know if there are any improvements in RMAN 11g compression. Can we definitely use compressed backup sets and not get high CPU at the same time? Please advise.
Tom Kyte
February 01, 2010 - 4:13 pm UTC

to compress will always take.....


more cpu, you never get something for nothing.


... Can we definitely use the compressed
backup sets and not get a high CPU at the same time ? ...

not until the laws of physics are changed to accommodate you. Please give this some consideration - think about what you are asking....


You want to conserve on disk space? Use more of another resource- CPU - to do that.

The compression Thing

Marwan, February 02, 2010 - 1:48 am UTC

"not until the laws of physics are changed to accommodate you" - that just made me laugh out loud..

I think what the "reader" meant to ask is whether a new compression algorithm is used in 11g RMAN that would use less CPU than the one used in 10g RMAN.




Compression

Marwan, February 02, 2010 - 2:01 am UTC

I have found a link which discusses this issue:

http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-rman.html

apparently a new algorithm, "ZLIB", is faster than the standard one in 11g (the same one used in 10g), which is "BZIP2".

But as Tom indicated, physics still plays its part and you won't get the same space savings.

Search for "ZLIB compression" in the above link and you will find what you need.
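
In RMAN terms the switch is a one-liner (11g syntax; note that in 11g the non-default ZLIB algorithm requires the Advanced Compression option):

```sql
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
```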

Database replay and triggers

Lise, February 08, 2010 - 6:16 am UTC

Hi,
A replay of a workload will reapply the transactions. Does this mean that any transaction against a table that has a trigger attached will cause the trigger to fire?
Tom Kyte
February 15, 2010 - 10:13 am UTC

sure, it replays the client messages to the database - stored procedure calls, SQL, whatever the client sent over.

You can use this to test a new trigger implementation, a new stored procedure, a new view, etc.

ORA-32017 ORA-00384

Prasad, March 19, 2010 - 9:10 am UTC

On Oracle 11.1.0.6, 64-bit: memory_target was set to 5G, with sga_target and all pool parameters set to 0. pga_aggregate_target was explicitly set to 750MB. However, Oracle would not allocate more than 64MB of RAM to the buffer cache. We looked through multiple snapshots and never found a case where db_cache_size was above 64MB, despite a staggering amount of disk reads. So we tried to set db_cache_size manually to a minimum value:

SQL> alter system set db_cache_size=1024M;

alter system set db_cache_size=1024M

*

Error at line 1:
ORA-32017: failure in updating SPFILE
ORA-00384: Insufficient memory to grow cache

Tom Kyte
March 19, 2010 - 10:04 am UTC

I'll say the same thing I just wrote on my blog in response to you posting the same exact thing over there


you must have had some other parameter, other than pga_aggregate_target, set greater than zero to observe what you say... I believe you might have - and not realized it.

RESULT_CACHE

Sarayu K.S., April 23, 2010 - 11:14 pm UTC

Hi Tom,

I have a question on RESULT_CACHE. Please let me know if my understanding is correct.

I am executing these statements in order on Oracle 11.1.0.6.0 Enterprise Edition.

1) exec dbms_result_cache.flush
2) alter system flush shared_pool;
3) alter system flush buffer_cache;

Then I run the query without the result_cache hint:

4) Select count(*) from objs; 

--> The above query is not using the result cache; it is running for the first time, so we will see physical reads & consistent gets

5) select count(*) from objs; 

--> again the same query; this time only consistent gets, as the data is read from the buffer cache

6) select /*+ result_cache */ count(*) from objs; 

-- Let's introduce result_cache now

Will the above query get data from the buffer cache, or will it use the result cache and get the result from memory instead of running the query? Or will this 6th statement run the same as (5) and prepare the ground for usage of the result cache, so that any queries later can benefit from it?

What I see from my results below is that the first time we use result_cache, it is actually preparing for usage of the result cache, and any further execution of the query using result_cache actually uses it. Would you please confirm whether my understanding is correct?

Thanks
Sarayu

====


00:08:30 SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
PL/SQL Release 11.1.0.6.0 - Production
CORE    11.1.0.6.0      Production
TNS for Solaris: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production

Elapsed: 00:00:00.02
00:08:32 SQL> exec dbms_result_cache.flush

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
00:08:36 SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.32
00:08:40 SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.10
00:08:43 SQL> set autotrace on
00:08:50 SQL> Select count(*) from objs;

  COUNT(*)
----------
     69918

Elapsed: 00:00:00.17

Execution Plan
----------------------------------------------------------
Plan hash value: 386529197

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   289   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| OBJS | 69918 |   289   (1)| 00:00:04 |
-------------------------------------------------------------------


Statistics
----------------------------------------------------------
        348  recursive calls
          0  db block gets
       1088  consistent gets
       1039  physical reads
          0  redo size
        524  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

00:08:52 SQL> -- Lets run same query again
00:08:59 SQL> Select count(*) from objs;

  COUNT(*)
----------
     69918

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 386529197

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   289   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| OBJS | 69918 |   289   (1)| 00:00:04 |
-------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1033  consistent gets
          0  physical reads
          0  redo size
        524  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

00:09:03 SQL> -- Lets introduce result cache
00:09:11 SQL> select /*+ result_cache */ count(*) from objs;

  COUNT(*)
----------
     69918

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 386529197

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |   289   (1)| 00:00:04 |
|   1 |  RESULT CACHE       | cnsc9rw3p17364cbg4975pad6y |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| OBJS                       | 69918 |   289   (1)| 00:00:04 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(MYUSER.OBJS); attributes=(single-row); name="select /*+ result_cache */ count(*) from objs"


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       1033  consistent gets
          0  physical reads
          0  redo size
        524  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

00:09:16 SQL> -- Lets run same query using result_cache again..
00:09:29 SQL> select /*+ result_cache */ count(*) from objs;

  COUNT(*)
----------
     69918

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 386529197

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |   289   (1)| 00:00:04 |
|   1 |  RESULT CACHE       | cnsc9rw3p17364cbg4975pad6y |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| OBJS                       | 69918 |   289   (1)| 00:00:04 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(MYUSER.OBJS); attributes=(single-row); name="select /*+ result_cache */ count(*) from objs"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        524  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

00:09:33 SQL>




Tom Kyte
April 26, 2010 - 8:12 am UTC

5) MAYBE - depends on size of cache and size of table, long table full table scans are not cached like short table ones.

6) it has to actually run the query to get the answer, it'll read from disk and the buffer cache, build the result set and then write the result set answer into the shared pool for subsequent reuse.
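To confirm whether the result actually landed in the result cache, you can look at V$RESULT_CACHE_OBJECTS (a diagnostic sketch; the name filter below just matches the hinted query text):

```sql
-- Show cached result objects and their status after the hinted run.
select type, status, name
  from v$result_cache_objects
 where name like '%result_cache%'
 order by type, name;

-- Or get an overall memory/usage summary printed via dbms_output:
set serveroutput on
exec dbms_result_cache.memory_report
```

A `Result` row with status `Published` is what subsequent executions read instead of re-running the query.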

dba_users_with_defpwd

Giridhar, May 29, 2010 - 9:49 am UTC

Hi Tom,

I have few questions on DBA_USERS_WITH_DEFPWD.

The dba_users_with_defpwd view in Oracle 11g provides a list of user IDs which still have their default passwords.
My question is: any user who has SELECT_CATALOG_ROLE might be able to access this view, and that user can then see all the users whose passwords are still the defaults.

Won't it be a risk, since we now expose more information to all users who have access to this view?
Do we have to revoke access to this view from non-DBA users? I agree SELECT_CATALOG_ROLE won't be granted to all users. But with this new feature, are we not letting other users find out, from dba_users_with_defpwd, which accounts have default passwords?

Would you please share your views about my question.

Thank You
Giridhar

Note: I posted same question on
http://forums.oracle.com/forums/message.jspa?messageID=4330011#4330011



Tom Kyte
May 29, 2010 - 10:38 am UTC

... Wont it be a risk as now we expose more information to all users who has access
to this view???
...

well, are you not the one that granted them select any catalog? If you feel it is a security issue - you should get right on revoking that - no?


do you HAVE TO, of course not
do you WANT TO, maybe - you tell me


However, even if you restrict access to this view, have you done anything to secure yourself?

Nope, not at all.

Why?

Because everyone already knows what the accounts are called and what their default passwords are, they would just try them - why bother reading this view when you can just

connect scott/tiger

and see if it works.
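In other words, the useful response to this view is to fix the accounts it reports, not to hide it. A sketch (account names will vary per database):

```sql
-- Find accounts still using their shipped default password...
select username from dba_users_with_defpwd;

-- ...and close the hole: change the password and/or lock the account.
alter user scott identified by "N3w_Str0ng_Pwd" account lock;
```

Once no account has a default password, the view returns no rows and exposes nothing.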

11g

A reader, June 08, 2010 - 4:57 pm UTC

Tom:

Do you lose any 11g features if the Oracle client stays at 9.0 and the server is 11g? The client here is used by a PowerBuilder app to connect to the database.
Tom Kyte
June 10, 2010 - 11:24 am UTC

yes, you lose new features that require client support - such as client side result caching for example.

do I have an exhaustive list? No... I don't. But there would be various client, networking and security features that would not be available.

What you would need to do is ascertain if there was an 11g feature that was "do or die" for you - and then we can tell you if it might be affected by a 9i client. (probably not, it would be the exception, not the rule)

OEL isn't technically a "fork"...

Richard, June 15, 2010 - 12:37 pm UTC

I hope your comment was/is not taken in the traditional sense of the term "fork":

Followup   December 15, 2009 - 1pm Central time zone:

OEL - Oracle Enterprise Linux - has been around for years, way before any talks with Sun. It is a Red Hat fork. 


Usually when folks talk about forking code, they mean there are two independent paths of development that may or may not converge again down the line.

With OEL/RHEL there is technically no "fork" in the code since OEL is RHEL with Oracle re-branding and some bug fixes added in for good measure --- all of which are then released back to Red Hat to avoid divergent code. Subsequent updates to OEL are, presumably, originated from a RHEL baseline update.

Tom Kyte
June 22, 2010 - 10:21 am UTC

you are correct, it is as you describe, sorry for the incorrect terminology.

Teymur, August 26, 2010 - 7:59 am UTC

Dear Tom.

As you are too busy, I could not post this as a new question.

I asked my question first in the OTN forums; however, I did not get any feedback: http://forums.oracle.com/forums/thread.jspa?messageID=4531100

Could you pls help?

Thanks for your time.
Tom Kyte
August 26, 2010 - 1:31 pm UTC

I see an answer in the forums..

To: Timur

Bakunian, August 26, 2010 - 1:03 pm UTC

Timur - this feature is nothing new and existed in Oracle 9i as well. I also answered your question in OTN.

flashback transaction backout

Sokrates, September 01, 2010 - 2:34 am UTC

Yesterday I learnt about the new flashback transaction backout feature in 11g
( DBMS_FLASHBACK.TRANSACTION_BACKOUT )

Interestingly, it isn't mentioned in the New Features Guide.
Is this a documentation bug, or is this feature too unstable to use?
Tom Kyte
September 09, 2010 - 11:52 am UTC

there are thousands of things not mentioned in the new features guide that are only mentioned in the supporting documents.

that is why each document itself has a "whats new in" chapter at the beginning of them as well.

http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e10471/whatsnew.htm#CEGHGGIE

The new features guide is a guide to the "big" new features, there are many that are not "big" enough to be in there.

Parallel Application Versions,

A reader, September 15, 2010 - 9:50 am UTC

Hello,

I have heard about a feature called "parallel application versions" in 11gR2. Could you please give me more information about it, or point me to a white paper or any other document about it?

Thanks,

Tom Kyte
September 15, 2010 - 10:44 am UTC

the only place "parallel" appears on this page is... In your use of it.

I think you might mean "Edition Based Redefinition"

http://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html
http://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.html

New feature,

A reader, September 15, 2010 - 2:08 pm UTC

Thanks for clarifying. So, in your words, that feature (parallel application versions) doesn't exist in 11gR2.

Besides this, can you give more information about stored outline migration to SQL Plan Management? I believe this is also an 11gR2 feature.

3rd question: What is Metrocluster?

Regards,
Tom Kyte
September 15, 2010 - 2:21 pm UTC

1) parallel application versions is a term I've never heard before today. The only tie in I found was to edition based redefinition - which I think is what you mean

http://www.google.com/search?q=parallel+application++versions&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a#sclient=psy&hl=en&client=firefox-a&hs=wXd&rls=org.mozilla%3Aen-US%3Aofficial&q=%22parallel+application++versions%22&aq=f&aqi=&aql=&oq=&gs_rfai=&pbx=1&fp=280187d6f0589da0



2) http://www.oracle.com/pls/db112/search?remark=quick_search&word=migrate+stored+outlines+to+plan+management


3) http://www.google.com/search?q=Metrocluster&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a


search, it rocks and rolls sometimes, doesn't it.

question on physical schema change,

A reader, September 27, 2010 - 3:55 pm UTC

Tom,

I am going through the articles you have written about edition-based redefinition. In the part 2 and part 3 articles, you renamed the employees table and created a view (matching the earlier table name) before making DDL changes on the employees_rt table. I have a few questions.

1. At the end, does employees_rt table still exist?

2.Does employee view continue to exist?

3. When will the employee table (not the view) get the newly added columns?

4. How can I bring back ORA$BASE as the default edition for all users? I don't want VERSION2 as the default edition for all other users.

5. Later, if another change needs to happen in the database, we create a VERSION3 edition and later make it the default. What will happen to the changes made earlier in VERSION2?

Thanks,




Tom Kyte
September 27, 2010 - 5:46 pm UTC

1) yes, it exists forever - but everyone only knows about employees - we use the editioning view to protect the application from the physical schema (which is what views were invented for in the first place)

2) yes, forever, it is there to protect the application from the physical schema and changes to it. There are multiple versions of the editioning view - for when the structure of the underlying table changes

3) as soon as you add in the editioning view you can start adding new columns - the existing code will not be able to see them since the editioning view hides them.

4) alter the database, make ora$base the default - wait till all are done using version2 and then drop it, then go back and undo any physical schema changes you need to undo.

5) you tell me - what would you like to have happen - you can keep them in there so you can see what v2 looked like OR you can drop v2 and anything that was just in v2 (and thus inherited by v3) will become part of v3.
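The pattern under discussion can be sketched roughly like this (names are illustrative, borrowed from the articles' employees/employees_rt example; column lists are assumptions):

```sql
-- One-time setup: rename the table and cover it with an editioning view
-- so existing code keeps seeing "employees".
alter table employees rename to employees_rt;
create editioning view employees as
  select empno, ename, sal from employees_rt;

-- Upgrade in a new edition: the physical change is invisible to ORA$BASE code.
create edition version2 as child of ora$base;
alter table employees_rt add (bonus number);

alter session set edition = version2;
create or replace editioning view employees as
  select empno, ename, sal, bonus from employees_rt;
```

Code compiled in ORA$BASE continues to see the old shape of the view; code in VERSION2 sees the new column, which is the whole point of the online upgrade.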

Follow up,

A reader, September 28, 2010 - 8:52 am UTC

Thanks for the explanation.

About your answer for my question #4, I have a follow up question.

I didn't understand "wait till all are done using version2 and then drop it then go back and undo any physical schema changes you need to undo".

So you mean, undo the physical changes made to the EMPLOYEE_RT table (by dropping the newly added columns) and rewind the stored procedures/packages the way they were earlier? Then how will I continue with my changes? My question is: I want these changes to be permanent (the two new columns are permanent), but I also want ORA$BASE to be my default edition. How is this possible?


Thanks again.




Tom Kyte
September 28, 2010 - 9:23 am UTC

You would not have ora$base as your default edition - what would be the point? editions are 100% transparent to anyone and everyone - you never 'see it'.

You've moved on - you have a new edition - ora$base is gone.


I guess in theory you could:

a) drop ora$base - you are left with just version2
b) create a new edition ora$base as a child of version2
c) drop version2 without having made any changes to ora$base - ora$base is now version2

but that would be something that would just not make sense to do. The edition name isn't really relevant.
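Purely to illustrate the DDL behind those a/b/c steps (a sketch only; as noted above there is no real point in doing this, and whether each drop is permitted depends on which objects each edition actualizes and whether it is still in use):

```sql
-- a) drop ora$base - you are left with just version2
drop edition ora$base cascade;

-- b) create a new edition named ora$base as a child of version2
create edition ora$base as child of version2;

-- c) drop version2 without having changed anything in the new ora$base;
--    ora$base now effectively is what version2 was
drop edition version2 cascade;
alter database default edition = ora$base;
```

Again: the edition name carries no meaning, so renaming-by-recreation buys you nothing.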

Follow up,

A reader, September 28, 2010 - 9:25 am UTC

Thanks Tom. I wanted to understand the concepts before making such decisions (like dropping editions). Now I am convinced.


Total Recall

Manuel Glez, October 04, 2010 - 8:38 am UTC

Hi Tom,
First of all, thanks for your help with these posts.

We need to save historical data for several years (5, 10 or 15 in some cases).
Versions: Oracle RAC 10gR2 and 11gR2.
We need to extract all I/O operations and the results of those operations. We need to extract the results of all "select" operations for five tables over 10 years.
We have to know "who" launched the select and "what" the result of that select was, and we have to save this information.
We are thinking about Audit Vault for the "who", but the problem for us is the result of the select and the I/O operations. Total Recall could be an option, but as you explained in another post, Total Recall only works within the database that receives the change (you cannot save those changes in another database).
The problem for us is that we need to know which data was in the table at any given moment over the last 10 years.
Any idea?

Many thanks in advance
Tom Kyte
October 05, 2010 - 1:24 am UTC

... We
need to extract the result of all "select" operations for five tables during 10
years. ...

rethink that - that is not practical. What would be practical would be to store the queries executed and the "as of scn" they were executed as of - and using flashback data archive you can run the queries as of that point in time to see what they saw. But keeping every result set - that is just not practical. Just keep the query and the as of scn and use flashback data archive. Fine grained auditing can easily capture all of the relevant SQL for you - including the as of time and the "who" - flashback query will show you the "what"

and yes, the archive will be in the database that has the source data - you only need to execute the query in that database to see what they saw.
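The approach Tom describes could be sketched like this (names are assumptions; retention limits and Total Recall licensing depend on your version):

```sql
-- Keep 10 years of row history for the table (flashback data archive).
create flashback archive fda_10y tablespace fda_ts retention 10 year;
alter table hr.employees flashback archive fda_10y;

-- Fine grained auditing: capture who ran which SELECT, and the SCN.
begin
  dbms_fga.add_policy(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'emp_select_audit',
    statement_types => 'SELECT');
end;
/

-- Later: find the captured query and SCN, then replay what that user saw.
select db_user, scn, sql_text from dba_fga_audit_trail;
select * from hr.employees as of scn :captured_scn;
```

You store a small audit record per query instead of every result set, and reconstruct any result on demand with flashback query.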

database replay,

A reader, October 04, 2010 - 1:36 pm UTC

Hello,

Are edition-based redefinition and the database replay feature the same? If not, what is the difference between them?

Thanks,

Tom Kyte
October 05, 2010 - 1:56 am UTC

not even close.

EBR (edition based redefinition) allows you to upgrade an application in the database online - it is a High Availability tool

Real Application Testing allows a single DBA to record a production workload, take it to a test machine - change something on the test machine (like change the database version) and replay that workload to see "what would have happened in production if we had made that change.."
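The capture/replay workflow is driven from the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages; very roughly (directory object and names are assumptions):

```sql
-- On production: record the workload into a directory object.
exec dbms_workload_capture.start_capture(name => 'prod_peak', dir => 'CAPTURE_DIR')
-- ... let the real workload run for the interval of interest ...
exec dbms_workload_capture.finish_capture

-- On the test system, after making the change being evaluated:
exec dbms_workload_replay.process_capture(capture_dir => 'CAPTURE_DIR')
exec dbms_workload_replay.initialize_replay(replay_name => 'trial1', replay_dir => 'CAPTURE_DIR')
exec dbms_workload_replay.prepare_replay
-- start one or more wrc replay clients from the OS, then:
exec dbms_workload_replay.start_replay
```

Afterwards you compare the replay report against the capture to see what the change would have done to production.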

11G - PLSQL Fine Grained Dependency tracking

Hari, October 05, 2010 - 4:32 am UTC

Tom ,
This is regarding another 11g PL/SQL new feature - fine-grained dependency tracking.

I wanted to understand how this feature is implemented, so I created a couple of packages and tried to do the same set of tasks below in both a 10g and an 11g database.

Oracle 10g Implementation

1
Create new package pkg_emp as below 

 
 create or replace
 PACKAGE PKG_EMP AS
 
 PROCEDURE EMP_Vu ;
 
 PROCEDURE EMP_UPDATE;
 
 END;
 /

create or replace
 PACKAGE BODY PKG_EMP AS
 
 
   PROCEDURE EMP_VU  AS
   BEGIN
     DBMS_OUTPUT.PUT_LINE('Proceudre emp_Vu');
   END;
 
   PROCEDURE EMP_UPDATE AS
   BEGIN
     DBMS_OUTPUT.PUT_LINE('Procedure Emp_Update');
   END;
 
 
 
 END ;
 /

2
Create new package my_pkg as below
 
 create or replace
 PACKAGE My_Pkg AS
 
   Procedure Call_pkg_emp; 
 
 END;
 /
 create or replace
 package body My_Pkg  as
 
   PROCEDURE CALL_PKG_EMP IS
   BEGIN
     PKG_EMP.EMP_UPDATE;
   END;
 
 END ;
 /

3 Execute My_pkg.call_pkg_emp;
hr@IRADSDB> exec My_pkg.call_pkg_emp;
Procedure Emp_Update

PL/SQL procedure successfully completed.

4 Modify the pkg_emp package specification - add a new procedure, emp_delete

create or replace
 PACKAGE PKG_EMP AS
 
 PROCEDURE EMP_Vu ;
 
 PROCEDURE EMP_UPDATE;
 
 procedure emp_delete;
 
 END;
 /
 

5 Compile the pkg_emp specification with the changes.
6 Execute My_pkg.call_pkg_emp;

hr@IRADSDB> exec My_pkg.call_pkg_emp;
BEGIN My_pkg.call_pkg_emp; END;

*
ERROR at line 1:
ORA-04063: package body "HR.PKG_EMP" has errors
ORA-06508: PL/SQL: could not find program unit being called: "HR.PKG_EMP"
ORA-06512: at "HR.MY_PKG", line 5
ORA-06512: at line 1

hr@IRADSDB> SELECT OBJECT_NAME ,OBJECT_TYPE, STATUS FROM USER_OBJECTS WHERE OBJECT_NAME='PKG_EMP';

 OBJECT_NAME                    OBJECT_TYPE         STATUS
 ------------------------------ ------------------- -------
 PKG_EMP                        PACKAGE             VALID
 PKG_EMP                        PACKAGE BODY        INVALID

In 10g, since there is a change in the definition of the pkg_emp specification, the package body gets invalidated.
This requires both the package specification and the package body to be recompiled.

Due to the fine-grained dependency checking in 11g, I expected this to be eliminated.

Oracle 11g Implementation:

1 Create pkg_emp[without delete procedure], my_pkg
2 Execute my_pkg.call_pkg_emp
3 observe the results
4 Modify the pkg_emp specification to have an emp_delete procedure
5 Compile the pkg_emp specification
6 execute my_pkg.call_pkg_emp
7 Observe the results
hr@11G> SELECT OBJECT_NAME ,OBJECT_TYPE, STATUS FROM USER_OBJECTS WHERE OBJECT_NAME='PKG_EMP';

OBJECT_NAME OBJECT_TYPE STATUS
------------------------------ ------------------- -------
PKG_EMP PACKAGE VALID
PKG_EMP PACKAGE BODY INVALID

hr@11G> exec My_pkg.call_pkg_emp;
BEGIN My_pkg.call_pkg_emp; END;

*
ERROR at line 1:
ORA-04063: package body "HR.PKG_EMP" has errors
ORA-06508: PL/SQL: could not find program unit being called: "HR.PKG_EMP"
ORA-06512: at "HR.MY_PKG", line 5
ORA-06512: at line 1

Since 11g implements fine-grained dependency tracking, I expected output without errors.

My query is: am I using a correct example, and is my understanding of this feature correct? Please correct me.
Tom Kyte
October 05, 2010 - 12:16 pm UTC

where are you implementing the package body of the PKG_EMP?????

you gave a spec that said it would have that 3rd procedure - but it doesn't. where is that code????


The MY_PKG code is fine, the code you broke (totally broke) is the PKG_EMP code - you said there would be a third procedure, but there isn't - so we cannot execute the body of that code.
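In other words, after changing the spec, the body must be recompiled with the promised third procedure; something like (a sketch, keeping the naming from the example above):

```sql
create or replace package body pkg_emp as

  procedure emp_vu as
  begin
    dbms_output.put_line('Procedure emp_vu');
  end;

  procedure emp_update as
  begin
    dbms_output.put_line('Procedure emp_update');
  end;

  -- the implementation the new spec now promises
  procedure emp_delete as
  begin
    dbms_output.put_line('Procedure emp_delete');
  end;

end;
/
```

With the body recompiled to match the spec, my_pkg.call_pkg_emp runs again; fine-grained dependency tracking never removes the need for a body to implement its spec.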

This is not a dependency issue

Michael Tefft, October 05, 2010 - 10:08 am UTC

Your package body *is* invalid, because it does not have an implementation of your new procedure. If you attempt to compile it you will get an error.

Fine-grained dependency tracking avoids invalidating objects that *can* remain valid. That is not your situation here.

Hari - 11G - PLSQL Fine Grained Dependency tracking

Rajeshwaran, October 05, 2010 - 10:16 am UTC

More information about Finer Grained Dependencies available at
http://download.oracle.com/docs/cd/B28359_01/server.111/b28279/chapter1.htm#FEATURENO07519

rajesh@10GR2> create or replace package pkg as
  2     procedure p1;
  3     procedure p2;
  4  end pkg;
  5  /

Package created.

Elapsed: 00:00:00.04
rajesh@10GR2> create or replace package body pkg as
  2     procedure p1 as
  3     begin
  4             dbms_output.put_line ('p1 invoked');
  5     end;
  6
  7     procedure p2 as
  8     begin
  9             dbms_output.put_line ('p2 invoked');
 10     end;
 11  end;
 12  /

Package body created.

Elapsed: 00:00:00.03
rajesh@10GR2>
rajesh@10GR2> create or replace procedure p as
  2  begin
  3             pkg.p1;
  4  end;
  5  /

Procedure created.

Elapsed: 00:00:00.06
rajesh@10GR2>
rajesh@10GR2> select object_name,status
  2   from user_objects
  3  where object_name in ('PKG','P');

OBJECT_NAME                    STATUS
------------------------------ -------
P                              VALID
PKG                            VALID
PKG                            VALID

Elapsed: 00:00:00.01
rajesh@10GR2> create or replace package pkg as
  2     procedure p1 ;
  3     procedure p2;
  4     procedure p3;   ---newly implemented method
  5  end pkg;
  6  /

Package created.

Elapsed: 00:00:00.04
rajesh@10GR2> create or replace package body pkg as
  2     procedure p1 as
  3     begin
  4             dbms_output.put_line ('p1 invoked');
  5     end;
  6
  7     procedure p2 as
  8     begin
  9             dbms_output.put_line ('p2 invoked');
 10     end;
 11
 12     procedure p3 as
 13     begin
 14             dbms_output.put_line ('p3 invoked');
 15     end;
 16  end;
 17  /

Package body created.

Elapsed: 00:00:00.04
rajesh@10GR2>
rajesh@10GR2> select object_name,status
  2   from user_objects
  3  where object_name in ('PKG','P');

OBJECT_NAME                    STATUS
------------------------------ -------
P                              INVALID
PKG                            VALID
PKG                            VALID

Elapsed: 00:00:00.03
rajesh@10GR2>


Now the standalone procedure 'P' became INVALID in 10g.
But when I repeated the same logic in 11g, I found the following.

rajesh@11GR1> create or replace package pkg as
  2     procedure p1;
  3     procedure p2;
  4  end pkg;
  5  /

Package created.

Elapsed: 00:00:05.40
rajesh@11GR1> create or replace package body pkg as
  2     procedure p1 as
  3     begin
  4             dbms_output.put_line ('p1 invoked');
  5     end;
  6
  7     procedure p2 as
  8     begin
  9             dbms_output.put_line ('p2 invoked');
 10     end;
 11  end;
 12  /

Package body created.

Elapsed: 00:00:00.78
rajesh@11GR1> create or replace procedure p as
  2  begin
  3             pkg.p1;
  4  end;
  5  /

Procedure created.

Elapsed: 00:00:01.11
rajesh@11GR1>
rajesh@11GR1> select object_name,status
  2   from user_objects
  3  where object_name in ('PKG','P');

OBJECT_NAME                    STATUS
------------------------------ -------
P                              VALID
PKG                            VALID
PKG                            VALID

Elapsed: 00:00:00.86
rajesh@11GR1>
rajesh@11GR1>
rajesh@11GR1> create or replace package pkg as
  2     procedure p1 ;
  3     procedure p2;
  4     procedure p3;   ---newly implemented method
  5  end pkg;
  6  /

Package created.

Elapsed: 00:00:00.29
rajesh@11GR1> create or replace package body pkg as
  2     procedure p1 as
  3     begin
  4             dbms_output.put_line ('p1 invoked');
  5     end;
  6
  7     procedure p2 as
  8     begin
  9             dbms_output.put_line ('p2 invoked');
 10     end;
 11
 12     procedure p3 as
 13     begin
 14             dbms_output.put_line ('p3 invoked');
 15     end;
 16  end;
 17  /

Package body created.

Elapsed: 00:00:00.21
rajesh@11GR1>
rajesh@11GR1> select object_name,status
  2   from user_objects
  3  where object_name in ('PKG','P');

OBJECT_NAME                    STATUS
------------------------------ -------
PKG                            VALID
PKG                            VALID
P                              VALID

Elapsed: 00:00:00.03
rajesh@11GR1>


Now the standalone procedure 'P' is not INVALID; instead it is VALID. By reducing the consequential invalidation of dependent objects in response to changes in the objects they depend upon, application availability is increased in Oracle 11g.


But looking at your steps:

1 Create pkg_emp[without delete procedure], my_pkg
2 Execute my_pkg.call_pkg_emp
3 observe the results
4 Modify the pkg_emp specification to have an emp_delete procedure
5 Compile the pkg_emp specification

6 execute my_pkg.call_pkg_emp
7 Observe the results

Since you modified the package specification, you have to recompile the package body, but that step is missing from the steps above.

Intelligent cursor sharing - LIKE Predicates

Rajeshwaran, Jeyabal, October 14, 2010 - 1:04 pm UTC

rajesh@11GR2> select * from v$version;

BANNER
--------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

Elapsed: 00:00:00.01
rajesh@11GR2> drop table t purge;

Table dropped.

Elapsed: 00:00:00.06
rajesh@11GR2>
rajesh@11GR2> create table t
  2  as
  3  select 'aaa' as id,
  4       a.*
  5  from all_objects a;

Table created.

Elapsed: 00:00:05.76
rajesh@11GR2>
rajesh@11GR2> insert into t
  2  select 'bbb' as id,
  3        a.*
  4  from all_objects a
  5  where rownum = 1;

1 row created.

Elapsed: 00:00:00.09
rajesh@11GR2>
rajesh@11GR2> commit;

Commit complete.

Elapsed: 00:00:00.00
rajesh@11GR2>
rajesh@11GR2> create index t_ind on t(id) nologging;

Index created.

Elapsed: 00:00:00.48
rajesh@11GR2>
rajesh@11GR2> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'T',estimate_percent=>100,method_opt=>'for all indexed columns size 254');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.85
rajesh@11GR2>

variable x varchar2(10);

exec :x := 'bbb';


SELECT /* ACS */ *
FROM
 T LOOK_FOR_ME WHERE ID like :X

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91 

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID T (cr=3 pr=0 pw=0 time=0 us cost=3 size=101 card=1)
      1   INDEX RANGE SCAN T_IND (cr=2 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 74805)

The plan indicates that the optimizer chose an index range scan, which is expected because of the selectivity (only 1%) of the value 'bbb'.

exec :x := 'aaa';
SELECT /* ACS */ *
FROM
 T LOOK_FOR_ME WHERE ID like :X

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      478      0.15       0.15          0       2156          0       71490
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      480      0.15       0.15          0       2156          0       71490

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 91 

Rows     Row Source Operation
-------  ---------------------------------------------------
  71490  TABLE ACCESS BY INDEX ROWID T (cr=2156 pr=0 pw=0 time=136187 us cost=3 size=101 card=1)
  71490   INDEX RANGE SCAN T_IND (cr=628 pr=0 pw=0 time=50003 us cost=2 size=0 card=1)(object id 74805)

Now I have given adaptive cursor sharing a chance to figure out that it "made a mistake":
exec :x := 'aaa';
SELECT /* ACS */ *
FROM
 T LOOK_FOR_ME WHERE ID like :X

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      478      0.09       0.15          0       2156          0       71490
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      480      0.09       0.15          0       2156          0       71490

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 91 

Rows     Row Source Operation
-------  ---------------------------------------------------
  71490  TABLE ACCESS BY INDEX ROWID T (cr=2156 pr=0 pw=0 time=135291 us cost=3 size=101 card=1)
  71490   INDEX RANGE SCAN T_IND (cr=628 pr=0 pw=0 time=51538 us cost=2 size=0 card=1)(object id 74805)



exec :x := 'aaa';
SELECT /* ACS */ *
FROM
 T LOOK_FOR_ME WHERE ID like :X

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      478      0.18       0.15          0       2156          0       71490
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      480      0.18       0.15          0       2156          0       71490

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 91 

Rows     Row Source Operation
-------  ---------------------------------------------------
  71490  TABLE ACCESS BY INDEX ROWID T (cr=2156 pr=0 pw=0 time=135291 us cost=3 size=101 card=1)
  71490   INDEX RANGE SCAN T_IND (cr=628 pr=0 pw=0 time=51538 us cost=2 size=0 card=1)(object id 74805)


Tom:

Is it that LIKE predicates do not kick off adaptive cursor sharing in 11gR2?
Tom Kyte
October 15, 2010 - 8:51 am UTC

Works currently with equality and range operations (but not “like” predicates)
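
A quick way to see this for yourself is to check the adaptive-cursor-sharing flags in V$SQL. This is a hedged sketch reusing the table T and bind :X from the trace above; an equality predicate is eligible for ACS, the LIKE predicate is not:

```sql
variable x varchar2(10)
exec :x := 'aaa';

-- equality predicate: a candidate for adaptive cursor sharing
select * from t where id = :x;

-- inspect whether the cursor was marked bind-sensitive / bind-aware
select sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
  from v$sql
 where sql_text like 'select * from t where id = :x%';
```

Run the same check against the LIKE version of the query and IS_BIND_SENSITIVE stays 'N'.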




Total Recall

Manu, October 18, 2010 - 6:40 am UTC

Hi again,

First of all, thank you for your responses.

In my last post I asked you about Total Recall and how I can have information about who did what, and what people saw in the database at any moment, for 5 years (and 15 years in several cases).
I don't have the solution yet, but I am thinking about different solutions.
Would it be possible to use Active Data Guard or Data Guard, and install Total Recall on the standby database, to know the data that I had in my database at any moment?


Tom Kyte
October 25, 2010 - 8:48 am UTC

why???

but - no, not really. Data Guard is designed to provide a FAILOVER site - the tables that are 'copied' there (to the standby) are restricted in their use - since their entire goal in life is to be identical to the production system (for failover use).

why would you not just have flashback data archiving in the source system - where it belongs and makes sense?
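
For reference, setting up flashback data archiving in the source system is a couple of statements. A minimal sketch (the archive name, tablespace, quota, and the `orders` table are illustrative assumptions):

```sql
-- create an archive with a 5-year retention window
create flashback archive fda_5yr
  tablespace fda_ts
  quota 10g
  retention 5 year;

-- enable history tracking for a table
alter table orders flashback archive fda_5yr;

-- later, query the data as of any point inside the retention window
select * from orders as of timestamp (systimestamp - interval '1' day);
```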

Flashback Data Archive

oxbow, October 26, 2010 - 4:59 am UTC

Hi Tom,

I am reading about Flashback Data Archive and have some doubts.
1- When using Flashback Data Archives, can I export and import these archives?
2- If my storage capacity is full, can I export data to tape, and use this tape a few months later?
3- If 1 and 2 = NO, how can I solve this problem?
Tom Kyte
October 26, 2010 - 7:59 pm UTC

1) no
2) no (storage is dirt cheap; tape costs many times more than disk does)
3) buy storage needed to store the data you need to store?


Look - you have identified a need - the need to store N years/months/whatever of data. That need has a cost in terms of storage; you need that storage (you have identified that need).

Intelligent cursor sharing -11G

Rajeshwaran, Jeyabal, December 06, 2010 - 6:15 am UTC

Tom:

I am reading the online oracle magazine and wanted to know that Cardinality Feedback discussed here
http://www.oracle.com/technetwork/issue-archive/2010/10-sep/o50asktom-165477.html

is the same Intelligent cursor sharing - 11G or they are different?

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:349660700346823279#2051582600346105387
Tom Kyte
December 07, 2010 - 9:41 am UTC

they are very different.

cardinality feedback is the optimizer getting smarter about the actual number of rows coming back from a row source step - and then developing a new plan because of it.


adaptive cursor sharing is the optimizer realizing at hard parse time that bind peeking might lead to problems with the query - and marking the query as such. Then, during execution, if the resources used by the query (IOs and the like) deviate from some expected average over time - the optimizer will generate a new set of plans (more than one) for the bind variable values observed.

Cardinality Feedback - 11GR2

Rajeshwaran, Jeyabal, December 07, 2010 - 10:53 am UTC

rajesh@11GR2> select * from v$version;

BANNER
------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

Elapsed: 00:00:00.00
rajesh@11GR2>
rajesh@11GR2> create or replace type vcarray is table of varchar2(4000);
  2  /

Type created.

Elapsed: 00:00:00.06
rajesh@11GR2>
rajesh@11GR2>
rajesh@11GR2> create or replace function parse_list(p_varchar_list varchar2,p_delimiter in varchar2)
  2  return vcarray
  3  pipelined
  4  as
  5  begin
  6     for x in (      select
  7                             trim( substr (txt,
  8                                                      instr (txt, p_delimiter, 1, level  ) + 1,
  9                                                      instr (txt, p_delimiter, 1, level+1)
 10                                                      - instr (txt, p_delimiter, 1, level) -1 )
 11                                     )
 12                                               as discrete_value
 13                             from   (select  p_delimiter||p_varchar_list||p_delimiter txt from dual)
 14                             connect by level <= length(p_varchar_list)-length(replace(p_varchar_list,p_delimiter,''))+1 )
 15     loop
 16             pipe row(x.discrete_value);
 17     end loop;
 18  end;
 19  /

Function created.

Elapsed: 00:00:00.03
rajesh@11GR2>
rajesh@11GR2> variable x varchar2(40);
rajesh@11GR2> exec :x := 'A_B_C';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
rajesh@11GR2>
rajesh@11GR2> set autotrace traceonly explain statistics;
rajesh@11GR2>
rajesh@11GR2> select * from table ( cast( parse_list(:x,'_') as vcarray ) ) t;

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 58440541

------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |            |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| PARSE_LIST |  8168 | 16336 |    29   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         33  recursive calls
          0  db block gets
         71  consistent gets
          0  physical reads
          0  redo size
        469  bytes sent via SQL*Net to client
        416  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed

rajesh@11GR2>
rajesh@11GR2> set autotrace off;
rajesh@11GR2>
rajesh@11GR2>
rajesh@11GR2> set autotrace traceonly explain statistics;
rajesh@11GR2>
rajesh@11GR2> select * from table ( cast( parse_list(:x,'_') as vcarray ) ) t;

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 58440541

------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |            |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| PARSE_LIST |  8168 | 16336 |    29   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         13  recursive calls
          0  db block gets
         25  consistent gets
          0  physical reads
          0  redo size
        469  bytes sent via SQL*Net to client
        416  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed

rajesh@11GR2>
rajesh@11GR2> set autotrace off;
rajesh@11GR2>
rajesh@11GR2>
rajesh@11GR2> set autotrace traceonly explain statistics;
rajesh@11GR2>
rajesh@11GR2> select * from table ( cast( parse_list(:x,'_') as vcarray ) ) t;

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 58440541

------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |            |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| PARSE_LIST |  8168 | 16336 |    29   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        469  bytes sent via SQL*Net to client
        416  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed

rajesh@11GR2>
rajesh@11GR2> set autotrace off;
rajesh@11GR2>
rajesh@11GR2>
rajesh@11GR2> set autotrace traceonly explain statistics;
rajesh@11GR2>
rajesh@11GR2> select * from table ( cast( parse_list(:x,'_') as vcarray ) ) t;

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 58440541

------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |            |  8168 | 16336 |    29   (0)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| PARSE_LIST |  8168 | 16336 |    29   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        469  bytes sent via SQL*Net to client
        416  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed

rajesh@11GR2>
rajesh@11GR2> set autotrace off;
rajesh@11GR2>
rajesh@11GR2>
rajesh@11GR2>


Tom:

Even after executing this query four times, cardinality feedback has not kicked in. Am I doing something wrong here? Please correct me if I am.
Tom Kyte
December 07, 2010 - 11:53 am UTC

don't use autotrace. Use dbms_xplan.display_cursor.

http://asktom.oracle.com/Misc/something-new-i-learned-about-estimated.html
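
The suggested approach looks like this - a sketch reusing the `parse_list` function from the test case above. Run the query with rowsource statistics enabled, then display the actual plan from the shared pool:

```sql
select /*+ gather_plan_statistics */ *
  from table( parse_list(:x, '_') ) t;

-- show estimated vs. actual row counts for the last execution
select *
  from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );
```

When the feature kicks in, display_cursor's output includes a Note section such as "cardinality feedback used for this statement" - something autotrace will not show you.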

cardinality feedback

Sokrates, December 07, 2010 - 2:53 pm UTC

looks interesting

I am used to using the cardinality hint with pipelined table functions, as Jonathan Lewis advised on http://jonathanlewis.wordpress.com/2010/11/30/collection-costs/ just a week ago.

Does this mean, you say this is not necessary in Oracle11 anymore ?
Tom Kyte
December 08, 2010 - 9:24 am UTC

.... Does this mean, you say this is not necessary in Oracle11 anymore ?
...

no, that isn't what I'm saying. If you know the estimated cardinality - you should use the techniques here:

http://www.oracle.com/technetwork/issue-archive/o54asktom-086284.html
(see the "Query Plans with Temporary Tables" section)

In the case of pipelined functions - of the three I discuss - only the cardinality hint would apply.


The cardinality feedback will be useful for people that DON'T do that. But - remember, the optimizer has to first guess - then discover "I made a big mistake" and recompute the plan - every time you hard parse a query against that pipelined function.

You can avoid the extra hard parse AND the making of that mistake (which likely led to a bad plan) if you continue to use the cardinality hint
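
Applied to the `parse_list` example above, the cardinality hint is a one-line change (the value 3 is an assumption - use your own expected row count):

```sql
-- tell the optimizer up front how many rows to expect from T
select /*+ cardinality( t 3 ) */ *
  from table( parse_list(:x, '_') ) t;
```

The optimizer now starts from your estimate rather than the 8168-row default, so there is no "guess, make a mistake, reparse" cycle at all.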

ok

Sokrates, December 08, 2010 - 10:19 am UTC

thanks for clarification

New Features in 11g

Kirill Loifman, November 02, 2011 - 4:35 am UTC

Hi Tom

Here are a few tiny but valuable things that are still missing or not fixed in 11g. Please correct me if they are already implemented in 11gR2.

• Ability to prevent a schema user from performing the following tasks:
- dropping own objects (probably introduce drop sys rights)
- creating indexes on own tables
- some others..
The idea is that a schema should not have any rights by default.

• Ability to create a private DB link in different schema

• More info about the user required:
- It would also be nice to be able to distinguish a schema from an end DB user, maybe with another field
- New dba_users fields: comments (to put some info about a user), last logon date, etc.

• Possibility for a DBA user to create a materialized view in a different schema (if a schema user has no extra rights, like create table)

• Possibility to grant read access ONLY on PL/SQL source code to different schema (probably select or read privilege on procs, triggers, views, etc.).

• Default possibility to grant read-only and read/write access on all the schema objects to another user (probably “system roles” can be introduced like datareader / datawriter that can be granted for every schema user, similar to MS SQL). Basically to avoid manual grants with a system trigger implementation. It should be standard DB functionality.

• Improve the mechanism of redirecting a user to a different schema:
- Implement a concept of "default schema". Just add an additional field in dba_users ("default_schema"), extend the create/alter user commands, and implement a standard mechanism of redirecting on logon. So again, this is to have standard functionality instead of the system-trigger workaround.
- Add a "set schema" command to SQL*Plus (alter session… is a pain)
- Improve "alter session set current_schema" behaviour, since it does not work properly in all cases

• I would add another system privilege: create temporary table - since create table is a very powerful right for a schema user, and many of them use temp tables.

• Possibility to do incomplete recovery of one schema / tablespace on the same DB instance.

I explained some of them in more detail here:
http://www.dadbm.com/2011/10/oracle-database-restrictions-workarounds-daily-dba-work/

Thanks

Kirill Loifman
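
The manual-grant workaround the read-only bullet alludes to can be scripted in a few lines. A hedged sketch, assuming DBA access - the role and schema names are illustrative:

```sql
create role scott_reader;

begin
    -- generate a SELECT grant for every table in the target schema
    for x in ( select table_name
                 from dba_tables
                where owner = 'SCOTT' )
    loop
        execute immediate
            'grant select on scott.' || x.table_name || ' to scott_reader';
    end loop;
end;
/
```

New tables still need a re-run (or the system-trigger approach mentioned above) - which is exactly why a built-in datareader-style role would be welcome.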

A reader, November 03, 2011 - 5:09 pm UTC

Hi Tom,

I am new to Oracle 11gR2 and want to learn the basics and new features of 11g.

Which edition of 11gR2 is free and can be installed on my personal laptop?
Is it Express / Enterprise / Standard?

thanks

Tom Kyte
November 04, 2011 - 1:14 am UTC

Express Edition is always free and exists for 11gR2

SQL Performance Analyzer

Rajeshwaran, Jeyabal, November 03, 2011 - 9:57 pm UTC

Tom:

I was reading about SQL Performance Analyzer in the product docs

http://download.oracle.com/docs/cd/E11882_01/server.112/e16540/spa_upgrade.htm#CIAJDJBD
On the SQL Performance Analyzer system, create a SQL Performance Analyzer task using the SQL tuning set as its input source.

Remotely test execute the SQL statements in the SQL tuning set on the test system over a database link to build a pre-upgrade SQL trial that will be used as a baseline for comparison, as described in "Testing Database Upgrades from Oracle Database 10g Release 2 and Newer Releases".

Upgrade the test system.

Remotely test execute the SQL statements a second time on the upgraded test system over a database link to build a post-upgrade SQL trial, as described in "Testing Database Upgrades from Oracle Database 10g Release 2 and Newer Releases".


So, instead of this:

1) Can I have two test systems, one each on 10gR2 and 11gR2 (having the same hardware and OS configuration as production), with production running on 10gR2?
2) Capture the workload from production and transfer it to the two test systems.

3) Create and execute the SQL Performance Analyzer task on the 10gR2 and 11gR2 test machines and compare the performance.

Is it okay to do it like this?
Tom Kyte
November 04, 2011 - 1:16 am UTC



sure, you can do that.
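
The per-system step would look roughly like this. A hedged sketch using DBMS_SQLPA - the tuning-set, task, and execution names are illustrative assumptions:

```sql
declare
    l_task varchar2(64);
begin
    -- create the analysis task from the SQL tuning set captured on production
    l_task := dbms_sqlpa.create_analysis_task(
                  sqlset_name => 'PROD_STS',
                  task_name   => 'SPA_COMPARE' );

    -- test-execute the statements to build a SQL trial on this system
    dbms_sqlpa.execute_analysis_task(
        task_name      => 'SPA_COMPARE',
        execution_type => 'TEST EXECUTE',
        execution_name => 'this_release' );
end;
/
```

Run the same on each test system, then compare the resulting trials with a COMPARE PERFORMANCE execution.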

Locking behavior change

A reader, November 07, 2011 - 9:12 am UTC

Tom,
Please let me know your thoughts about this.

=====================
Oracle 11g introduced another change to the locking code: transactions hold a row-X (SX, LMODE 3) lock on the child tables, not the row-S (SS, LMODE 2) lock they held in 10g.

This is a higher-level lock mode, which has the following consequence for another session attempting to either delete or update the PK in the parent table:

This change was introduced by Oracle to eliminate an ORA-600 issue that could occur when deleting a row from a table with a PK while rebuilding an associated FK index that referenced the PK.

Test case

------------

Session#1

create table t1 ( x int primary key );

create table t2 ( y references t1 );

insert into t1 values ( 1 );

insert into t1 values ( 2 );

commit;

Session#2

SQL > insert into T1 select 1 from dual where 1=2;

0 rows created.

Go back to Session#1

--------------------

SQL> delete from T1 where x=1;

hung..........

FIX: create FK indexes on all child tables
=====================

Tom Kyte
November 07, 2011 - 12:07 pm UTC

I've always said:

if you delete from the parent
if you update the parent primary key

then index the foreign keys - nothing here that would affect that rule.

Besides, without that index you'd be doing a FULL SCAN of the child table for every row that delete removes. So, to avoid the locking issue - and the obvious performance issue on any table of size - index the foreign key.
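
Applied to the test case above, the fix is one statement:

```sql
-- index the foreign key column in the child table
create index t2_y_idx on t2(y);
```

With the index in place, session #1's `delete from t1 where x=1;` no longer blocks behind session #2's open transaction.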

RETRY_ON_ROW_CHANGE

Sokrates, November 30, 2011 - 12:45 pm UTC

Could you give a use case when the
RETRY_ON_ROW_CHANGE
hint might be useful ?
Tom Kyte
December 01, 2011 - 7:31 am UTC

I'll try to get back to you on this one ;) Traveling right now and just trying to answer 'easy things'.... I've bookmarked this.

Cardinality Feedback

rucha pathak, March 14, 2012 - 12:01 am UTC

Hi tom,

As u said, cardinality feedback is helpful for the optimizer to get a better explain plan for a query. But when I am trying to get the explain plan for the following query, I wonder why it shows a different explain plan whenever the child number changes:

SELECT COUNT(1) FROM AETB_EOC_RUNCHART A,
AETM_EOC_GROUP_BRANCHES B WHERE A.BRANCH_CODE = B.BRANCH_CODE
AND B.GROUP_CODE = :B1 AND NVL(EOC_STAGE_STATUS, 'O')
IN('N', 'W')


Plan hash value: 4177335138

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 84 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | NESTED LOOPS | | 1 | 13 | 84 (0)| 00:00:02 |
| 3 | TABLE ACCESS FULL| AETB_EOC_RUNCHART | 1054 | 5270 | 83 (0)| 00:00:01 |
| 4 | INDEX UNIQUE SCAN| PK01_AETM_EOC_GROUP_BRANCHES | 1 | 8 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Note
-----
- cardinality feedback used for this statement
When I get the explain plan for the same query with a different child number, it shows a different execution plan, as follows:



SQL_ID 0rh5f8aygs42j, child number 5
-------------------------------------
SELECT COUNT(1) FROM AETB_EOC_RUNCHART A, AETM_EOC_GROUP_BRANCHES B
WHERE A.BRANCH_CODE = B.BRANCH_CODE AND B.GROUP_CODE = :B1 AND
NVL(EOC_STAGE_STATUS, 'O') IN ('N', 'W')

Plan hash value: 4177335138

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 67 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | NESTED LOOPS | | 1 | 13 | 67 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| AETB_EOC_RUNCHART | 2079 | 10395 | 66 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN| PK01_AETM_EOC_GROUP_BRANCHES | 1 | 8 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter((NVL("EOC_STAGE_STATUS",'O')='N' OR NVL("EOC_STAGE_STATUS",'O')='W'))
4 - access("B"."GROUP_CODE"=:B1 AND "A"."BRANCH_CODE"="B"."BRANCH_CODE")

Note
-----
- cardinality feedback used for this statement

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 0rh5f8aygs42j, child number 2
-------------------------------------
SELECT COUNT(1) FROM AETB_EOC_RUNCHART A, AETM_EOC_GROUP_BRANCHES B
WHERE A.BRANCH_CODE = B.BRANCH_CODE AND B.GROUP_CODE = :B1 AND
NVL(EOC_STAGE_STATUS, 'O') IN ('N', 'W')

Plan hash value: 4177335138

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 67 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | NESTED LOOPS | | 1 | 13 | 67 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| AETB_EOC_RUNCHART | 1230 | 6150 | 66 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN| PK01_AETM_EOC_GROUP_BRANCHES | 1 | 8 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter((NVL("EOC_STAGE_STATUS",'O')='N' OR NVL("EOC_STAGE_STATUS",'O')='W'))
4 - access("B"."GROUP_CODE"=:B1 AND "A"."BRANCH_CODE"="B"."BRANCH_CODE")

Note
-----
- cardinality feedback used for this statement


22 rows selected.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 0rh5f8aygs42j, child number 3
-------------------------------------
SELECT COUNT(1) FROM AETB_EOC_RUNCHART A, AETM_EOC_GROUP_BRANCHES B
WHERE A.BRANCH_CODE = B.BRANCH_CODE AND B.GROUP_CODE = :B1 AND
NVL(EOC_STAGE_STATUS, 'O') IN ('N', 'W')

Plan hash value: 4177335138

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 67 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | NESTED LOOPS | | 1 | 13 | 67 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| AETB_EOC_RUNCHART | 1 | 5 | 66 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN| PK01_AETM_EOC_GROUP_BRANCHES | 1 | 8 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter((NVL("EOC_STAGE_STATUS",'O')='N' OR NVL("EOC_STAGE_STATUS",'O')='W'))
4 - access("B"."GROUP_CODE"=:B1 AND "A"."BRANCH_CODE"="B"."BRANCH_CODE")


24 rows selected.


According to my understanding the last explain plan is the best, but how will the optimizer choose that plan each time? Please clarify my doubt, as it is very confusing.


Tom Kyte
March 14, 2012 - 6:47 am UTC

where did "U" say anything???? seriously??

COUNT(1)

why are you counting ones? did you really mean to ask for the count of matching records - if so, the expedient way to do that - the correct way to do that is COUNT(*)



how is the last plan "the best"? they all look the same to me. The estimated cardinality might be slightly different, but they all *do the same thing*.

how could any of them be better than the others - if they do exactly the same thing?

Should we purchase Total Recall?

A reader, August 23, 2013 - 7:06 pm UTC

I have been working with architects and management on a requirement to capture all changes to a set of tables.
Rather than reinvent the wheel (most likely would be square) I have been selling Total Recall with 11gR2 as my preferred solution.
The architects are pushing back on the solution since they know we will move to 12C one day and have read that Total Recall will be defunct.
Will there be an easy way to upgrade from the Total Recall to Temporal Database Functionality when they upgrade to 12C?
They don’t want to be stuck with 11g due to the fact that we are using Total Recall and they don’t want to have to move to a custom solution in the future.
I have not found a lot on the web about Total Recall and how it would impact a 12C upgrade.

Tom Kyte
August 28, 2013 - 6:32 pm UTC

defunct???

are you kidding???


flashback data archive is a feature of EE starting with 11.2.0.4 and a feature of 12c. It is a no cost feature of EE (no longer an option) starting in those releases.

In other words - flashback data archive is open for use by more people than ever before - for free with Enterprise Edition.


the "total recall option" - that name - actually went away about a year ago and became part of the advanced compression option. Now the feature is just that - a feature:

http://docs.oracle.com/cd/E11882_01/license.112/e47877/editions.htm#sthref61



Their Reference line

A reader, August 28, 2013 - 6:56 pm UTC

This is the line they were Referencing:
http://www.oracle.com/technetwork/database/application-development/total-recall-1667156.html

which was part of the now defunct Total Recall database option

If we start with Total Recall and move to 12C will the data easily move into the Temporal Database?
Tom Kyte
September 04, 2013 - 5:38 pm UTC

they worded that really badly.


a) flashback data archive as an extra cost OPTION to the database is gone

b) flashback data archive is now a FEATURE (for free) of the database with 11.2.0.4 and above

c) flashback data archive is NOT going away

d) the only "option" to consider here would be whether you want the flashback archive compressed - if you do you need the advanced compression option



this bit:

Transaction Time is tracked with Flashback Data Archive, which was part of the now defunct Total Recall database option.

could have been worded better. What they are saying is

a) flashback data archive is what we use to track your transaction time. it is what we used to use, it is what we currently use, it is what we are going to use.

b) the total recall database option is defunct; it is no longer an option - it is a feature. Much like PL/SQL used to be an option but is now a feature. Much like objects were an option but are now a feature. Much like row level locking (believe it or not) was an option but is now a feature. Much like distributed was an option but is now a feature. And so on. Many things used to be "defunct" options - but are now free features of the database.




that entire page, with the exception of that poorly worded sentence, talks about how flashback data archive is the backbone, the crux, the center of the temporal database capabilities.


it is not going away.

Answered

A reader, August 29, 2013 - 7:23 pm UTC

Answered in the documentation.
Thanks:

http://docs.oracle.com/cd/E16655_01/server.121/e17906/chapter1.htm#AREANO02857

1.3.1.2 Flashback Data Archive Improvements
Several improvements have been made to Flashback Data Archive (FDA). They are:
• User-context tracking
The metadata information for tracking transactions including the user context is now tracked making it easier to determine which user made which changes to a table.
• Hybrid Columnar Compression (HCC)
FDA can now be fully utilized on HCC compressed tables on Exadata and other Oracle storage platforms.
• Import and export of history
Support for importing user-generated history into FDA tables has been added. Customers who have been maintaining history using some other mechanism, such as triggers, can now import that history into FDA.

Migrate historical data into FDA

A reader, September 13, 2013 - 4:50 pm UTC

Thank you for your help Tom!

I have almost sold management on the FDA approach to solve their customer’s requirement (currently 140 applications with custom code for audit history).

The last hurdle would be how to migrate the existing historical data from their custom solution tables into FDA.
The current application I am working with has millions of rows of historical data.
The customer views this historical data relatively often through the application.

Is there any way that I can populate the FDA tables with this historical data?

DISASSOCIATE_FBA

A reader, September 16, 2013 - 8:57 pm UTC

From documentation:
It appears that I can use DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA to free up the system created history table:
EXEC DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA('scott','emp_test');

I am then able to execute DML on the table:
DELETE FROM scott.SYS_FBA_HIST_61527 WHERE empno=3968;

That is great for update and delete but how can I insert a million rows into the history table before I reassociate the table:
EXEC DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA('scott','emp_test')

How do I convert my date into the format that the history table requires and generate an SCN that will meet the table requirement?

Thanks for the Help!
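
On the SCN question: a hedged note - the built-in TIMESTAMP_TO_SCN function can map a timestamp to an SCN, but only within the database's limited SCN/time mapping window; it cannot manufacture SCNs for years-old history, so it would only help for recently generated rows:

```sql
-- map a recent timestamp to an approximate SCN
select timestamp_to_scn( systimestamp - interval '1' hour ) as approx_scn
  from dual;
```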

How to load existing history data into newly created flashback data archive tables

A reader, September 26, 2013 - 5:15 pm UTC

I am sad to report that the Oracle ticket asking the question above was answered with the following:
As per my understanding FDA history table architecture is different and internally managed and a user defined HISTORY table will not be compatible with the FDA history table and hence migrating the data from your history table into FDA history table will not work. Kindly let me know your inputs on the same.

Yet through our own research we were able to find packages that allow for this task to be accomplished in 11g 11.2.0.3.0:

We are making use of the following packages:
dbms_fda_import
prvt_fda_import
dbms_fda_mappings

My initial testing has shown that our historical data now resides in our newly created flashback data archive tables as if they were created when the data was modified.

FDA

A reader, January 22, 2015 - 10:11 pm UTC