Question and Answer

Tom Kyte

Thanks for the question, Pierre.

Asked: May 07, 2006 - 3:48 pm UTC

Last updated: July 23, 2006 - 7:31 am UTC

Version: 9.2.

Viewed 1000+ times

You Asked

Hi Tom,

I have questions about the following OTN thread: http://forums.oracle.com/forums/thread.jspa?threadID=383224
In the same database, we have different schemas: each schema has exactly the same DDL (tables, indexes, etc.)
but each schema has its own data. This database hosts the same application for different sites: 1 application works
only with 1 schema, and let's suppose we have 2 cases:
C1: the SQL code does not hard-code the schema name, whether it's client SQL code or PL/SQL code in stored procedures/functions.
C2: the SQL code hard-codes the schema name, whether it's client SQL code or PL/SQL code in stored procedures/functions.

1. Is it true to say that in both cases each SQL query for each schema will have its own parsed query in the shared pool,
or in other words that 2 users running the same query in 2 different schemas will not share the same parsed
query? Does this depend on the way the client code connects to the database: whether the client code uses the
schema name as the connection account, or each user has its own Oracle account and runs ALTER SESSION SET CURRENT_SCHEMA=... ?
2. Is there a way to count in v$sql/v$sqlarea the exact number of parsed queries in both cases?
Can we tell from v$sql/v$sqlarea, using the PARSING_USER_ID and PARSING_SCHEMA_ID columns, who has parsed which query in which schema?
3. Do you think it's a good idea to centralize applications in one database this way?
What are the pros and cons?

Thanks a lot !
(I now have your 2 books, Effective Oracle by Design and Expert One-on-One Oracle: very, very good books!).

and Tom said...

The DBA in your case is way, way off. There can be MANY cached, parsed representations of the same query in the shared pool (and that is not necessarily a good thing), but your DBA is not correct when they say:

....
but the problem is that (according to my DBA) there's only 1 parsed copy of an SQL statement in the shared_pool. During the parsing/execution process of the statement the database finds out that the cached statement isn't the statement he's looking for because it's referring to other tables (although they have the same name) => hard parse => performance degradation.
........

that is entirely INCORRECT and easy to show incorrect using tkprof.

run a script like this:

connect /
drop user a cascade;
drop user b cascade;

grant connect, resource to a identified by a;
grant connect, resource to b identified by b;

connect a/a
create table t as select 'this is from a' data from dual;
connect b/b
create table t as select 'this is from b' data from dual;
connect /
alter session set sql_trace=true;
alter session set current_schema=a;
select * from t;
select * from t;
select * from t;
select * from t;
alter session set current_schema=b;
select * from t;
select * from t;
select * from t;
select * from t;
alter session set current_schema=a;
select * from t;
select * from t;
select * from t;
select * from t;
alter session set current_schema=b;
select * from t;
select * from t;
select * from t;
select * from t;


and you'll see the output go from A to B to A to B and tkprof will show:



select *
from
t


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       16      0.00       0.00          0          2          0           0
Execute     16      0.00       0.00          0          0          0           0
Fetch       32      0.01       0.00         10         48          0          16
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       64      0.01       0.00         10         50          0          16

Misses in library cache during parse: 2
Optimizer mode: ALL_ROWS
Parsing user id: 126

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS FULL T (cr=3 pr=5 pw=0 time=840 us)


The misses in library cache = hard parses. There were 2 - but 16 parses total - 14 were SOFT parses, 2 were hard.


so

1) they will not share the same parsed cursor, they cannot - they will however each have their own parsed cursor in the shared pool.

If you have many of these however (lots of copies in the shared pool), that will become a problem in that it will take more "searching" to find the right cursor every time you soft parse.


2) sure, version_count in v$sqlarea has that already, or aggregating v$sql and counting.

v$sql would have the user/schema ids
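
For illustration, queries along these lines would show the copies and who parsed them (the 'select * from t' filter is just the statement from the tkprof example above - adjust the sql_text predicate to your own statements):

-- one row per statement, with the number of child cursors ("versions")
select sql_text, version_count
  from v$sqlarea
 where sql_text like 'select * from t%';

-- one row per copy, showing which user/schema parsed it
select sql_text, parsing_user_id, parsing_schema_id, count(*)
  from v$sql
 where sql_text like 'select * from t%'
 group by sql_text, parsing_user_id, parsing_schema_id;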


3) nope, I'd much much rather have a SINGLE schema, with a SINGLE set of code, with a SINGLE set of sql statements - using fine grained access control or just plain old views (given that the predicate would probably be the same - meaning the flexibility of fine grained access control might be more than you need).
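
A rough sketch of that single-schema idea (the table and column names here are made up for illustration; the real design depends on the application): every row carries a site identifier, and each site sees only its own rows through a plain old view.

create table orders
( site_id   varchar2(30),
  order_id  number,
  order_dt  date
);

-- assuming each site connects as its own Oracle account, the session
-- user can serve as the site identifier in the predicate
create or replace view orders_v as
  select order_id, order_dt
    from orders
   where site_id = sys_context('userenv','session_user');

The application would then work against ORDERS_V only - one set of code, one set of SQL statements, one parsed copy of each query in the shared pool.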





Now, if you use plsql stored procedures - they will have to be INVOKERS RIGHTS routines if you go this multi-schema approach (which I do not really like either). Expert One on One goes into why in the chapter on invoker/definer rights and fine grained access control.
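
A minimal sketch of what an invoker's rights routine looks like (the table T is the one from the example above; AUTHID CURRENT_USER is what makes the difference):

-- with definer's rights (the default) the unqualified T would always be
-- the owner's T; AUTHID CURRENT_USER makes it resolve against the
-- invoker's schema at run time (the owner still needs a T, or a synonym
-- for one, so the procedure can compile)
create or replace procedure show_data
  authid current_user
as
begin
    for r in ( select data from t )
    loop
        dbms_output.put_line( r.data );
    end loop;
end;
/

Run as A it would print 'this is from a', run as B it would print 'this is from b' - one copy of the code, each invoker's own data.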




Comments

searching in the library cache

Matthieu de Graaf, May 12, 2006 - 6:49 am UTC

Hi Tom,
Thanks for your answer.
I have an additional question. In your answer you stated

"If you have many of these however (lots of copies in the shared pool), that will
become a problem in that it will take more "searching" to find the right cursor every time you soft parse."

Is it possible to determine the size of the "searching" problem?

Regards,

Matthieu de Graaf



Tom Kyte
May 12, 2006 - 9:50 am UTC

if you mean "how much does this impact me", we can measure things...
http://asktom.oracle.com/~tkyte/runstats.html

I'll get 100 copies by parsing the same sql 100 times with different sort area sizes


ops$tkyte@ORA10GR2> @test
ops$tkyte@ORA10GR2> /*
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> drop table t;
ops$tkyte@ORA10GR2> create table t as select * from all_objects;
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace procedure p1( p_iters in number )
ops$tkyte@ORA10GR2> as
ops$tkyte@ORA10GR2>         l_cursor sys_refcursor;
ops$tkyte@ORA10GR2> begin
ops$tkyte@ORA10GR2>         for i in 1 .. p_iters
ops$tkyte@ORA10GR2>         loop
ops$tkyte@ORA10GR2>                 execute immediate 'alter session set sort_area_size = ' || (65536+mod(i,2));
ops$tkyte@ORA10GR2>                 open l_cursor for select * from t auto_look_for_me;
ops$tkyte@ORA10GR2>                 close l_cursor;
ops$tkyte@ORA10GR2>         end loop;
ops$tkyte@ORA10GR2> end;
ops$tkyte@ORA10GR2> /
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace procedure p2( p_iters in number )
ops$tkyte@ORA10GR2> as
ops$tkyte@ORA10GR2>         l_cursor sys_refcursor;
ops$tkyte@ORA10GR2> begin
ops$tkyte@ORA10GR2>         for i in 1 .. p_iters
ops$tkyte@ORA10GR2>         loop
ops$tkyte@ORA10GR2>                 execute immediate 'alter session set sort_area_size = ' || (65536+i*10);
ops$tkyte@ORA10GR2>                 open l_cursor for select * from t manual_look_for_me;
ops$tkyte@ORA10GR2>                 close l_cursor;
ops$tkyte@ORA10GR2>         end loop;
ops$tkyte@ORA10GR2> end;
ops$tkyte@ORA10GR2> /
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec p1(100)
ops$tkyte@ORA10GR2> exec p2(100)
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select sql_text, count(*) from v$sql where upper(sql_text) like '%\_LOOK\_FOR\_ME' escape '\' group by sql_text;
ops$tkyte@ORA10GR2> */
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec runstats_pkg.rs_start
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10GR2> exec p1(100)
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10GR2> exec runstats_pkg.rs_middle
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10GR2> exec p2(100)
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10GR2> exec runstats_pkg.rs_stop(100)
Run1 ran in 10 hsecs
Run2 ran in 12 hsecs
run 1 ran in 83.33% of the time
 
Name                                  Run1        Run2        Diff
LATCH.cache buffers chains             296         146        -150
STAT...undo change vector size       2,768       2,388        -380
STAT...redo size                     4,124       3,324        -800
LATCH.library cache pin                544      10,702      10,158
LATCH.library cache lock               906      11,064      10,158
LATCH.library cache                  1,357      21,673      20,316
 
Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
3,431      43,916      40,485      7.81%
 
PL/SQL procedure successfully completed.


the second one with 100 copies (instead of just two) used significantly more latches, it will not scale as well

 

set current_schema bug?

Peter Man, July 19, 2006 - 10:54 am UTC

Hi Tom,

Thank you for your explanation. I will keep an eye on the potential performance hit caused by "searching". I have a question about whether I can use "EXECUTE IMMEDIATE ('ALTER SESSION SET current_schema='||i.username);" in a loop. When I run the block below:

declare
    TYPE t_ntr_integer is TABLE OF varchar2(32);
    l_count      pls_integer := 1;
    l_out_str    varchar2(4000);
    l_schema_tab t_ntr_integer := t_ntr_integer();
begin
    for i in (select username from dba_users
               where username like 'FM0%'
                 and username not in ('FM003', 'FM005', 'FM007', 'FM010', 'FM011', 'FM014', 'FM015')
                 and account_status = 'OPEN'
               order by username) loop
        EXECUTE IMMEDIATE ('ALTER SESSION SET current_schema='||i.username);
        select count(*) into l_count from bus_date_purpose where bus_date = trunc(sysdate)+1;
        dbms_output.put_line(l_count||' funds are rolled to correct business date for schema '||i.username);
    end loop;
end;

--result
0 funds are rolled to correct business date for schema FM001
0 funds are rolled to correct business date for schema FM002
0 funds are rolled to correct business date for schema FM004
0 funds are rolled to correct business date for schema FM006
0 funds are rolled to correct business date for schema FM008
0 funds are rolled to correct business date for schema FM009
0 funds are rolled to correct business date for schema FM012
0 funds are rolled to correct business date for schema FM013
0 funds are rolled to correct business date for schema FM016
0 funds are rolled to correct business date for schema FM017

it seems that somehow the wrong parsed query is used in the loop. Then I added the line

EXECUTE IMMEDIATE ('ALTER SYSTEM FLUSH SHARED_POOL');

before the following line:

EXECUTE IMMEDIATE ('ALTER SESSION SET current_schema='||i.username);

0 funds are rolled to correct business date for schema FM001
0 funds are rolled to correct business date for schema FM002
3 funds are rolled to correct business date for schema FM004
0 funds are rolled to correct business date for schema FM006
0 funds are rolled to correct business date for schema FM008
2 funds are rolled to correct business date for schema FM009
5 funds are rolled to correct business date for schema FM012
0 funds are rolled to correct business date for schema FM013
0 funds are rolled to correct business date for schema FM016
3 funds are rolled to correct business date for schema FM017

This gave me the result I expected to see. I understand that 'ALTER SYSTEM FLUSH SHARED_POOL' should never be used on a production system in general; I just added it there for testing purposes. However, I sometimes receive the "unable to allocate memory in shared pool" error, and 'ALTER SYSTEM FLUSH SHARED_POOL' solves the problem immediately.

My question is: how often are unused objects flushed out of the shared pool? When there is a request to load something into the shared pool, wouldn't Oracle flush out just enough unused objects so that there is enough memory for my process to pin the objects it needs? Or does Oracle attempt to flush out a certain amount of memory and raise the error if the free space still can't meet my demand? Thanks in advance.

P.S. I bought your "NEW" book. I first thought it was too similar to the previous one-on-one book, but then I realised a lot of new material was added. I like the Block Cleanout section very much because it explains how things work logically, in detail. Just wondering when the next book will come out, if it's not confidential =)

Cheers,
Peter



Tom Kyte
July 20, 2006 - 7:49 am UTC

the block is compiled and then run. plsql caches cursors - it is parsed once, executed over and over - this is by design.

Your sql is sort of done up front there - you'd have to use dynamic sql in the loop to avoid that.
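
A sketch of what that might look like (not tested against these schemas; the FM0% accounts and BUS_DATE_PURPOSE table are taken from the block above): build the query text inside the loop so each iteration parses its own statement. Qualifying the table with the schema name also removes any dependence on CURRENT_SCHEMA for this particular count.

declare
    l_count pls_integer;
begin
    for i in ( select username
                 from dba_users
                where username like 'FM0%'
                  and account_status = 'OPEN'
                order by username )
    loop
        -- dynamic SQL built per iteration: the table reference is
        -- resolved against the schema named in the query text itself
        execute immediate
          'select count(*) from ' || i.username ||
          '.bus_date_purpose where bus_date = trunc(sysdate)+1'
          into l_count;

        dbms_output.put_line( l_count ||
          ' funds are rolled to correct business date for schema ' || i.username );
    end loop;
end;
/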

set_current_schema_bug

Peter Man, July 19, 2006 - 11:05 am UTC

Sorry, I made a mistake.

I put

EXECUTE IMMEDIATE ('ALTER SYSTEM FLUSH SHARED_POOL');

immediately after

EXECUTE IMMEDIATE ('ALTER SESSION SET current_schema='||i.username);

so that looks like:

EXECUTE IMMEDIATE ('ALTER SESSION SET current_schema='||i.username);
EXECUTE IMMEDIATE ('ALTER SYSTEM FLUSH SHARED_POOL');

in my block. The sequence might not make any difference though.

**Learning never stops here**

Oraboy, July 20, 2006 - 11:12 am UTC

every visit to asktom site = learning something new

Today I spotted & learnt something new..

<quote>
drop user a cascade;
drop user b cascade;

grant connect, resource to a identified by a;
</quote>

I always thought I had to start with "create user"..
and didn't know this cool shortcut.

Even the 10gR2 docs don't seem to have added this yet

http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9013.htm#i2155015
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_8003.htm#i2065278

Thanks,
Oraboy


Tom Kyte
July 22, 2006 - 4:51 pm UTC

it is not that they haven't added it to the docs but rather that the syntax that was deprecated in version 6 is still supported!

it is very very "old", not new.

Sorry it is documented

Oraboy, July 20, 2006 - 11:21 am UTC

Rereading (and not skimming through) the documentation again, I find it is documented clearly.


<docs>
IDENTIFIED BY Clause

Use the IDENTIFIED BY clause to specifically identify an existing user by password or to create a nonexistent user.
</docs>

http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9013.htm#i2062316



Administrative issues

Gary Mason, July 21, 2006 - 9:24 am UTC

On the original point 3 (multiple schemas versus a single schema), I understand that from a performance perspective it is preferable to adopt the single schema approach with views or FGAC. I am facing a similar decision where currently we have a separate database (not Oracle) for each schema, and wondered whether to adopt the single database/multiple schemas or single schema approach. I was thinking of administrative issues, such as backup/recovery of a single client's data, and being able to remove a client without affecting others, and was veering towards the multiple schema option.

Tom Kyte
July 23, 2006 - 7:31 am UTC

partition it (ability to remove/add clients by removing/adding partitions)
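
A rough sketch of the partition-per-client idea (table and client names are invented for illustration): one schema, one table, list partitioned on a client identifier, so a client can be added or removed as a partition operation rather than a schema or database operation.

create table client_data
( client_id  varchar2(30),
  payload    varchar2(4000)
)
partition by list (client_id)
( partition p_client_a values ('CLIENT_A'),
  partition p_client_b values ('CLIENT_B')
);

-- remove one client (and its data) without affecting the others
alter table client_data drop partition p_client_b;

-- add a new client
alter table client_data add partition p_client_c values ('CLIENT_C');

Placing each client's partitions in their own tablespace would also give a handle on backup/recovery of a single client's data.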
