Single Schema vs Multiple Schema - pros & cons

Connor McDonald

Thanks for the question, Prince.

Asked: December 15, 2001 - 4:41 pm UTC

Last updated: May 16, 2022 - 2:33 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

We have a single schema with a few hundred objects, some of them having millions of rows. The schema holds data for various clients. Each client is identified by their clientid, which is a column in almost all the tables.

Ex: (data for all clients in the same tables)
----
Table1 (clientid number(10) , col1 varchar2(30)) ;
Table2 (clientid number(10) , col1 varchar2(30), col2 number(30)) ;

We are thinking of splitting the tables into separate schemas, with each client's data in its own schema.

Ex: (each client's data in its own schema)

sch1.Table1 (col1 varchar2(30)) ;
sch1.Table2 (col1 varchar2(30), col2 number(30)) ;

sch2.Table1 (col1 varchar2(30)) ;
sch2.Table2 (col1 varchar2(30), col2 number(30)) ;

sch3.Table1 (col1 varchar2(30)) ;
sch3.Table2 (col1 varchar2(30), col2 number(30)) ;

Now, assume I have 1000 clients and want to separate them into their own schemas.
I will probably have 800-1000 small objects (including indexes, etc.) per schema.

I need to present this information to management, so please provide as much detail (on the pros and cons) as you can.

Question:
~~~~~~~~~

What are all the cons to the approach?

Will there be any performance problems having many objects and many schemas in the same instance? (I have a DB server with 2 GB of memory and dual processors.)

What would be the impact on resources (SGA, disk, etc.)?
(I will combine many of the schemas into the same tablespace.)

Note: I am not considering Oracle EE at this time.

Thank you very much..


and Tom said...

cons:
o harder to manage
o harder to upgrade
o harder to patch
o harder to maintain
o causes your shared pool size to increase 1,000 times (shared sql goes down the tubes)
o takes more space
o queries against the dictionary will be impacted
o latching on the shared pool goes WAY up (latching = locks = serialization device = slows you down)

pros:
o none that I can think of.





Comments

multiple schemas or Virtual private database(s)?

Bernd Boecker, December 17, 2001 - 4:56 am UTC

Maybe it's worth reading the Oracle 8i documentation on

"Fine Grained Access Control / Virtual Private Database"

for a possible secure solution for row-level access with a predicate on 'clientid'...
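
A minimal sketch of what such a policy could look like (all names here are invented, and FGAC requires Enterprise Edition):

-- policy function: returns the predicate Oracle appends to every query
create or replace function client_policy(
    p_schema in varchar2,
    p_object in varchar2 ) return varchar2
as
begin
    -- assumes an application context holding the caller's client id
    return 'clientid = sys_context(''app_ctx'', ''client_id'')';
end;
/

begin
    dbms_rls.add_policy(
        object_schema   => 'APP',
        object_name     => 'TABLE1',
        policy_name     => 'CLIENT_FILTER',
        function_schema => 'APP',
        policy_function => 'CLIENT_POLICY' );
end;
/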



I wasn't thinking of shared pool

Prince, December 17, 2001 - 2:57 pm UTC

Thanks for pointing out the right points.

Schema name

atul, October 11, 2002 - 6:02 am UTC

Sir,

I have one task to complete: I have one schema called TEST, and now my manager has asked me to change the name of that schema.

But it has some references to other schemas...

So, in your view, what's the easiest way to do that?

Thanks.
atul


Tom Kyte
October 11, 2002 - 8:17 pm UTC

There is no way to rename a schema. Period.

You could:

exp userid=test owner=test
imp userid=new_schema fromuser=test touser=new_schema

but you'll have to fix up hard coded references to TEST.

A reader, May 21, 2003 - 7:11 am UTC

tom, how would you decide if you had to manage several small applications whose database part contains little data, but where most of the logic is implemented on the database side? would you (always) recommend that EVERY application have its OWN separate schema (although it would have very few objects ... and data)?
we are thinking of using prefixes for object names and managing privileges through roles ...

what would you consider the pros & cons in such a scenario?

Tom Kyte
May 21, 2003 - 8:55 am UTC

i like a schema per application. applications come and applications go. if you put them all together in one big thing, they all become interdependent and unmanageable. separate schemas force you to document the dependencies via grants at least.

Comment to Ben's comment

Marc Blum, May 21, 2003 - 9:28 am UTC

FGAC is only available in Enterprise Edition; this is still true in 9i. And the OP doesn't consider EE.

Tom Kyte
May 21, 2003 - 9:46 am UTC

sorry but you are coming in way out of left field on this one? who is ben?

I've no idea what point you are trying to make.

I've no idea what "OP" is. But whatever OP is -- they must like writing tons of their own code -- yeah, that's lots cheaper and much more efficient than buying stuff.

1 schema or 1 instance per application

Kuo-Chun, May 21, 2003 - 11:16 am UTC

Tom:

Here we have some applications using relatively small databases ( < 150M ) and not too many transactions each day.
We are currently moving them to another platform. We are debating whether we should merge them under one instance with one schema per application, or keep the current configuration (one instance per application). What are the pros and cons of each scenario? Thank you for your help.

Tom Kyte
May 21, 2003 - 11:27 am UTC

there are NO pros of an instance per application.


only headaches.


there is only one logical answer -- the fewer database instances you have, the better. The less memory you need, the more control you have, the less you have to backup, recover, manage.


You want as FEW instances as you can get away with. To me, one per server is perfect.

??? EXECUTE/GRANT ANY ???

A reader, May 22, 2003 - 4:42 am UTC

one of our development teams wants one particular schema of their application to provide "services for administration purposes": create/drop users, grant users to roles.

in order to achieve this they ask for EXECUTE ANY and GRANT ANY for that "special user".

I want/have to "convince" them that explicit object privileges with the grant option (instead of the requested system privileges) are sufficient under any circumstances. I don't want to be responsible for the code of that "super-user" (the usage of the system privileges granted). I want to keep control at the database level as much as possible!

could you please confirm that (or give some how-to advice if I'm wrong)?

Thanks in advance.

Tom Kyte
May 23, 2003 - 7:58 am UTC

they only need a user that has

o the ability to create users (pretty dangerous, but maybe acceptable to you). I would give them a stored procedure for this.

o I wouldn't want them dropping users, maybe the ability to lock an account, but since dropping is irrevocable, maybe you want to do this differently

o roles granted to this user with the ADMIN option -- then that user can grant the role as well.

o privs on objects granted to this user with the GRANT option -- that user can grant the privs as well.


You are 100% right to go for LEAST PRIVILEGES here. avoid the "any" privs, use fine grained privs.
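
A rough sketch of that approach (all names are invented; dbms_assert assumes 10g or later -- on older releases you would validate the inputs yourself):

-- definer-rights procedure: only its owner needs CREATE USER;
-- the admin team merely gets EXECUTE on it
create or replace procedure create_app_user(
    p_username in varchar2,
    p_password in varchar2 )
authid definer
as
begin
    execute immediate 'create user '
        || dbms_assert.enquote_name( p_username )
        || ' identified by '
        || dbms_assert.enquote_name( p_password );  -- guards against injection
end;
/
grant execute on create_app_user to admin_team;

-- a role granted WITH ADMIN OPTION can be re-granted by the grantee
grant app_role to admin_team with admin option;

-- object privileges granted WITH GRANT OPTION likewise
grant select on app.orders to admin_team with grant option;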

A reader, May 23, 2003 - 9:53 am UTC

thanks, tom, for that complete summary: there's no doubt left that we can get it working the way you described.

coupling application schemas

A reader, May 24, 2003 - 4:30 am UTC

tom, in an instance with several schemas belonging to different applications -- would you couple these application schemas in a manner where you have only ONE cross-application dependency between objects of those schemas?

lets assume two applications/schemas:

application/schema A: object A.1
application/schema B: object B.1, object B.2

if objects B.1 and B.2 both use object A.1: would you create an object B.A1 as B's "local wrapper" for object A.1, which is then used by B.1 and B.2?
that way there would be only one dependency left between schemas A and B (no matter how often B uses A.1). on the other hand, special "wrapper objects" would have to be created (and maintained) all around.

I'm really not sure whether it is worth it or not. what would you suggest (in terms of pros & cons)?

Tom Kyte
May 24, 2003 - 10:23 am UTC

depends -- do you anticipate A.1 changing -- if so, B.A1 may be useful in "hiding" that change from you -- stress MAY there.


The dependencies are all still there, it is just the ability to fix things. Say A.1 is a table and they DROP a column "X" and add a column "Y". Your view B.A1 could "hide" that -- supplying a NULL value for X and hiding that Y even exists. Some changes can be made without impacting the code.
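
For example (made-up names -- suppose A.T1 originally had columns ID and X, then X was dropped and Y added):

-- created in schema B: keeps presenting the old shape of A.T1
create or replace view b.a1 as
select id,
       cast( null as varchar2(30) ) as x   -- X is gone; supply NULL instead
  from a.t1;                               -- the new column Y is simply not exposed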

Dropping sample schemas

Jerry G. Katchatchurian, June 09, 2003 - 10:53 am UTC

Tom, I happened to install the sample schemas in the database. However, I want to get rid of them now. The manuals talk about how to create them, but do not mention how to drop them. What is the best way to do it? Just dropping each one with the cascade option? I'm not sure. I'm afraid I could mess something up.

Thanks

Tom Kyte
June 09, 2003 - 12:25 pm UTC

drop user <username> cascade;

[tkyte@tkyte-pc schema]$ ls */*_drop.sql
human_resources/hr_drop.sql order_entry/oe_drop.sql sales_history/sh_drop.sql
order_entry/oc_drop.sql product_media/pm_drop.sql shipping/qs_drop.sql
[tkyte@tkyte-pc schema]$


there are "drop" scripts that drop each object in the schema, but drop user cascade gets them out of there.

Sorry, I forgot about the tablespaces

Jerry, June 09, 2003 - 11:00 am UTC

Oh yes, how about the sample schemas tablespaces ? How can I drop them ?

Tom Kyte
June 09, 2003 - 12:27 pm UTC

after you drop the users, drop the tablespace.

one schema per year - is that right?

Safrin, June 25, 2003 - 3:51 am UTC


Thanks Tom -- "asktom" keeps us in line with current market trends.

My question:

Ver - 8.1.5 workgroup server
OS - Novell NetWare 5.0
company - medium size
data - 2 GB each year
Users - max 150

We are creating a schema for every year, and making common tables/master tables
available to all years using synonyms/grants.

Is this approach right for every year of operation? Or can you suggest any other method?

Note: Partitioning is not possible in our installation.

Tom Kyte
June 25, 2003 - 12:03 pm UTC

it seems fine, if it is working for you (given the info i have, no idea what your goals are...)

new schema for each year

Safrin, June 26, 2003 - 12:07 am UTC

Thanks for answer.

In this method of creating a schema for every year, we find it "difficult" to maintain for the following reasons:
1. A new schema has to be created every year
2. We have to issue grants for each new schema (year)
3. Synonyms have to be created in the new schema for the common and master-related objects (alter session will not work, since queries refer to both master and transaction tables)

Since partitioning cannot be used in our installation, I would like to know: is any other good method available?
Or else, can you please give me tips on using this method more safely & effectively.

Thanks in advance.

Tom Kyte
June 26, 2003 - 9:07 am UTC

since all you have to do is log into a new schema, run a script that

o creates your tables
o grants on your tables
o drops and creates synonyms

I do not see this as "hard"?  I mean it boils down to:

SQL> @happy_new_year

doesn't it? 
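
A sketch of what such a happy_new_year.sql might boil down to (object and role names are invented):

-- run while connected as the new year's schema
create table transactions (
    txn_id   number primary key,
    txn_date date,
    amount   number );

grant select on transactions to reporting_role;

-- point at the shared master data owned by a central schema
create synonym customers for master.customers;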

what about migrating to different schema

LB, November 06, 2003 - 4:26 pm UTC

Hi Tom,

We (2 developers) have been working on a reporting application for about 8 months now. We are soon going to be migrating all our data and code to a new schema. The 2 schemas are quite different, since the old schema was not normalized and the new one is. That means there will be several more in-between tables, primary keys, constraints, etc.

We (the 2 developers) have two options I can think of:

1) Change all our procedure, function, and report code to match the new schema. That way everything will be in sync.

2) Create some views on the tables that have the same field and table names our code currently refers to. That would make the conversion job easier. Plus the old schema is a little easier to understand, with more legible names and fewer table joins.

I am leaning towards option one since that is probably the long-term better solution although it will mean much more work.

Do you have any recommendations?

(p.s. I searched the site for this issue but didn't find it so I hope I am not repeating the question.)

Tom Kyte
November 06, 2003 - 5:53 pm UTC

wow, 8 months on a reporting application?

I personally would be doing #1 and only doing #2 when performance was not acceptable.

why? i'd be really bored after 8 months, it would cost less, it would be easier, it is what views are for...

Thanks for reply

LB, November 07, 2003 - 9:22 am UTC

8 months is a lot?
Complex Actuate reports take a long time to develop. In our last project we were working on a reporting application for about 2 1/2 years. There were over 100 reports that were run nightly for over 100 different datasets.

In any case it sounds like you recommend the option of views? (I wasn't sure because you said option 1 but referred to the views in option 2).

Thanks for your time and attention!

Tom Kyte
November 07, 2003 - 10:30 am UTC

whoops -- got the numbers 110% backwards.

VIEWS (#2) and only do #1 when #2 did not perform adequately. sorry about that.

Thanks!!!

LB, November 07, 2003 - 10:37 am UTC


Followup question.

Kashif, November 07, 2003 - 2:49 pm UTC

Hi Tom,

Can you expand on the following cons to having multiple schemas:

"
o causes your shared pool size to increase 1,000 times (shared sql goes down the tubes)
o latching on the shared pool goes WAY up (latching = locks = serialization device = slows you down)
"

I'm not sure how these are cons to having multiple schemas and not single schemas. Thanks.

Kashif

Tom Kyte
November 07, 2003 - 5:22 pm UTC

well, if you have N multiple schemas -- all the same except for the schema name -- you will have N copies of each parsed sql statement in the shared pool.

instead of one copy of "select * from emp", there will be N of them.

instead of having ONE hard parse on "select * from emp" there will be N of them.

One each for each schema.


So, you need a shared pool N times as large

And one that does N times as much work.
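
You can watch this happen in the shared pool: each schema's copy of the statement shows up as another unshared child cursor of the same text. A sketch:

-- statements with a high version_count are being duplicated, not shared
select sql_text, version_count
  from v$sqlarea
 order by version_count desc;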

Multiple schema vs partitioning

Cedric Florimond, September 06, 2004 - 7:18 am UTC

Hi Tom!

We are building an application in which 5 years of data will have to be stored, meaning more than 2,000 million rows in some tables.
Our business is highly client oriented (a client ID is present in the tables or is the main field given for our table joins).
Given the estimated size of the db in 5 years, and the response-time performance we need to ensure, we are looking into possibilities for improving the current data model.
Today we have one schema with non-partitioned tables.

We are studying these possibilities:
- Separate our schema into multiple ones, one per group of clients (say 50/100 schemas)
- Or partition our tables/indexes by list (one list is one group of clients). Note that most of our tables are IOTs.
- Or one schema, but we split the tables by list of clients.

Note that in some scenarios (not the majority, but...), we may have to do multi-client/all-client queries, meaning that if we choose multiple schemas/tables we would need to do a union of queries over several schemas/tables.

What are the PROs/CONs of each solution?
- concerning overall database disk space?
- query retrieval? knowing that we have many scenarios of table access by index and few by table scan, and knowing as well that we could parallelize the heavier queries.
- on the load side: we have daily batches feeding data for many clients.
- maintenance (removing data older than 5 years, managing disks etc...)

Note that the key for partitioning/multiple schemas/multiple tables could also be the data validity date, or a combination of both client and validity date.

Thanks for your support!

Cedric


Tom Kyte
September 06, 2004 - 10:31 am UTC

well, actually, the number of rows doesn't necessarily affect response times.

IF a query returns 5 records, it shouldn't matter if the 5 records came from 200 or 20000000000000 records -- indexing would keep response times flat.

it is only when "a query returns x% of the available records" that size would really, really count.

the key thing I picked up on in your lead paragraph was "5 years of data", that means every Month/Qtr/whatever --- you are in for a purge. I would be looking to a partitioning scheme that

a) doesn't impact my performance negatively (it would be rare for it to have a positive performance impact in a transactional system!)

b) facilitates my purge routines, makes them easier/cleaner.


There is zero percent chance I would look to separate schemas for different client ids; that would be an unholy MESS. single schema, no question about it.

I seriously doubt I would be partitioning by client_id (the tables) as

a) it wouldn't benefit queries too much in my transactional system
b) it wouldn't help me age/purge data out at all



that you say:

- query retrieval? knowing that we have many scenarios of table access by index and few by table scan, and knowing as well that we could parallelize the heavier queries.

clinches it -- partition so you can administer the data easily, but take care NOT to negatively impact performance (eg: partition by time, but use global indexes for access)....


If you have my book "Effective Oracle by Design", I discuss the physics of partitioning -- you really need to understand what it is doing conceptually under the covers in order to implement with it.

ALSO -- BENCHMARK, build some prototypes, scale them up, see what works and what doesn't. You'll never be sorry you spent the time doing so, only sorry if you don't!
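
A sketch of the kind of time-based scheme described above (table, column and boundary names are invented):

create table client_txns (
    clientid number,
    txn_date date,
    payload  varchar2(30) )
partition by range ( txn_date ) (
    partition p_2004_q3 values less than ( to_date('2004-10-01','yyyy-mm-dd') ),
    partition p_2004_q4 values less than ( to_date('2005-01-01','yyyy-mm-dd') ) );

-- a nonpartitioned (hence global) index keeps client lookups flat across partitions
create index client_txns_idx on client_txns ( clientid, txn_date );

-- the quarterly purge becomes a near-instant metadata operation
alter table client_txns drop partition p_2004_q3 update global indexes;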

Multiple Schema

Anupam Manna, February 28, 2005 - 4:46 am UTC

We have three Oracle (9.0.1) databases having the same schemas and the same objects. Now the problem is to identify the non-identical cases.

1. One database for fresh deployment with Master data.
2. One database is required to test the application by test engineers.
3. One database for the developers.

So, if I change database #3 to make some column nullable and forget to change database #2 or #1, how could I find a program that will give a comparison analysis of databases #1, #2 and #3?





Tom Kyte
February 28, 2005 - 7:52 am UTC

sounds like you need to use the same things for your database CODE (create is code, alter is code, drop is code, grant is code) that you use for your CODE CODE

so, what source code control system are you using and why aren't you using the same concepts for database CODE as "code" CODE?
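
If you also want a quick structural diff right now, the dictionary can provide one. A sketch, assuming database links named dev_db and test_db (invented names) between the instances:

-- columns that exist (or differ) in dev but not in test;
-- run the MINUS in both directions to get the full picture
select table_name, column_name, data_type, data_length, nullable
  from user_tab_columns@dev_db
minus
select table_name, column_name, data_type, data_length, nullable
  from user_tab_columns@test_db;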

Loz, February 28, 2005 - 8:00 pm UTC

Hi Tom,
I have an upcoming project that will take data from tables in our application to populate around 40 feeds into another application. The other application has import utilities that can work directly with Oracle queries. My initial thoughts were to create a schema just for this with the feeds implemented as views, materialised views or staging tables. My thinking was that by doing this I've hidden all this junk away from the main schema. It seems to me that there is a real organisational benefit to doing this (in the same way that a package can group related code), but I suspect you will have something to say about this approach. Given the limited information I've provided, would you recommend this approach or not?
Thanks.


Tom Kyte
February 28, 2005 - 8:04 pm UTC

on the surface of it, sounds like an excellent idea. Implements the concepts of "least privileges" security wise (this schema needs only what it needs). Data hiding, encapsulation -- sounds nice

given the inputs above.

No Pros?

Peter Köhler, March 01, 2005 - 3:50 am UTC

Hello Tom


> o harder to manage
> o harder to upgrade
> o harder to patch
> o harder to maintain
> o causes your shared pool size to increase 1,000 times (shared sql goes down the tubes)
> o takes more space
> o queries against the dictionary will be impacted
> o latching on the shared pool goes WAY up (latching = locks = serialization device = slows you down)


I agree on the above points.
But:
> pros:
> o none that I can think of.

Are you kidding?
You mean there is no difference in queries against tables with row counts three orders of magnitude apart?
We have an application with really complex queries, and I am sure there IS a difference between dealing with 10^n and 10^(n+3) rows (think of joins or full table scans).

Greetings from Frankfurt, Germany

Peter Köhler


Tom Kyte
March 01, 2005 - 8:27 am UTC

there are better solutions than maintaining multiple schemas.

first one that popped into mind = partitioning.


Logically: one schema -- code wise, maintenance wise, etc.

Physically, under the covers, it is as if everyone has their own.

Arun Gupta, March 07, 2005 - 3:38 pm UTC

Tom,
We have one existing schema A containing some tables. A new module is being designed which has a totally different set of tables. We have put these tables into a new schema B.

In future, we anticipate that there will be a need to write queries which would frequently join tables from schema A and schema B. Since we are at design stage, would it be a good idea to merge schema A and B into a single schema?

Thanks

Tom Kyte
March 08, 2005 - 7:28 am UTC

Nope, schemas do not matter as far as performance or anything goes.

Schemas matter hugely from a security perspective (you have more levels of controls).

Schemas matter hugely from a manageability perspective (logical separation of objects)

Arun Gupta, March 08, 2005 - 8:38 am UTC

I cannot find the thread but I somehow remember reading on this forum that queries going across schemas can have performance impact so I asked. Maybe it was in some different context. Thanks for replying.


multiple schemas

ds, April 15, 2005 - 6:21 am UTC

I am in the privileged position of having over 150 different schemas, all of the same thing. Nice! Don't know why there isn't a company_id column in a single schema...

Anyway, we have problems with the shared pool, as you would expect.

Now, if cursor_sharing=similar is set, is the sql then shareable across schemas, instead of having over 150 copies of select * from emp (because of the overbinding it can do)?

The queries themselves use binds, so that isn't much of a problem; I'm just wondering if we can avoid the 150 copies.

Thanks

Tom Kyte
April 15, 2005 - 9:15 am UTC

no, they all have different plans (they are all accessing different objects)

there would still be 150 copies of each and every query in the shared pool.

Has to be, they all have different plans, they all have different security attributes.

ds, April 15, 2005 - 10:32 am UTC

thanks for confirming that. Need your opinion on something.

We are going to have 6 connections per 150-odd schemas, so around 1000 connections to a 4-node RAC cluster, along with 2000 connections to a single, different schema (although that one will at least be producing massive amounts of shareable sql).

All the connections to the same schemas will be running practically the same sql, and as you say they won't get the sql shared. Those connections will maybe be running up to 100 sql statements each. So we will have a massive bunch of sql statements which the shared pool probably can't deal with (256M shared pool per instance).

I can see us getting ORA-04031 "unable to allocate..." because the statements will constantly be in use, and those that do get aged out will need to be hard parsed again, causing much CPU consumption and latching.

What would you do in this situation (short of changing the app)? How would you size your shared pool per instance, or configure things to get the fewest errors possible?

How many statements can my shared pool hold?

Thanks

Tom Kyte
April 15, 2005 - 10:47 am UTC

any way to segment the users by schema (schemas A...E go to node 1, and so on)?

that would reduce the amount of shared sql in each pool by about a factor of 4.

that the statements are constantly in use won't prevent them from aging out, but hard parsing in RAC is even more scalability-inhibiting than in non-RAC. So you will want to avoid it.

ds, April 15, 2005 - 11:10 am UTC

hmm, nice idea about different users going to different nodes; I'll look into that.

My point about the in-use statement: I thought that if a statement was being executed at that time, it could not be aged out. So if the pool is full of such statements we get the 4031. Is that correct, or does the 4031 happen some other way?

I really fear a constant stream of sql statements getting aged out and hard parsed again very quickly when the system is at full user load.

Tom Kyte
April 15, 2005 - 11:25 am UTC

it can be; in a tkprof you can see "misses in library cache during execution"...

things age out even when "in use". some things have to stay, and even if you have some of those and free the rest -- you might still not have enough contiguous free memory to satisfy the request.

FGAC and Sequence

Jeeves, April 15, 2005 - 11:14 am UTC

We are migrating from version 7.3 to 10g and consolidating 90 databases that are spread out geographically into a single database instance. All schema objects and business rules are similar at all these sites. Each schema has around 75 tables, 2 stored procedures and 7 sequences. We are planning to add an extra column named OWNER to each of the tables. Since the customer has discouraged us from changing any front-end code, we are planning to implement VPD using FGAC policies.
However, the sequences are the show-stopper here. The customer wants serial numbers without gaps. (I know, I am flogging a dead horse here. Sequences and gap-free don't see eye to eye. Small gaps in the sequence are tolerated; the customer has less tolerance for larger gaps... The sequence here is used for the transaction number...)
Is there any way we can have different sequences and, by means of FGAC, select from the appropriate sequence?
Or should I create 90 schemas and have private sequences for each site, but share the tables from a central schema? Would there be any performance implication?
I read in one of your answers above about the increase in SGA if we have many schemas and the reusability of SQL getting reduced.
Your advice will be highly valued...

Tom Kyte
April 15, 2005 - 11:31 am UTC

but a gap is a gap is a gap. A gap of 5 is not any different than a gap of 20 and I'll bet they have gaps of 19 or 20 as it is (just from database stop/starts in an abrupt manner)

There is nothing FGAC will do to change the underlying object you access, it just protects the object you are accessing.


creating 90 schemas is a sure way to kill performance, you'll have 90 copies of each sql statement to search through to find yours. It'll be a nightmare.

Sorry, don't know what else to say.

Thanks Tom...

Jeeves, April 15, 2005 - 12:00 pm UTC

Thanks a lot for your views. I shall try to reason with the customer about using a single sequence and living with the gaps. Additionally, I'll demonstrate the performance implications of having separate schemas.
It's like a religious viewpoint.
Rational talking does not work. Only proselytizing works ;-(

Many schemas ???

Jeeves, April 18, 2005 - 5:26 am UTC

Tom,
I am a bit confused here. You preferred having many users/schemas while answering this question: http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4725317905805
You have around 48K users in your system. Is it not killing the system?
Sequence issues apart, am I perceiving things wrongly here?

Regards
Jeeva


Tom Kyte
April 18, 2005 - 7:11 am UTC

for authentication/authorization.

NOT for storing 48k COPIES of the same schema and code.


I have 48k *named accounts with privileges*

I have one application schema for an application.


There is a difference between a user account for authentication, identification, authorization.

And a schema that is copied over and over with the same tables and code.

Still confused ...

Jeeves, April 18, 2005 - 7:47 am UTC

Tom,
I intend to do something similar.
I shall have a central application schema with all the database objects except the sequences, and another 90 user accounts, each with its own sequences.
From the central schema, I would grant privileges on all objects to the 90 user accounts.
Since all of the application logic is in the front end (PowerBuilder), I am not copying any code into every schema.
Under this model, do you think performance would still be degraded?

Regards
Jeeva.

Tom Kyte
April 18, 2005 - 8:44 am UTC

why except the sequences? what could be the goal of 90 separate sequences? It would preclude you from using stored procedures, which (in my opinion) would be a really *bad* idea. It would cause all SQL to be non-shareable across these schemas (which in my experience is a horribly bad thing).

you would have 90 copies of

insert into t values ( s.nextval, .... );

since there are in fact 90 S's and each have different "plans"


I know this would be a bad idea, and I cannot imagine the goal for having 90 sequences (it would only seem to be big time confusing!)

One last question...may be

Jeeves, April 18, 2005 - 9:36 am UTC

Tom,
I checked all the modules of my PowerBuilder application and there are only 6 select statements where seq.nextval is used, and surprisingly seq.currval is not used at all. Neither of the two stored procedures uses sequences. Given this scenario, just to satisfy the customer's "gap-free imaginary world", if I create separate sequences in each schema but share every table, will it still be a performance issue with "non-shareable" SQL statements?

Thanks for your views...

Regards
Jeeva.


Tom Kyte
April 18, 2005 - 9:59 am UTC

it will not be gap free.
it has NOT been gap free.
it cannot be gap free.


you have non-shareable sql.  watch:

ops$tkyte@ORA9IR2> create user a identified by a;
 
User created.
 
ops$tkyte@ORA9IR2> create user b identified by b;
 
User created.
 
ops$tkyte@ORA9IR2> grant create session, create sequence to a;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2> grant create session, create sequence to b;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2> grant all on t to a;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2> grant all on t to b;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select sql_text from v$sql where sql_text like 'insert into ops$tkyte.t %';
 
no rows selected
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect a/a
ops$tkyte@ORA9IR2> set termout off
a@ORA9IR2> set termout on
a@ORA9IR2> create sequence s;
 
Sequence created.
 
a@ORA9IR2> insert into ops$tkyte.t values ( s.nextval );
 
1 row created.
 
a@ORA9IR2>
a@ORA9IR2> @connect b/b
a@ORA9IR2> set termout off
b@ORA9IR2> set termout on
b@ORA9IR2> create sequence s;
 
Sequence created.
 
b@ORA9IR2> insert into ops$tkyte.t values ( s.nextval );
 
1 row created.
 
b@ORA9IR2>
b@ORA9IR2> @connect /
b@ORA9IR2> set termout off
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2> select sql_text from v$sql where sql_text like 'insert into ops$tkyte.t %';
 
SQL_TEXT
-------------------------------------------------------------------------------
insert into ops$tkyte.t values ( s.nextval )
insert into ops$tkyte.t values ( s.nextval )
 
ops$tkyte@ORA9IR2>


you will have 90x6 copies of those sql statements.  Up to you.  I would not recommend this path, but your call at the end of the day. 

Its me again...

Jeeves, April 18, 2005 - 10:42 am UTC

Thanks a lot Tom,
I just talked to the customer and walked through the lines of code that use sequences.
According to the business requirements, the part of the code that references sequences is used only at the end of each Friday. It is basically used only for some payment authorizations and for creating payment-advice files. On the other days these modules are not touched. If the customer ("Her Majesty's") can live with some performance degradation for a few hours, in exchange for having sequences for each site, can I go ahead?
Apart from the statements that reference sequences, I believe everything else can be re-used. Am I right in making this assumption? I tried executing a couple of statements and it seemed to be true.

Sigh... Tom... I know that, after bind variables, gap-free sequences may be giving you heartache... Sorry for bothering you... :-(

Thanks a lot...
Jeeva.


Tom Kyte
April 18, 2005 - 10:47 am UTC

6 statements, 90 copies, once a week, is probably going to be OK.

the other sql should be shareable as long as the objects are the same

Thanks Thanks Thanks

Jeeves, April 18, 2005 - 10:56 am UTC

Thanks a lot, Tom..
I have always learnt something new from your site on a regular basis. Thanks to your site, I extricated myself from an ongoing project costing over 9 figures, which stubbornly disallowed stored procedures and even data-integrity constraints for the sake of "database independence". Some bright Java guys are using "last_number + 1 from some sequence table" logic to implement sequence behaviour. I am gleefully watching the implementation of that project, now some 15 months on....

Thanks once again..

Regards
Jeeva


Difference between schema and user

A reader, November 10, 2005 - 5:25 am UTC

Hi Tom

I have a question: what is the difference between SCHEMA and USER? I've tried to summarise; please comment.

(1) USER and SCHEMA are not the same; they are different.
(2) A USER is a named account to log in to Oracle.
(3) A SCHEMA is the name of a collection of database objects.
(4) A SCHEMA can be created separately, but that's nothing but the creation of objects under a particular USER.
(5) In real life, multiple USERs are given privileges on a single SCHEMA to enforce security and integrity.
(6) SCHEMA and USER have a many-to-many relationship (which is against most of the books, which describe the relationship as one-to-one).
(7) When a USER is created, a SCHEMA with the same name is also created.
(8) A USER is a database object, but a SCHEMA is not a database object.

Thanks
Debashis


Tom Kyte
November 11, 2005 - 11:40 am UTC

conceptually, they are the "same"

I would not over analyze the semantics of these two names - I just consider them the same (a schema that owns objects has the ability to login to oracle. if you have the ability to login to oracle you can get the ability to create and own objects).

Difference between schema and user

A reader, November 12, 2005 - 9:05 am UTC

Hi Tom

Good morning. Thanks for the reply.
Well, I can understand what you've said. But studying a lot
of books/technical papers and by discussing with Oracle
veterans , I've got a real confusion whether a SCHEMA and
a USER are same or different.
If somebody asks "What is the difference
between schema and user" (and I face this question very often ), then what should be my answer ?

Thanks
Debashis

Tom Kyte
November 12, 2005 - 10:59 am UTC

I would say "effectively, nothing".


A schema is technically a collection of objects. But in Oracle at least, there is always a user associated with a schema and for the most part, every user has a schema (might be empty - but it is "there").



Subject- rather than application-based schemas

Leuis, January 28, 2007 - 6:02 pm UTC

Truly valuable discussion, thank you. I'm working for a large organisation and we are considering building an enterprise database (10g) with a few large schemas based around subjects rather than applications. Each subject (of which there are around 7-8), e.g. Customer, Organisation, Product etc., would give rise to a schema. Business data would be stored in these, and application schemas would only store app-specific data (parameters, UI persistence etc.).

The aim is to form a framework around which to build an enterprise data-modelling approach, rather than one focused on narrower business problems (which hitherto has resulted in many incoherent data models with duplication, overlap etc.).

Users connect to our database via application accounts. Do you think it would be better to use synonyms in the app schemas, or to have applications target the destination schemas directly? Any traps you can see in this approach?

Can I grant a privilege on schema to user/role?

Joji Jose, March 13, 2007 - 11:16 am UTC

Hi Tom,

You said that it's efficient to have multiple schemas, which ensures high manageability. Is it possible to give a set of privileges on a whole schema to a user/role? I am able to grant privileges on individual schema objects to these roles, but when an object is recreated, all the grants need to be re-granted.

The question is: I want to grant create, select, insert, drop (privileges) on staging (schema) to load_users (role).

What is your recommendation?
Thank you in advance.

Tom Kyte
March 13, 2007 - 11:37 am UTC

no, you cannot

3 Schemas or 1 Schema - Oracle 10g

Tsang, March 15, 2007 - 9:45 pm UTC

Hi Tom,

I am loving your book - Effective Oracle by Design!

Hope you would provide recommendation to the below situation...

. We have a legacy system (3 applications, each application has its own database). The databases are relatively small, and old data gets purged yearly.
. We are migrating to Oracle 10g with a web-based front end (in-house development).
. Users will access one or more of these 3 applications. Users are organized by site.
. There are currently about 360 tables combined in the 3 databases. Total users may be a few thousand (not sure what the total concurrent users will be).

My thoughts are along these 2 options:

1. Create 1 schema for each application (3 schemas).
2. Create only 1 schema for all 3 applications (1 schema).

What would you recommend and why?

If you recommend option 1, how would you best organize users/roles since users will access one or more of these 3 applications?

Thanks a bunch in advance.




Tom Kyte
March 17, 2007 - 2:23 pm UTC

#1 sounds most reasonable.

it allows for security (application 1 cannot read/write application 2 tables without permission)

it allows for separation (maintenance, understanding benefit)

I don't know what would be harder or even different about organizing users/roles with 3 schemas rather than one.


You would typically create roles for the applications (and each application would typically have more than one role associated with it). You would grant those to the end users.
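
For example (role, object and user names are invented):

create role app1_readonly;
create role app1_entry;

grant select on app1.orders to app1_readonly;
grant select, insert, update on app1.orders to app1_entry;

-- an end user who works in applications 1 and 3
-- (app3_readonly would be defined the same way)
grant app1_entry, app3_readonly to alice;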

cons for one schema?

Ben, August 20, 2007 - 2:36 pm UTC

Hi Tom,
Thanks for all the information on the site.

We are looking at combining about 40 MS SQL Server databases into one Oracle 10g database. They house data from our company website, and we are thinking of combining them all into one schema.

What, if any, functionality could we lose by doing this? I'm thinking in terms of maintenance (imp, exp, rman, etc.), user functionality and security.

Thanks,
Ben
Tom Kyte
August 22, 2007 - 11:01 am UTC

you lose nothing

you gain everything.

A reader, September 21, 2007 - 1:28 pm UTC

Hi Tom,

Thanks for your help.

I just had a meeting with the technical architect. We have a database with 5 schemas (A, B, C, D, E) and we would like to build a report on data coming from schemas A, B and C.
The technical architect advises creating a new schema F with a new table in it (F.table). This table would be populated with data coming from schemas A, B and C.

I advised instead creating this new table in the existing schema A and populating it with data coming from A, B and C.

Could you please give your opinion (pros and cons)

Thanks in advance
Tom Kyte
September 26, 2007 - 1:13 pm UTC

how about

no new schema, just query what you have.

what is wrong with that?

schema

sam, October 30, 2007 - 10:24 pm UTC

tom:

I am confused about whether you recommend one schema per application or not.

I have this situation, and I am trying to see if it is beneficial to create a second schema, from an organization, security and cleanliness point of view.

We have an existing old instance/db named "HIBD". It is not very clean: many unused tables, procedures, etc.

It is accessed by production users via web and client/server PowerBuilder applications. A new requirement calls for several new Perl batch programs to query this database continuously, then do some work on a filesystem and report status updates back to the DB. The user in this case is an automated batch program and http client.

Would it be more beneficial to create a second schema and place all the new oracle code and tables required for the messaging application into it?

2. Would having two accounts/schemas create issues in accessing tables across the two schemas, or can they work as if they were one schema?
Schema 1 would need to access/update tables in schema 2, and schema 2 would need to access/update tables in schema 1.

please advise.

Tom Kyte
November 01, 2007 - 3:54 pm UTC

you are confused?

Let me clear that up:

the original question was "we want to install our application into schema1, schema2, .... and so on. For each customer, we'll install our stuff again"

My response:

do not do that.


Your question is different. You have a new set of requirements, a new set of data coming in. It would make sense to break things out by the schema, yes. You are not talking about having the SAME SET OF STUFF installed over and over, you are talking about "modularizing your data", keeping what goes together - together.

and you can absolutely query across schemas.

schema

Sam, November 01, 2007 - 6:26 pm UTC

Tom:

thanks, I am taking your advice. your answers always make sense and are excellent.

In order for schemas 1 and 2 to look like one, how many grants do you have to issue in each? I want schema 1 to run all the code in schema 2 and access all its tables/objects as if they were local, and vice versa.
Tom Kyte
November 02, 2007 - 12:16 pm UTC

you will grant to schema1 that which needs to be granted. if you want it to see all code in schema2, you will have one grant per schema object.

schema

sam, November 02, 2007 - 11:05 pm UTC

Tom:

that sounds like a lot of work.

Are you saying that I have to go to each schema and run a single grant, for each object, to the other schema -- every table, procedure, trigger, view, etc.?

Isn't there a global command that takes care of it, or do you suggest creating a script somehow?
Tom Kyte
November 05, 2007 - 11:13 am UTC

you need to, want to, better follow the concept of LEAST PRIVILEGES.

You would be hard pressed to convince me that this new function needs EVERYTHING
You would be hard pressed to convince me that the existing code needs EVERYTHING

so start by granting nothing, and adding grants only as you verify they are actually needed.

Software development unfortunately still takes work today in 2007.
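
There is no single "grant everything" command, but once you have verified which objects genuinely need to be exposed, you can let the dictionary write the script for you. A sketch (SCHEMA2 and the object list are placeholders):

-- connected as schema1: generate one grant per verified object,
-- then spool the output to a file and run it
select 'grant select on ' || object_name || ' to schema2;'
  from user_objects
 where object_type in ('TABLE', 'VIEW')
   and object_name in ('BOOKS', 'VENDORS');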

schema

sam, November 02, 2007 - 11:07 pm UTC

Tom:

another small question on schemas.

Is there a way to see the names of the schemas created in a database?
when I connect in sql*plus as dbaadmin/dbaadmin@MPP,

is the schema name = dbaadmin and the database name = MPP?


Tom Kyte
November 05, 2007 - 11:13 am UTC

pretty much select * from all_users would do it, or if you just want things that own things you can see, select distinct owner from all_objects

schema

sam, November 05, 2007 - 4:39 pm UTC

Tom:

One last thing I forgot to mention.

Some of the messaging application tables in schema "B" will need to be tied to a master book table and another vendor table in the other schema "A".

Would there be any constraint on defining foreign keys and triggers if I went with the 2-schema solution? I want to be able to create a foreign key from one table in schema B to a table in schema A. Also, I want to define an "after insert" trigger on schema A.table_A that inserts/updates a record in schema B.table_b.

Would you still advise having two schemas under the above conditions?

thank you,



Tom Kyte
November 06, 2007 - 9:23 am UTC

foreign keys can cross schemas, but it would be something you would not want to do if it is avoidable (seems to me there is a tight coupling between this new app and the old one, eg: there is one data model)

... Also I want to define
a "after insert" trigger on schema A.table_A that inserts/updates a record in
schema B.table_b. ..

really, I'd be hard pressed to understand that one.

schema

A reader, November 06, 2007 - 12:30 pm UTC

Tom:

Well, I have an existing magazine-production data model, "SCHEMA A". Magazines are ordered using the system and received by mail from vendors; the business then does a few QA tests on the files, and the results are logged into the "SCHEMA A" application.

The addition we are making will force vendors to upload magazine files instead of mailing them.
This requires a vendor to upload the magazine to a server separate from the DB server, and then check the book in in the existing system (just to alert the business that he did upload).

There will be a new set of batch jobs on other external servers that process the book based on flags set in new tables, "SCHEMA B", that we will create to interface with the batch programs. Some of these flags in SCHEMA B will be set from user actions in SCHEMA A. For example, QA approves a book in SCHEMA A, and we then set a flag in SCHEMA B so that a batch job does the processing and moves it to another location.

I am trying to separate the schema for "production users" from "batch program logins" for better organization and security controls. However, there must be some interface between the two schemas.

For example, the master magazine table is in the existing schema A. All magazine numbers are there. When a vendor uploads a magazine into a table in SCHEMA A, I have to insert records into SCHEMA B for the batch programs to set/unset. This is why I was asking about the trigger.

Basically, schema B's data model will provide all the interfacing required with the batch programs. However, it will need to talk to schema A's tables, since the master book table and QA tables are there.

Do you think a new schema makes sense, and would I be able to implement the interface without issues coming up?


Tom Kyte
November 06, 2007 - 3:09 pm UTC

two schemas make sense, foreign keys between them are tricky (schema a will be able to drop the schema b foreign key).

it makes sense from security to segregate - always.
and to grant just what you NEED, nothing more

I am currently voting for two schemas at least. Maybe three:

one for data, one for code.

schema

A reader, November 06, 2007 - 5:18 pm UTC

Tom:

Thanks for the fast response.

1. You say that foreign keys are tricky! I assume they can still be created, but schema A can still drop a foreign key in schema B.

2. How about triggers -- would I have issues there? If a vendor inserted a record in schema A, can a trigger defined in schema A insert records into a messaging table in schema B? I assume that should not be a problem.

3. Regarding your idea of 3 schemas, do you mean something like this:

Schema A = current magazine production system
Schema B = data tables for the messaging schema (talks to A and C)
Schema C = code for messaging that talks to the batch programs on external servers via http (talks to B)

Tom Kyte
November 07, 2007 - 5:05 pm UTC

1) they can be done, but conventionally - it would surprise many people that you did it. They would not immediately "see them", they can accidentally drop yours (doing a reorganization or something).

2) assuming the proper grants, no problem. see #1 again.

3) or schema a to hold data, schema b to hold some code, schema c to hold some code, use schemas as a way to secure and segregate things - logically - according to your needs.
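
For what it's worth, the cross-schema trigger discussed above needs nothing beyond the grant. A sketch with invented names:

-- as B: let A write into the queue table
grant insert on b.batch_queue to a;

-- as A: queue a row for the batch programs on every upload
create or replace trigger a.magazine_upload_trg
after insert on a.magazine_files
for each row
begin
    insert into b.batch_queue ( magazine_id, status )
    values ( :new.magazine_id, 'PENDING' );
end;
/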

Fk

A reader, December 28, 2007 - 10:01 pm UTC

Tom:

Would creating a foreign key from one schema to another require anything special, or is it the same as within one schema, except that you refer to the other schema's name?

2. When the DBA created the 2nd schema, I found that I could view all the tables in schema A using "select * from schemaA.table". Is this normal, or must he have granted select on all the tables in schema A to schema B?


Tom Kyte
January 01, 2008 - 5:30 pm UTC

you need the permission to do it, but nothing 'special'



if the DBA created another schema and you were able to see all of their stuff, it is likely you were granted "select any table" and are so insecure in your implementation that any talk of security is not worthwhile right now.

You might want to ask the DBA how it is you were able to do that. If you have select any table, you pretty much can read any bit of data - and that is not good.

Read only schemas ??

reader, May 20, 2008 - 3:51 pm UTC

Tom,

We have "INFO_OWNER" schema which owns all the tables of our applications. Now soon we will be having reporting layer on top of this schema. For this reporting application we are thinking of creating seperate schema which will have read only privileges on "INFO_OWNER", we have already created ROLE for it.

But in reports we dont want to refer object from "INFO_OWNER" as owner.table_name nor we want to create a synonym.

Is there any way in Oracle via which we achieve this ?
Tom Kyte
May 20, 2008 - 4:14 pm UTC

alter session set current_schema = info_owner;


by default:

select * from t;

will be:

select * from <current_logged_in_schema>.t;


by using that alter session, it will become

select * from info_owner.t;

instead.

Thank you, sir

reader, May 20, 2008 - 4:25 pm UTC

Tom, do we need to issue the ALTER SESSION ... command each time, or just once?
Tom Kyte
May 20, 2008 - 4:48 pm UTC

once per session.
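
The once-per-session step can itself be automated with a logon trigger, so the reporting account never has to remember it. A sketch (REPORT_USER is a hypothetical account; the trigger would be created by a suitably privileged DBA):

create or replace trigger report_user_logon
after logon on report_user.schema
begin
    execute immediate 'alter session set current_schema = info_owner';
end;
/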

What about Access restriction

Shrikant, June 25, 2008 - 4:42 pm UTC

Hi Tom,

I have two questions on this

1. With the command "alter session set current_schema = info_owner;", can we enforce access authorization? I mean, can we have, say, INFO_USER with just read-only permissions on tables, views and PL/SQL objects (procedures, functions, packages)?

2. If INFO_OWNER has some public/private synonyms created, will INFO_USER be able to use those synonyms?

Thanks -
Shrikant


Tom Kyte
June 25, 2008 - 5:09 pm UTC

current_schema changes NOTHING from a security perspective. All it does is change the default schema that is prepended to objects.

select * from t

becomes

select * from CURRENT_SCHEMA.t

if you cannot select * from current_schema.t, you cannot select from it - this will change nothing.

it will change nothing but the default schema name that is used - synonyms and whatever are unaffected by this - it just changes the default name.


Thanks for the answer, Sir

A reader, June 26, 2008 - 5:01 pm UTC


how best to move from multiple schemas to single schema

Chris Gould, April 27, 2022 - 9:29 am UTC

Hi
Returning to the original question, I find myself at a company that has implemented a schema-per-customer model. The database is experiencing all the symptoms you'd expect, but they still believe the design is correct, convinced it gives more "security" by keeping each company's data in separate tables. I have referred them to multiple books and articles (such as this one), but they're unmoved.
Is there anything else you can suggest that might persuade them that the design is flawed?
Connor McDonald
April 28, 2022 - 4:15 am UTC

That's tough to answer, because it's going to come down to the particular business model.

1 schema per customer is an *awesome* solution....for a small number of customers :-)

A lot of this comes down to priorities and application usage, eg

Would I deploy something like (say) Oracle Financials (50,000+ database objects) on a schema-per-customer basis? Hell no.

Would I deploy an app with insane performance demands (millions txns per second) on a schema-per-customer basis (customers > 100) ... Probably not.

Would I deploy an app with modest demands, where data separation/security is absolutely the #1 requirement that my business lives and dies on, on a schema-per-customer basis? ... I probably would.

Would I do schema-per-customer if I had robust automation which means I can scale my deployments trivially, or separate customers into separate databases with minimal effort etc...I would definitely consider it. (Conversely - no automation would mean excessive schemas is totally off the table)

Don't forget - Tom's original answer is from 2001... servers were different then :-)

how best to move from multiple schemas to single schema

Chris Gould, April 28, 2022 - 8:47 am UTC

Thanks for your reply, Connor.
You concluded by saying that the original answer by Tom was from 2001 and "servers were different then".
But the internal architecture of Oracle, specifically the shared pool, is essentially still the same now. So aren't Tom's points about shared-pool size, shared SQL and latching still valid?

Connor McDonald
May 03, 2022 - 2:44 am UTC

Back in the day, we had a single shared pool latch....

Times change...Try this

select name, count(*) from v$latch_children
group by name
order by 1

I didn't say: "Hey, machines are great, just go nuts" :-) I said "What you probably could not do in 2001, you can probably do in 2022"

But always with a mind toward a disciplined and methodical approach.

impact on shared pool and shared SQL

Chris London, UK, May 05, 2022 - 10:29 am UTC

Thanks again for your reply, Connor.

What about Tom's points about shared-pool management, and the "schema-per-customer" approach resulting in far more unique SQL statements and little shared SQL?
The problem I'm seeing, with each company (customer) having its own schema but all running the same application code, is that the performance of the database is impacted not just by the number of users online at any one time, but by the number of different companies to which they belong. If they're all from one company, then the SQL statements are shared. But if they're from different companies, then I'll have a version of the SQL for each of those companies, with no sharing between companies (because each company has its own tables, even though they're identical in structure to every other company's). So the database will behave very differently if I run a test simulating 10,000 online users all from the same company versus spread equally across 1000 different companies.

If there were a common schema containing the data for all companies, then it wouldn't matter which companies the 10,000 online users came from, as all the SQL could be shared across all companies.
Connor McDonald
May 11, 2022 - 5:37 am UTC

As per my previous comment:

I didn't say: "Hey, machines are great, just go nuts" :-) I said "What you probably could not do in 2001, you can probably do in 2022"

But always with a mind toward a disciplined and methodical approach.


50 schemas I probably won't lose sleep over.
50,000 schemas I'll take a lot more care with :-)

eg, Our internal APEX instance here at Oracle has ~5,000 schemas and the external one (apex.oracle.com) I think has around 30,000.

Obviously they are not all the same code in each schema, but they run just fine.

impact on shared pool and shared SQL

Chris London, UK, May 11, 2022 - 8:10 am UTC

Hi Connor,
I think we're talking at slightly crossed purposes now. The situation I am describing is a single application with many different corporate clients using it. There is one schema containing the application code, but each client has a schema containing an IDENTICAL set of tables holding only their data.

The application code is primarily packages, all of which use AUTHID CURRENT_USER (i.e. the invoker-rights model).

Each client has many end-users who can all log in online and use the application to work with their client-data. There could be tens of thousands of users online at any one time from any number of different clients.

Now consider the impact of this on the SGA and specifically the shared pool. I don’t believe that’s any different to how it was in 2001 when the original question was asked.

Could you comment just on that aspect, please?

Connor McDonald
May 16, 2022 - 2:33 am UTC

My point was - the shared pool *has* changed since then....a lot

- dynamically sizing
- mutexes instead of latches
- more latches
- some re-architecture for multitenant, including the application container architecture

which brings me back to the original comment I made:

20 years ago, having 10 schemas in this model I'd be concerned about. Now, I'd probably start to be concerned at (say) the 100 schema mark.

In *all* cases, when it comes to "tens of thousands of users online", then my much greater concern would be appropriate connection pool management.

And overriding all of this, is the importance of testing and proof-of-concept, eg, I don't want someone reading this comment and their takeaway is: "OK, 10 schemas is 100% guaranteed to be fine".

A crappy app full of dynamic SQL can be bad on 1, 10, or 1000 schemas :-)



impact on shared pool and shared SQL

Peter Hitchman, May 13, 2022 - 3:49 pm UTC

Hi
In general I would say that this way of doing things is no longer an Oracle Database anti-pattern; locking and latching have moved on in the last 20 years and are a lot more granular.

Have you considered that the shared pool in this instance may simply be too small, given the design? Have steps been taken to make sure that the SQL being executed is shareable, i.e. using bind variables rather than constants, so that CPU time is not wasted acquiring the required latches and carrying out hard parses? Also, in this scenario you need to be careful about rebuilding packages and about DDL changes, as these can cause high levels of latch/lock contention as Oracle recompiles.
Maybe upping session_cached_cursors would help?