Multi-tenant Schema design - Single Schema vs Multi-schema

Question and Answer

Connor McDonald

Thanks for the question, Ketan.

Asked: February 03, 2016 - 4:09 pm UTC

Last updated: August 05, 2019 - 9:17 am UTC

Version: 11g

Viewed 10K+ times

You Asked

We are building a new multi-tenant application and are considering one of two approaches:
- 1000 customers in ten schemas altogether
- Each customer in its own schema.

We are interested in getting your feedback so that we can design it right for Oracle database.

Design approach 1: Put 100 customers in one schema. With 1000 customers, we need 10 schemas in total per host. Each table will have a CUSTOMER_ID column, and the application will filter records by customer ID (see the sketch after the lists below).

Advantage:
- Fewer schemas/tables to manage
- Fewer hard parses
- Better SQL caching

Disadvantage:
- Performance may be slower. Because the same block will store data from multiple customers, a SQL fetch may have to read more blocks to return the same results.
- Unpredictable performance: as the data for one customer grows, SQL queries start getting slower for the other customers.
- Wrong SQL execution plan: when one customer has much more data, the execution plan will suit that customer but may not be right for all the others, which can slow queries for every other customer.
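For illustration, here is a minimal sketch of that shared-schema shape (the ORDERS table and its columns are hypothetical, not part of the original question):

-- Hypothetical shape for design approach 1: every table carries
-- CUSTOMER_ID, and every query filters on it.
create table orders (
  customer_id  number  not null,
  order_id     number  not null,
  order_date   date    not null,
  constraint orders_pk primary key (customer_id, order_id)
);

-- Every application query must include the tenant predicate:
select count(*)
from   orders
where  customer_id = :customer_id;

Leading the primary key with CUSTOMER_ID keeps each customer's index entries together, which softens the multi-block-fetch concern above.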

Design approach 2: Put each customer in a different schema. With 1000 customers, we will create 1000 schemas in total per host.

Advantage:
- Performance will be extremely fast. As each table stores only that customer's data, a SQL fetch will have to process fewer blocks to return the same results.
- Predictable performance: SQL performance depends only on how much data that customer has. Less data means fewer blocks to fetch from disk.
- Right SQL execution plan: the execution plan is built for that customer's data alone, which makes the SQL execute fast.
- Easy to export/import customer data to move it to different hardware (see the Data Pump sketch after these lists)
- If a customer gets too big or needs more security, it is easy to move them to a dedicated host
- We could also introduce new features in a sandbox environment using the customer's own data. This is only possible if we can easily export/import customer data.

Disadvantage:
- Many schemas/tables, which may impact the dictionary (metadata) cache
- 1000 times more hard parses
- More memory needed for SQL caching
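As a hedged illustration of the export/import advantage above (the schema name CUST42, the directory, and the file name are assumptions made for this sketch), one customer's schema could be exported with the DBMS_DATAPUMP API:

-- Sketch only: export schema CUST42 so that customer can be moved
-- to different hardware or refreshed into a sandbox.
declare
  h   number;
  st  varchar2(30);
begin
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(h, filename => 'cust42.dmp', directory => 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', q'[IN ('CUST42')]');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, st);
  dbms_output.put_line('job finished with state: '||st);
end;
/

The matching import on the target host would be a schema-mode import job (or the expdp/impdp command-line clients used the same way).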

In both cases, we are planning to create an application user which will connect to the customer schema using "SCHEMA.TABLE" notation.

Which approach would you prefer, and why? Do you foresee problems if we create 1000 schemas per host, each with about 400-500 tables?

and Connor said...

I don't have a hard and fast rule, but 1000 schemas, each presumably with its own copy of the data structures, PL/SQL code, triggers, synonyms, views, etc etc... sounds like trouble brewing to me.

Conversely, one schema with a CUSTOMER_ID column on every table also sounds like a source of potential issues as customer needs change, eg, you might have a tiny customer alongside a giant one, with the volume of one impacting the other.

For me, I'd like to keep things in a state where the decision can be made as it needs to be made, rather than set in stone.

If the 'application user' you mention does the following via a logon trigger:

alter session set current_schema = blah;

and does *not* reference schema names in the code, but *does* always reference (say) customer_id in predicates for all of its queries, then it might *start* in a schema shared with 10 other customers. If it becomes a "problem" schema, you can trivially separate it into its own schema using Data Pump, change the logon trigger, and you're done (a sketch of such a trigger follows). The same logic lets you easily move some customers to new servers etc if the need arises.
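A minimal sketch of such a logon trigger (the CUST_SCHEMA_MAP mapping table and its use are assumptions for illustration, not part of the original answer):

-- Map each application user to the schema it should currently see.
create table cust_schema_map (
  username     varchar2(128) primary key,
  schema_name  varchar2(128) not null
);

create or replace trigger app_user_logon
after logon on database
declare
  l_schema cust_schema_map.schema_name%type;
begin
  select schema_name
    into l_schema
    from cust_schema_map
   where username = sys_context('USERENV', 'SESSION_USER');

  execute immediate
    'alter session set current_schema = ' ||
    dbms_assert.enquote_name(l_schema, capitalize => false);
exception
  when no_data_found then
    null;  -- not an application user; leave the default schema alone
end;
/

Relocating a customer then only means moving the data and updating that customer's row in the mapping table; no application code changes.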

So that lets you start small, grow easily, and adapt to whatever you encounter.

Hope that helps.


Comments

Multi-tenant Schema Design

Ketan Bengali, February 04, 2016 - 8:34 pm UTC

Thanks a lot for the response. Your feedback is valuable.

We don't use any PL/SQL code, triggers, views, etc.; we are using Oracle as a data store only. Our application takes care of upgrading schemas automatically, with no manual intervention.

Do you still think keeping customers together in one schema, and moving one out only when needed, is the right choice from a performance point of view?

Thanks,

Ketan.


Connor McDonald
February 05, 2016 - 1:29 am UTC

Yes. For example, here are 50 customers, each with a table:

SQL> begin
  2  for i in 1 .. 50 loop
  3    begin
  4      execute immediate 'drop user cust'||i||' cascade';
  5    exception
  6      when others then null;
  7    end;
  8    execute immediate 'grant resource, connect to cust'||i||' identified by blah';
  9    execute immediate 'alter user cust'||i||' quota 10m on users';
 10    execute immediate 'create table cust'||i||'.T as select * from all_objects where rownum = 1';
 11  end loop;
 12  end;
 13  /

PL/SQL procedure successfully completed.


Now they all run their "app" (the query below simply generates a script of 50 connect/count pairs; the duplicate SORTKEY aliases are hidden by the COL ... NOPRI command):

SQL> col sortkey nopri
SQL> select rownum sortkey, 1 sortkey, 'connect cust'||level||'/blah' from dual connect by level <= 50
  2  union all
  3  select rownum sortkey, 2 sortkey, 'exec declare x int; begin select count(*) into x from t; end;' from dual connect by level <= 50
  4  order by 1,2;

connect cust1/blah
exec declare x int; begin select count(*) into x from t; end;
connect cust2/blah
exec declare x int; begin select count(*) into x from t; end;
connect cust3/blah
exec declare x int; begin select count(*) into x from t; end;
...
...
...


At the end of that, what do I have in my shared pool?

SQL> @pt "select * from v$sqlarea where sql_text like 'SELECT COUNT(*) FROM T'"
SQL_TEXT                      : SELECT COUNT(*) FROM T
SQL_FULLTEXT                  : SELECT COUNT(*) FROM T
SQL_ID                        : 48k1njhd4ras3
SHARABLE_MEM                  : 664359
PERSISTENT_MEM                : 2354976
RUNTIME_MEM                   : 2293776
SORTS                         : 0
VERSION_COUNT                 : 51   <<<==========
LOADED_VERSIONS               : 51

One SQL text, but 51 child cursors: each schema's T is a different object, so every schema pays its own hard parse and its own slice of shared pool memory.


Multi-Tenant schema

TABRAIZ HASSAN KHAN, April 29, 2019 - 4:17 am UTC

Could you please tell me what defines a customer as small or big, in terms of the number of records in their tables?

Connor McDonald
April 30, 2019 - 3:09 am UTC

There is no relationship.

I've worked at places with tiny databases but millions of transactions, and places with huge databases that have very little activity at all.

My definition of a "big" database is simple: any database that takes considerable time and energy to manage.

In 12.2

Rajeshwaran, Jeyabal, April 30, 2019 - 11:57 am UTC

Given that the database version is 12.2, would this be a potential use case for Application Containers?
Connor McDonald
May 02, 2019 - 6:44 am UTC

Potentially, but I don't think that escapes the parsing overhead of 1000 different schemas. A lot would depend on expected volume, activity, etc.
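For reference, a minimal sketch of the 12.2 Application Container shape being asked about (container names, the application name, and passwords are illustrative, not from the original answer):

-- The application root holds the common application definition.
create pluggable database app_root as application container
  admin user app_admin identified by app_pwd;
alter pluggable database app_root open;

-- Inside APP_ROOT: install the shared application objects once.
alter session set container = app_root;
alter pluggable database application myapp begin install '1.0';
-- ... create the shared tables, code, etc. here ...
alter pluggable database application myapp end install '1.0';

-- Then create one application PDB per customer from the app root.
create pluggable database cust1 admin user pdb_admin identified by pdb_pwd;

Each customer PDB shares the common definition, but, as noted above, this does not by itself remove the per-tenant parsing overhead.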

New Question

vijayender tarun, August 02, 2019 - 9:33 pm UTC

Any suggestions on how many tables per schema, or does it not matter?
I have a schema with 500 tables and am expecting 200 more in the coming days.
If needed, new tables can easily be created in a new schema of their own when they serve a single, distinct piece of functionality; for example, if we add a new sales module, most of the new tables could be created in a SALES schema.

Is there an accepted thought process here: one huge schema per database, or multiple schemas created on the basis of the functionality they support?
Connor McDonald
August 05, 2019 - 9:17 am UTC

Use schemas to suit your business and technical needs.

eg for business, I might want to have more granular control over what privileges exist between the schemas.

eg for technical, I might want to copy subsets of data around easily, so separation by schema and/or tablespace might facilitate that.

So yes, you *could* have a single giant schema, but typically the demands of your business dictate otherwise.
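As a tiny illustration of the granular-privilege point above (the schema and grantee names are hypothetical):

-- Separate schemas let each module expose only what other parts need.
grant select on sales.orders to reporting_app;
grant select, insert, update on sales.orders to order_entry_app;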
