

Question and Answer

Tom Kyte

Thanks for the question, Chris.

Asked: August 04, 2008 - 6:50 am UTC

Last updated: September 20, 2010 - 1:56 pm UTC

Version: 10.2.0

Viewed 1000+ times

You Asked

Tom -
A data-architect at work has proposed that we should start using separate database accounts to hold the "code" (packages, procedures,views, etc) and the "data" (tables, materialised views, indexes etc) for an application. I've never come across this idea before and it seems to be contrary to concepts of encapsulation, since the application will be spread across at least two schemas and require more admin overhead maintaining the necessary grants between them.
Are there any situations you can think of where this would be a recommended approach?



and Tom said...

it is hardly contrary - it promotes encapsulation, modularization, and least privilege.

Maintaining the grants is a POSITIVE aspect: you'll have to state why you need such and such a privilege, and it can be recorded and documented. Auditors can inspect this and see what access you have to the data.

It is a really good idea, in fact, maybe more than one schema for code to give even finer grained control over the grants.

Right now, developing in the same schema, you could drop the table, truncate it, perform any DML on it, alter it in any way you see fit. You will (fortunately) lose all of that when this really good idea is put in place.

It will ultimately lead to a more well documented system, with great security controls in place.

Will you lose some 'flexibility', sure, you will think so - but it is flexibility a development team should not necessarily have. There are too many things you can do wrong in this environment - with so many privileges.

In a short while, you'll be able to query the dictionary and see the least set of privileges you need for your application to execute (that is very good), you'll understand why you need them (because you had to ask for them, with cause) - which is very very good.

This is a very common implementation, getting more common today with the need to be more accountable - who does what and when, who CAN DO what and when and so on.


Remember the data is the data, the application is something that accesses data. They are not one and the same - they should be separate.
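A minimal sketch of the separation Tom describes, assuming hypothetical schema names APP_DATA and APP_CODE (passwords and privileges are illustrative, not a complete hardening recipe):

```sql
-- Data schema: owns the tables, holds no code.
CREATE USER app_data IDENTIFIED BY data_pw;
GRANT CREATE SESSION, CREATE TABLE TO app_data;

-- Code schema: owns the packages, gets only the privileges it can justify.
CREATE USER app_code IDENTIFIED BY code_pw;
GRANT CREATE SESSION, CREATE PROCEDURE TO app_code;

-- Each grant is an explicit, auditable statement of need.
-- app_code can read and modify rows in app_data.orders,
-- but cannot DROP, TRUNCATE, or ALTER the table.
GRANT SELECT, INSERT, UPDATE ON app_data.orders TO app_code;
```

The privileges the application actually holds are then visible in the dictionary (DBA_TAB_PRIVS), which is the "query the dictionary and see the least set of privileges" payoff Tom mentions.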


Comments

Another reason

Glen Smith, August 04, 2008 - 7:12 pm UTC

Often when I first create a database, it looks like this:

Application_1--------Database

They almost always will end up something like this:

Application_1-------------|
Application_2-------------Database
Application_3-------------|

Application_1 may still need full read/write access to everything in the database to operate. Application_2 may only need read access but needs that access to everything. Application_3 may only need partial access. I have also experienced situations where say changes to application_1 requires modifications to the underlying database that application_2 does not know about but may need to know the stuff at some point in the future. It would seem such a layering of schemas would provide architecture to support such needs.

Chris Gould, August 05, 2008 - 3:50 am UTC

Thanks for the very prompt response and the explanation of the benefits of this approach. It's an idea I'd not come across before, but I see the advantages in terms of security and audit.

Name resolution

Paul, August 05, 2008 - 9:29 am UTC

Great explanation.

How would you recommend referencing objects in the data schema from the application schema?

By explicit schema.object, private synonyms, views, etc?
Tom Kyte
August 05, 2008 - 9:54 am UTC

either

a) schema.object (my preference)
b) private synonyms

from a parsing and dependency perspective - (a) is best (b) second best.
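For illustration, the two referencing styles look like this (APP_DATA and ORDERS are hypothetical names):

```sql
-- (a) explicit schema.object: resolved directly at parse time,
--     cleanest for dependency tracking -- Tom's preference.
CREATE OR REPLACE PROCEDURE ship_order (p_id NUMBER) AS
BEGIN
   UPDATE app_data.orders SET status = 'SHIPPED' WHERE order_id = p_id;
END;
/

-- (b) a private synonym in the code schema: one extra
--     dictionary lookup when the reference is resolved.
CREATE SYNONYM orders FOR app_data.orders;
```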

Chris Gould, August 06, 2008 - 5:29 am UTC

Would you put any views into the "code" or "data" schema? To me, they're "code" and also they might contain calls to functions which would be in the "code" schema too. But they could reasonably be considered "data". I'd be interested to hear from anyone that's done this before in a production environment and how they divided object types between the two schemas.
Tom Kyte
August 06, 2008 - 8:58 am UTC

views could go in either place.


they make sense in both places.


a) the case for putting them into the application schema.

The view in question is a view of convenience, it joins N tables, selects out needed data, and formats a couple of columns. It is in place because that SQL is used here, there and everywhere. The application was granted access to the base tables WITHOUT the grant option/admin option - so the application cannot grant on this view to others (it is only usable by the application logic in the database) so you haven't opened up any security holes.

In this case, the view is very much like a subroutine, it accesses data the application schema was granted access to.


b) the case for putting them into the data schema.

The view in question is used to further restrict access to data - it includes a predicate for example that limits what data can be seen. There is a large base table and a given application schema only needs to see 1/2 of that data. So, you create a view that exposes the correct set of data, grant on the view to that application schema and the application schema uses the view as if it were a table. That application schema will not have access to the base table. You can use the "with check option" and such to enforce insert/update restrictions as well.

Fine grained access control could do this as well (DBMS_RLS)




So, the developers are free to create their views for (a)

And the data owners, the data secure-er's would create views for (b)
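A sketch of case (b), assuming a hypothetical ORDERS table in the data schema where a given application should see only one region:

```sql
-- In the data schema: expose only the EAST region, and use
-- WITH CHECK OPTION to stop inserts/updates that would
-- create rows the view itself could not see.
CREATE OR REPLACE VIEW east_orders AS
   SELECT order_id, customer_id, amount, region
     FROM orders
    WHERE region = 'EAST'
     WITH CHECK OPTION;

-- Grant on the view only; the application schema never
-- receives any privilege on the base table.
GRANT SELECT, INSERT, UPDATE ON east_orders TO app_code;
```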

RE: separating tables design from the application source

Duke Ganote, August 06, 2008 - 11:11 am UTC

Chris Gould, August 06, 2008 - 12:27 pm UTC

Duke - in the system you described in the other thread, what did you do with views?
Tom Kyte
August 06, 2008 - 12:41 pm UTC

you'd do what I described above.

You use views for two main purposes:

a) to protect data, you'd want those by the data itself - people access the simple single table (usually) view INSTEAD of the table.

b) to store a repeatedly used query, like a subroutine. This goes with the application - there are no security implications to go with it.

RE: separate code from app

Duke Ganote, August 06, 2008 - 1:12 pm UTC

Chris-- We did it like Tom indicates. Originally I built the views as verification tools (to specify and check the results of ETL transformations), so they were part of the app/ETL/code schema, rather than data schema.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:47458244605081
-- Duke

Just stick to it

Brett, August 06, 2008 - 1:41 pm UTC

I began to use this concept a few years ago. It really does work great, but you have to be diligent about it. The developers I worked with really hated it because they had to "wait for access" before creating code. Instead of viewing it as a positive, they complained to management that it "slowed productivity". I fought for a few days, but in the end I was told to fall in line and go back to the lazy way of implementing things. It wasn't too long before developers were accidentally dropping/truncating tables and I got to spend time recovering them. Thank God for the recycle bin and flashback queries.
Tom Kyte
August 06, 2008 - 2:10 pm UTC

The problem is one of configuration management really, when you get down to it.

Having source code control does this too (creates that perception of "slow"). In reality, everything goes faster - because everything is a bit more reliable, understandable, maintainable.

short term (*perceived*) pain for a few, long term gain for everyone

Application retirement

Glen, August 06, 2008 - 11:26 pm UTC

While I've rarely seen databases retired (though I have seen them re-modeled or expanded), I've seen applications get retired or simply go unused. Being able to basically just drop a schema to retire an app would seem to be a benefit. I've also come across situations where you might want to temporarily disable access to a certain application but not the data.
Tom Kyte
August 07, 2008 - 12:35 pm UTC

data lives forever

applications come
applications go


Chris Gould, August 07, 2008 - 10:46 am UTC

You'd have to think very carefully before dropping a code-schema when you retired an app. Think about the real-world situations where you start by having one code-schema for your application. Then you later develop another application, but some of the code is common with the original app (e.g. accessing customer details). Do you duplicate all the relevant packages into your new app schema? Or do you extract the common code and put it into a third "common" schema? Or does the new app just use the packages it wants from the original app's schema? The first two approaches mean you've got to change your original app too. So which do you think is the most likely to happen?
I don't really think there'd be many DBAs brave enough to drop or even temporarily disable one of the code schemas as you suggest.

Glen, August 07, 2008 - 3:33 pm UTC

Chris,

I find many DBAs are willing to do very crazy stuff.

Sorry I wasn't clear. I said BASICALLY drop (I thought that would be clear but I was wrong) not JUST drop or DROP <schema>. You're not going to just do this willy-nilly in the operational environment but will have an easy, well-known, documented and testable methodology to retire an application which will more thoroughly do the job (I can't tell you the number of times that potential security and data quality issues have arisen because an obsolete app was not fully eliminated). You would be able to monitor for possible dependencies (dependencies would be clear and concise, easily understood at a glance even for less technical people) and should have all known dependencies documented. Also, these dependencies indicate that you need to think about moving those shared objects to the data level or to something I'd call a shared "library" schema. You also may have a situation where they really aren't two applications. In any case, you know these things going in and already have them documented so you're not just futzing about.

Locking down application_schema1 does not necessarily mean that application_schema2 that uses code in the locked schema is also unusable (though you may fully lock it down instead of retiring an application), just that no one can use application_schema1.

Anyway, the best piece of advice I ever got was "your mileage will vary".

invoker rights

Chris Gould, August 10, 2008 - 8:39 am UTC

Tom -
can't the same control over privileges and access be achieved even if the packages and tables/views exist in the same schema provided all the packages use the invoker-rights access model?
Eg - all access to data is via packages (no stand-alone funcs/procs and no direct querying of tables/views). Tables and pkgs are in schema APP_OWNER to which the application may not connect directly. The application connects using a sep. account APP_USER to which grants on tables/views and the pkgs are given (either directly or via roles). Even though a package is in the same schema as tables, if there were a TRUNCATE statement in the pkg (as you mentioned in your argument) it would fail unless the invoking user (APP_USER) had explicit privilege to do this (and you can't grant TRUNCATE as a privilege anyway).
The APP_USER schema would only need to contain private synonyms to the packages it was going to access (unless using owner.object notation). You could even make sure the APP_USER itself couldn't modify synonyms (or anything) in its own schema by only granting it CREATE SESSION and no other system privs. Instead any synonyms it needed could be maintained by an admin account with CREATE ANY SYNONYM priv.
I've used this approach successfully at a couple of different sites.
Surely where fine-grained access control is required over data, it'd be better to use Oracle's in-built FGAC and not try and invent your own half-baked version with views in the way you're describing?
Tom Kyte
August 12, 2008 - 4:44 am UTC

you use invokers rights very very very sparingly - they are appropriate for utilities - but not for real code. They make plsql behave like a pro*c application that asks the user for their username and password. the sql executes as the user, the invoker.

I would not even consider using them for any real code in general, they are very very very special purpose.

You want to do the separation this way - tables over there, code over here. If you follow my personal "best practice", the words select, insert, update, delete, merge never would appear in your client code - ever. Only "declare...", "begin ...", "call ..." would and they would be calling explicit transactional API's that do the right thing in the database

client is granted access to appropriate api's
api's are granted appropriate access to data
data is protected by multiple layers.
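As a sketch of that layering (all names hypothetical), the client account would be granted EXECUTE on a definer-rights transactional API and nothing else:

```sql
-- In the code schema: a definer-rights package (the default),
-- so its SQL runs with the package owner's privileges.
CREATE OR REPLACE PACKAGE order_api AS
   PROCEDURE place_order (p_customer_id NUMBER, p_amount NUMBER);
END order_api;
/
CREATE OR REPLACE PACKAGE BODY order_api AS
   PROCEDURE place_order (p_customer_id NUMBER, p_amount NUMBER) IS
   BEGIN
      INSERT INTO app_data.orders (customer_id, amount, status)
      VALUES (p_customer_id, p_amount, 'NEW');
   END place_order;
END order_api;
/

-- The client gets EXECUTE only -- no table access at all.
GRANT EXECUTE ON order_api TO app_user;

-- Client code contains no select/insert/update/delete, just the call:
BEGIN
   order_api.place_order(p_customer_id => 42, p_amount => 99.95);
END;
/
```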

re: invoker rights

Stew Ashton, August 12, 2008 - 4:16 am UTC


(Thanks to Chris for opening this thread, and as usual to Tom for being so clear, concise and concrete.)

@Chris: the "invoker rights" model doesn't protect the data from arbitrary or accidental changes by the SQL developers. It also seems a bit more complex to test properly during development.

@Tom: based on various answers of yours, I've been recommending this setup :
- a DATA schema
- one or more APP schemas as stated above
- roles with execute privileges on APP packages
- USER schemas with assigned roles
- no table APIs (they nullify this layered approach).

Secure the data from the app, secure the app from the user.

Is this a good recommendation? Anything to add?
Tom Kyte
August 13, 2008 - 4:08 am UTC

just documentation for each privilege between the layers and why it was granted, so in the future - you know the justification and the auditors can be told why something is so...

invoker rights

Chris Gould, August 12, 2008 - 7:19 am UTC

Sorry, I didn't make it clear that I already follow the principles of using a PL/SQL API-layer and my application code only contains calls to packages and no SELECT/INSERT/UPDATE/DELETE statements.
I've also re-read the chapter on Invoker Rights in your book "Expert Oracle" (though my edition only covers up to 8.1.7 - is there a newer edition available?) and I understand that a lot of information about DML statements in definer-rights packages is gathered at compile-time whereas with invoker-rights this couldn't be determined until run-time. So it means there's more overhead in working this out when the statement executes and additionally the objects referred to in the DML may vary from user to user, depending on the particular environment.
Are these the main two reasons why you emphasise so strongly that invoker-rights should generally not be used?

Tom Kyte
August 13, 2008 - 4:24 am UTC

Say you have procedure P

Procedure P is invokers rights.


I run it
You run it


Procedure P has a query "select * from t"


Now, both you and I need access to T, WE (our accounts) need select on T. And in fact, there might be a TKYTE.T and a YOU.T - we might actually be accessing TWO DIFFERENT TABLE T's. It gets to be quite a mess.

You lost the security aspects of stored procedures - the owner of the procedure needs access to SOME table T, you need access to SOME table T, I need access to SOME table T.

It should be that the procedure owner needs access to T, we need access to the procedure - period.

invokers rights are great for utilities (save this table to a CSV file - it would use dynamic sql, the dynamic sql would execute using YOUR privs - not the procedure owner, so the procedure can save ANY table without the procedure owner having access to ANY OF THE TABLES - that is nice, and secure)....

invokers rights are not useful for "real code used by the application to do transactions"

you want the end users NOT to have access to tables, but only to API's that can change tables...
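A sketch of the kind of utility Tom means, using AUTHID CURRENT_USER so the dynamic SQL executes with the caller's privileges (the CSV-writing logic is elided; the name is illustrative):

```sql
-- Invoker-rights utility: the dynamic "select *" runs as the
-- CALLER, so the utility's owner needs access to no tables at all.
CREATE OR REPLACE PROCEDURE dump_table (p_table IN VARCHAR2)
   AUTHID CURRENT_USER
AS
   l_cursor SYS_REFCURSOR;
BEGIN
   -- DBMS_ASSERT guards the concatenated identifier against injection.
   OPEN l_cursor FOR
      'select * from ' || DBMS_ASSERT.SQL_OBJECT_NAME(p_table);
   -- ... fetch rows and write them out as CSV ...
   CLOSE l_cursor;
END;
/
```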

Invoker rights

Mike, August 12, 2008 - 8:01 am UTC

I agree with Tom - invoker rights are most useful for utilities, not in general. For example, we have a package (in a 'utility' schema) that creates new date-based range partitions for a given schema/table. The code that issues the DDL must be in a call path that maintains invokers rights, so it can do this on behalf of the calling schema.

But for a typical application design, how would you propose using invokers rights and keeping control? If you did this:
User U
--> calls package A.P1 (which uses invokers rights)
--> which performs SQL on table A.T1
then user U must have rights to perform SQL on A.T1 - which means he/she/it can do things like DELETE FROM A.T1; with or without your application code.

invoker rights

Chris Gould, August 13, 2008 - 7:57 am UTC

<quote>
.. user U must have rights to perform SQL on A.T1 - which means he/she/it can do things like
DELETE FROM A.T1; with or without your application code.
</quote>
Mike - thanks : this makes it perfectly clear.

Code/Data separation, multiple simultaneous versions.

Daniel, August 23, 2010 - 12:18 am UTC

Hello Tom.

Keeping data and code separated through the use of different schemas as discussed seems eminently sensible.

I wonder how best to scale it to a situation where there are multiple versions of the application code and multiple independent sets of data at different versions.

For example, we currently have schemas cust0 through custN. Each of these 150-odd schemas is at one of about 20 'versions' ie slightly different table structures etc. In addition each schema has the appropriate version of application code compiled into it.

I would like to separate the data from the code. However, having to have a customer app version schema dedicated to each individual customer data schema (ie 300 schemas in total) doesn't seem like a big improvement.

Is it possible to have custData0 .. custData149 and appVer0 .. appVer19 schemas and have client calls on behalf of customer X (running version Y) go through appVerY using custDataX?

I'd thought that compiling everything in appVerN with invoker rights might be the way to achieve this... I'll take your word that it's likely a path to madness.


For context this is an N-tier app with all db connections made as custN.
Tom Kyte
August 26, 2010 - 9:54 am UTC

... I wonder how best to scale it to a situation where there are multiple versions
of the application code and multiple independent sets of data at different
versions. ...

multiple versions before 11g - indicate multiple databases
In 11g, there is an opportunity to use edition based redefinition

see January - June 2010 articles:
https://asktom.oracle.com/magazine-archive.htm


Having schemas cust0..custN is a non-starter with me, I would (have, will continue to do so) scrap it and move to a single schema. Standardize - all for one, one for all - the way software as a service is to run. Having 20 different 'versions' is just untenable to me. In addition to the heinous management issues - you are flooding the shared pool with 150 copies of basically the same sql that is different (but only because of the schema name). It is just about the wrongest approach you can take.


don't even go down the invokers rights path - it is wholly inappropriate for that. invokers rights is good for writing an infrequently used utility program (like a plsql routine to accept a query as input - parse as invoker - and print contents to a file - that is a good invokers rights routine - not too many other things are)
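A minimal edition-based redefinition sketch (11g and later; edition and schema names are hypothetical, and the owning user must first be editions-enabled):

```sql
-- Create a new edition as a child of the default base edition.
CREATE EDITION v2 AS CHILD OF ora$base;

-- Enable the application owner to hold editioned objects,
-- and allow it to use the new edition.
ALTER USER app_code ENABLE EDITIONS;
GRANT USE ON EDITION v2 TO app_code;

-- Each session picks the code version it sees; editionable objects
-- (packages, views, synonyms) can differ between editions while
-- the tables underneath stay shared.
ALTER SESSION SET EDITION = v2;
```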


Daniel, September 18, 2010 - 11:16 pm UTC

Thanks for your response, Tom.

Yes, we're using 11g R2. EBR looks interesting. Having a single active edition per session seems as if it would limit its usefulness though.

Independently versioning different modules of the system and picking-and-mixing them for an individual customer is a useful capability.

I guess with EBR (using one schema, in contrast to multiple 'version' schemas) if
- cust1 and cust2 both used modules A and B and
- cust1 wanted to (only) upgrade to A' and
- cust2 wanted to (only) upgrade to B'
you'd need a new edition providing A,A',B and B' and need to rewrite both customers client applications. And have custN roles that prevent the client application from using the wrong version of a module.


Anyway, assuming a single data schema for all customers and EBR...

Migrating shouldn't be too hard if we add the primary key of the 'customer' (currently a table with a single row) to every table's pk and then use that to create a VPD. And partitioning tables by that customer pk would get us back to effectively the same physical structure. Customers do have wildly different data in some tables though, so a single execution plan across all customers might not be as efficient.

My main concern is longevity of editions. There are non-technical reasons why customers aren't immediately upgraded to the latest versions. Is it feasible to continue using an old edition, with cross-edition triggers in place, for extended periods (eg months) or are they only intended for a few hours/days during the upgrade window?

Are there serious performance implications of running old editions under multiple layers of cross edition triggers?
I expect a percentage slow-down to 5 significant figures based upon the extensive information I've given you. :)

Thanks.
Tom Kyte
September 20, 2010 - 1:56 pm UTC

... My main concern is longevity of editions. There are non-technical reasons why
customers aren't immediately upgraded to the latest versions. Is it feasible
to continue using an old edition, with cross-edition triggers in place, for
extended periods (eg months) or are they only intended for a few hours/days
during the upgrade window?
...

it would be feasible, but I personally would not recommend it. I'm definitely a "one for all, all for one" type of guy. My very strong opinion is that you are running a 'SaaS' software as a service deal here - and the customer gets an interface to the data - but the underlying software is your job - they get the upgrade when everyone else gets the upgrade.


The problem with running different editions long term would be similar to the problems of invokers rights - it is a mess when everyone is running their own stuff (trying to figure out what is happening) and you will tend to flood the shared pool with multiple versions of code.

