thanks
Sokrates, March 26, 2014 - 11:07 am UTC
Thanks for answering.
... not at this time ... doesn't sound too bad.
A reader, March 26, 2014 - 3:08 pm UTC
From day 1, Sybase, DB2, and a few others have had this architecture (one instance, multiple databases). How is Oracle's Multitenant option different from their offerings?
March 27, 2014 - 10:05 am UTC
Here is one big difference
Could you, for example, have multiple installs of PeopleSoft in SQL Server? In order to do that you would need N accounts (where N is the number of PeopleSoft installs you want), all with the same name but with totally different sets of grants and passwords.
Say you are hosting software for companies. You need to keep their data separate; you even need to keep their DBAs separate. But you would like to consolidate. And every time you create a database for them, they create a user "ADMIN" and give it lots of privileges. How could you install them five times in a single SQL Server, DB2, or any of the others - keeping each of the ADMIN accounts separate, with five different passwords, five different sets of grants, and no ability to use any of the other databases inside that instance?
In short, how can you consolidate any real application - installing multiple copies of its database in the same instance while preserving the physical isolation?
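The consolidation scenario above can be sketched in SQL - a minimal sketch assuming a 12c CDB; the PDB names, admin passwords, and file name conversions here are made up for illustration:

```sql
-- Two PDBs in one CDB, each hosting its own "ADMIN" user.
create pluggable database customer1
  admin user pdbadmin identified by pw1
  file_name_convert = ('/pdbseed/', '/customer1/');

create pluggable database customer2
  admin user pdbadmin identified by pw2
  file_name_convert = ('/pdbseed/', '/customer2/');

alter pluggable database customer1 open;
alter pluggable database customer2 open;

-- Each PDB gets its own local ADMIN account, with its own
-- password and grants; neither can see the other PDB at all.
alter session set container = customer1;
create user admin identified by secret1;
grant dba to admin;

alter session set container = customer2;
create user admin identified by secret2;
grant dba to admin;
```

The same pattern repeats for as many application copies as you need to consolidate.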
CDB redo and PDB recovery
Sokrates, March 28, 2014 - 9:25 am UTC
... Physical standby works by shipping redo from one site to another. There is no differentiation of the redo for pdb-1 versus pdb-2. They are same as far as the redo log buffer and lgwr are concerned. LGWR will just ship the redo. ...
But I can back up, and especially recover, at the PDB level as well?
And does that recovery have to use the CDB-level log stream (because there is only a CDB-level log stream)?
What would be the principal difference between an
rman recover
and a
standby database recover (alter database recover [managed] standby database ...)?
March 28, 2014 - 11:35 am UTC
we can recover at a PDB level because we can recover
a) individual blocks
b) files
c) tablespaces
d) sets of tablespaces
That is all about redo application - not redo generation, storage, or archival. You cannot have one PDB in archivelog mode and another not.
lgwr just flings all redo over the network by design in physical Data Guard. It isn't really analyzing or looking at the redo data; it just writes it.
So ALL redo is sent over the network.
Sure, on the standby you could choose not to have various datafiles present - but the redo would still be generated, sent, and an attempt made to apply it. There is no way to stop the redo - that is the crux of the issue.
Just as you cannot have some tablespaces in archivelog mode and some not, you cannot have some PDBs in standby and some not. The redo will be generated, the redo will be transmitted, the redo will be placed into the standby redo log, and the redo will be attempted to be applied.
It has nothing to do with recovery; it is all about redo generation, shipping, and storage. All of that has to take place regardless.
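For completeness, PDB-level recovery from RMAN looks roughly like this - a sketch assuming a 12c CDB with a PDB named pdb1 and existing backups:

```sql
RMAN> alter pluggable database pdb1 close;
RMAN> restore pluggable database pdb1;
RMAN> recover pluggable database pdb1;
RMAN> alter pluggable database pdb1 open;
```

Note that the redo RECOVER applies still comes from the single CDB-wide archived log stream - there is no per-PDB log stream.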
A reader, March 28, 2014 - 5:37 pm UTC
Dear Sir,
Glad to see you back!!!
In your above response you said:
"Just like you cannot have some tablespaces in archivelog mode and some not"
But we can create a tablespace with the nologging option? Or did I misunderstand your comment?
Regards
March 31, 2014 - 9:58 am UTC
creating a tablespace in nologging mode simply means that segments you create in that tablespace have the nologging attribute by default, which makes them candidates for not generating redo in response to *direct path operations* only.
so for example,
create table t as select * from x;
will bypass UNDO and *may* bypass redo IF the tablespace that table T is in was defined as "nologging"; however
insert into t .... values ....;
insert into t select * from x;
will *always* generate redo. And note that NOLOGGING *might* bypass redo - then again, it might not (if the DBA has set force logging to true for the database).
nologging on a tablespace DOES NOT disable logging for that tablespace. It simply adds "nologging" by default to segments created AFTER the attribute is set (it has no effect on already existing segments) and allows those segments to MAYBE bypass redo generation for direct path operations only.
that tablespace is still in "archivelog" mode.
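The behavior above can be demonstrated with a quick sketch - the tablespace and table names here are made up, and the datafile sizing will vary by system:

```sql
create tablespace nolog_ts datafile size 50m nologging;

-- A segment created here inherits NOLOGGING by default...
create table t (x int) tablespace nolog_ts;
select logging from user_tables where table_name = 'T';   -- shows NO

-- ...but conventional DML against it still generates redo:
insert into t values (1);
commit;
```

Only direct path operations against that segment (create table as select, insert /*+ APPEND */, direct path loads) are candidates for skipping redo.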
A reader, April 01, 2014 - 5:26 pm UTC
thanks for your reply!!
Just to clarify whether I understand correctly or not.
Say T is a tablespace created with nologging, and I create table t1 in T with "create table t1 as select * from x;" (a direct path operation) - this will bypass redo and undo - and any subsequent insert like "insert into t1 select * from x;" (without direct path) will always generate redo and undo.
If the above is correct, then the initial creation of a segment with a direct path operation will bypass redo?
Thanks in Advance.
April 02, 2014 - 5:31 am UTC
assume the database is not in "force logging" mode.
Then, if the table is nologging, no redo will be generated for the create table as select (well, some will, but only for the data dictionary modifications).
If a table is nologging, no redo will be generated for direct path loads into that table.
Insert, update, delete, and merge into a normal table will ALWAYS generate redo - no matter what, under all circumstances, regardless of the nologging setting.
I don't know what the last question ("during initial creation of segment with direct operation will bypass redo?") is trying to say....
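One way to see the difference for yourself is to watch the session statistic 'redo size' before and after each operation - a sketch assuming you have select access to v$mystat and v$statname; exact numbers will vary from system to system:

```sql
-- Redo generated so far by this session
select b.value as redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

-- Direct path create of a NOLOGGING table: redo size barely moves
create table t2 nologging as select * from all_objects;

-- Conventional insert into the same table: redo size jumps
insert into t2 select * from all_objects;
commit;
```

Re-run the first query after each step and compare the deltas.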
A reader, April 02, 2014 - 4:55 pm UTC
Thank you Tom!!!
question on memory manageability
Sita, July 08, 2014 - 10:16 pm UTC
Hi Tom,
With the multitenant option, Oracle memory (SGA/PGA) is controlled at the CDB level. The question is, how does one control the memory for the PDB databases?
Memory requirements (like SGA/PGA) could be different for each PDB running in the CDB.
1) How does Oracle control this memory at the CDB level?
2) Can this be controlled by DBAs?
3) Are there any shortcomings or pitfalls with the multitenant option?
Thanks.
Alexander, June 29, 2016 - 3:36 pm UTC
Hi guys,
I'd like to get your thoughts on a situation we are thinking of using multitenancy for. We have a portal application that is comprised of many APIs. We also have a process to essentially clone our network so developers can test changes to all the various systems and software throughout our network. This requires there to be a database for the portal app for this to work, and could mean we would need to fire up 5, 10, or 100 at a time, depending on how many people are testing. So I thought PDBs would be the way to go for this.
My question is, would you recommend cloning directly from production? Alternatively, I was thinking we could duplicate the database to the host running the PDBs and clone from that CDB. I've never used this option, so I'm not even sure what is possible. I also think option 1 would require us to license production for multitenant. Then again it's Exadata, so we might already have it.
Thanks.
June 30, 2016 - 1:40 am UTC
My inclination would be to clone from production to another environment, and then take (pluggable) snapshots from that environment... but that's simply because I'm always paranoid about keeping production environments safe from a security and resource consumption perspective.
Thinly provisioned PDBs sound like a good fit for your use case here - but yes, you'd be up for a multitenant license in this instance.
If you've got stacks of RAM, you could use storage-level tech to do the snapshot, but then it's a new database instance for each snapshot (hence the RAM requirement). Similarly, you could look at our CloneDB, which is database-level snapshotting (still with an instance per cloned snapshot).
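A thin snapshot clone of a "master" PDB can be taken in one statement - a sketch assuming the PDB names are yours and that the underlying storage supports snapshot copies (e.g. ACFS, or direct NFS with CloneDB):

```sql
-- Thin-provisioned clone: shares unchanged blocks with the master
create pluggable database dev01 from master_pdb snapshot copy;
alter pluggable database dev01 open;
```

Repeating this per developer gives you the 5, 10, or 100 test copies without duplicating the full datafiles each time.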
Cross container operations
Rajeshwaran Jeyabal, June 10, 2019 - 1:27 pm UTC
Team,
Was reading about Cross container operations from Concepts guide
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-05B15BE7-D8CD-4D8C-A33F-38C1BB09D893
Here is a test case from a local database running on 12.2 - but it keeps failing.
demo@PDB1> select count(*) from emp ;
COUNT(*)
----------
14
demo@PDB1> conn sys/Password-1@ora12cr2 as sysdba
Connected.
sys@ORA12CR2> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
sys@ORA12CR2> select * from containers(demo.emp) t1 where t1.con_id = 3 ;
select * from containers(demo.emp) t1 where t1.con_id = 3
*
ERROR at line 1:
ORA-00942: table or view does not exist
sys@ORA12CR2> update containers(demo.emp) t1
2 set t1.ename = 'XXX'
3 where t1.con_id = 3 ;
update containers(demo.emp) t1
*
ERROR at line 1:
ORA-00942: table or view does not exist
Kindly help us understand where we are going wrong here.
June 11, 2019 - 6:08 am UTC
Cross container operations
Rajeshwaran Jeyabal, June 11, 2019 - 2:34 pm UTC
Thanks for those nice examples.
but I am still unable to get this answered myself.
sys@ORA12CR2> select * from containers(demo.emp) t1 where t1.con_id = 3 ;
select * from containers(demo.emp) t1 where t1.con_id = 3
*
ERROR at line 1:
ORA-00942: table or view does not exist
sys@ORA12CR2> update containers(demo.emp) t1
2 set t1.ename = 'XXX'
3 where t1.con_id = 3 ;
update containers(demo.emp) t1
*
ERROR at line 1:
ORA-00942: table or view does not exist
I was doing the above select and update from CDB$ROOT with a common user account (SYS) - but it still fails. Could you please clarify?
Cross container operations
Rajeshwaran Jeyabal, June 11, 2019 - 3:12 pm UTC
thanks - re-reading the above link helped again.
In the PDB, I created a view owned by the common user.
In the CDB root, I created an empty table (a clone of the table owned by the local user in the PDB) for the common user.
Then the cross-container query worked perfectly.
c##rajesh@ORA12CR2> conn c##rajesh/oracle@pdb1
Connected.
c##rajesh@PDB1> desc emp
ERROR:
ORA-04043: object emp does not exist
c##rajesh@PDB1> create or replace view emp as select * from demo.emp;
View created.
c##rajesh@PDB1>
c##rajesh@PDB1> conn c##rajesh/oracle@ora12cr2
Connected.
c##rajesh@ORA12CR2> CREATE TABLE EMP
2 ( EMPNO NUMBER(4,0),
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4,0),
6 HIREDATE DATE,
7 SAL NUMBER(7,2),
8 COMM NUMBER(7,2),
9 DEPTNO NUMBER(2,0)
10 ) ;
Table created.
c##rajesh@ORA12CR2> select * from containers(emp);
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO CON_ID
---------- ---------- --------- ---------- ----------- ---------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-1980 800 20 3
7499 ALLEN SALESMAN 7698 20-FEB-1981 1600 300 30 3
7521 WARD SALESMAN 7698 22-FEB-1981 1250 500 30 3
7566 JONES MANAGER 7839 02-APR-1981 2975 20 3
7654 MARTIN SALESMAN 7698 28-SEP-1981 1250 1400 30 3
7698 BLAKE MANAGER 7839 01-MAY-1981 2850 30 3
7782 CLARK MANAGER 7839 09-JUN-1981 2450 10 3
7788 SCOTT ANALYST 7566 19-APR-0087 3000 20 3
7839 KING PRESIDENT 17-NOV-1981 5000 10 3
7844 TURNER SALESMAN 7698 08-SEP-1981 1500 0 30 3
7876 ADAMS CLERK 7788 23-MAY-0087 1100 20 3
7900 JAMES CLERK 7698 03-DEC-1981 950 30 3
7902 FORD ANALYST 7566 03-DEC-1981 3000 20 3
7934 MILLER CLERK 7782 23-JAN-1982 1300 10 3
14 rows selected.