setting compatible parameter
A reader, November 27, 2003 - 9:33 am UTC
Hi
We are using 9.2.0.1 but compatible is set to 9.0.0, do we have to set it to 9.2.0? Any gains?
November 27, 2003 - 11:03 am UTC
compatible tells us what features we are allowed to use that affect how stuff is stored on disk.
you do not "have" to set it.
you might find that some new features are not available to you (you'll get an error message to that effect)
note that if you set it up, you in general cannot set it back (it modifies how we store, persist data on disk -- this is not reversible).... SO, test it first on a test system.
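For illustration, checking and raising the setting looks something like this (a sketch only -- compatible is a static parameter, so it must be changed in the spfile or init.ora and the instance restarted):

idle> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      9.0.0

idle> alter system set compatible='9.2.0' scope=spfile;
idle> shutdown immediate
idle> startup

And again: once the database has been opened with the higher setting, there is no going back.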
setting compatible parameter on Oracle 10g
Vladimir Rojas, May 09, 2005 - 2:12 pm UTC
Hi Tom,
Can I have an Oracle 10g instance with the compatible parameter set to 9.2.0.0? What is the process to do that? I tried when creating the database instance, but without success.
Thanks and Regards
Vlad
May 09, 2005 - 3:59 pm UTC
yes, but why? what is the goal here?
you would have to set it in the parameter file before creating the database.
idle> startup nomount;
ORACLE instance started.
Total System Global Area 113246208 bytes
Fixed Size 778032 bytes
Variable Size 61874384 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes
idle> create database;
Database created.
idle> !cat initXXX.ora
db_name=XXX
compatible=9.2.0.0
idle> show parameter compat
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      9.2.0.0
Compatible Param
Himanshu, May 25, 2005 - 11:31 am UTC
Hi,
Could you please confirm whether the compatible parameter needs to be set to the full 9.2.0.x release, or only to 9.2.0, for all subsequent releases in the 9.2 series.
My guess is 9.2.0 should be sufficient.
Can you provide us with some matrix on this?
This is needed ASAP.
Regards
Himanshu
May 25, 2005 - 2:05 pm UTC
compatible should be set to the level you need it to be set to.
9.2.0 should be sufficient, there is no "matrix" I'm aware of.
Compatible
atul, June 14, 2005 - 1:22 am UTC
Hi tom,
What's the difference if we set compatible to 9.2.0.6 versus 9.2.0?
My database has been upgraded to 9.2.0.6.
Please suggest
June 14, 2005 - 9:48 am UTC
if you set it to 9.2.0.6 you cannot downgrade to 9.2.0.0 - it enables the use of new on-disk storage structures, and once we start using them you cannot downgrade anymore.
How Can I Resolve Version Problem
Muhammad Waseem Haroon, July 28, 2005 - 8:49 am UTC
How Can I Resolve an Error :
ORA-01571: redo version 9.2.0.0.0 incompatible with ORACLE version 9.0.0.0.0
Please Help
Thanks in Advance.
July 28, 2005 - 10:22 am UTC
one would need more information as to the steps leading up to getting this error.
[tkyte@desktop tkyte]$ oerr ora 1571
01571, 00000, "redo version %s incompatible with ORACLE version %s"
// *Cause: This software version can not read the current redo logs, and
// either crash recovery is required or there are offline database
// files that need media recovery. If a file name is listed then
// it needs media recovery.
// *Action: Shutdown and startup using the compatible software. Do any required
// media recovery, and open the database. Shutdown and then startup
// using current software. If the file is going to be dropped then
// take it offline with the DROP option to skip this check.
incompatibility
Muhammad Waseem Haroon, July 29, 2005 - 11:37 am UTC
Actually the case is that I have installed Oracle 9i version 9.0.0.0.0 (for testing), and after the installation I have restored the full offline backup of the version 9.2.0.0.0 (PROD) database.
I have issued the command "STARTUP".
The database mounts, but it does not open, and it gives me the error:
ORA-01571: redo version 9.2.0.0.0 incompatible with ORACLE version 9.0.0.0.0
I have received just this error; I haven't received any other error.
How can I resolve this incompatibility issue?
Thanks in Advance
Muhammad Waseem Haroon
mwaseem_haroon@hotmail.com
July 29, 2005 - 2:43 pm UTC
that is not going to work, think about it.
You have old software,
new datafiles
9.0 software cannot read 9.2 *stuff* and the stuff you have is 9.2
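The way out is to open the restored files with matching 9.2 software. A sketch (the home path here is hypothetical):

$ export ORACLE_HOME=/opt/oracle/product/9ir2    # a 9.2.0 install, not the 9.0 one
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
SQL> startup

In short: install the same (or a later) release as the one that produced the backup, then start the database from that home.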
compatible initialization parameter
A reader, October 14, 2005 - 5:20 pm UTC
Hi Tom,
If the compatibility parameter is set as:
COMPATIBLE=9.2.0
Does Oracle interpret it as 9.2.0.* or 9.2.0.1.0?
In other words, does COMPATIBLE=9.2.0 mean the lowest or the highest sub-release?
On a related topic, when you install the Oracle software on your systems, do you use the full release number in your ORACLE_HOME? For example, which of these would you use for ORACLE_HOME when installing 9.2.0.4.0
(a) /opt/oracle/product/9.2.0.4.0
(b) /opt/oracle/product/9.2.0
If (a) was used, the directory name would not match the version when the software is eventually upgraded to 9.2.0.7.0. So is it better to install the software into (a) instead of (b)?
October 14, 2005 - 6:13 pm UTC
It is "lowest"
As a test case - my compatible was 10.2.0
I set it to 10.2.0.1... and bounced.
Then I set it to 10.2.0 and get:
ORA-00201: control file version 10.2.0.1.0 incompatible with ORACLE version
10.2.0.0.0
ORA-00202: control file: '/home/ora10gr2/oradata/ora10gr2/control01.ctl'
(now, I'm off to fix that ;)
I don't like to use the full release number - just the major release - my homes look like ..../9ir1/, ..../9ir2/, .../10gr1/, and so on.
For the reason you pointed out: the directory would seem to say "I'm 9.2.0.4", but the reality might be that it is 9.2.0.7.
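Under that convention a product tree might look like this (hypothetical paths):

/opt/oracle/product/9ir2/     # currently patched to 9.2.0.7, installed as 9.2.0.4
/opt/oracle/product/10gr1/    # 10.1.0.x

The directory name stays truthful across patch sets; v$version, not the path, tells you the exact release.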
Compatible
A reader, March 30, 2006 - 3:30 am UTC
Tom,
We are going to replace our old 9i database server and move it to new hardware. I want to use this opportunity to upgrade the database to 10g at the same time. The problem is that the application vendor says the application is "not tested/certified with Oracle 10g" and recommends staying with 9i.
My question is:
Can I install the 10g software, set the compatible parameter to 9.2.0 before creating the database, and later, when the software vendor certifies the application, simply change the compatible parameter (avoiding a separate 9i to 10g upgrade this way)?
March 31, 2006 - 11:02 am UTC
You would have to work with your 3rd party software vendor.
compatible simply tells us what we can write to disk. We won't use new redo log formats, we won't write new block level information, we won't create datafiles that have anything 9ir2 didn't have.
But you would be entirely using the 10g software, optimizer, set of features.
compatible would not change this at all.
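If the goal is to make the 10g optimizer behave more like the 9i one while waiting for certification, that is a different parameter, optimizer_features_enable -- a sketch (and even this only makes 10g behave "a lot like", not identically to, 9i):

SQL> alter session set optimizer_features_enable = '9.2.0';

or instance-wide:

SQL> alter system set optimizer_features_enable = '9.2.0' scope=both;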
Snapshot in 9.2.0
Kuldeep, April 17, 2006 - 11:26 am UTC
Dear Tom,
I have upgraded my database from 8.1.7 to 9.2.0.7 on SunOS.
After the upgrade, refreshing my snapshots takes longer, and two snapshot refreshes do not complete at all and hang the session.
I am using the following command to refresh a snapshot:
dbms_snapshot.refresh(list=> 'censys_tns',rollback_seg=>'amarbs01')
I read somewhere that after an upgrade to Oracle 9i, re-creating the snapshot would help. I dropped the snapshot and am trying to re-create it, but the create command does not complete; it hangs forever.
-------------- snapshot create command --------------
CREATE MATERIALIZED VIEW OPS$VSIS.CENSYS_TNS
PCTFREE 0
PCTUSED 99
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 20480 K
NEXT 5120 K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT)
TABLESPACE REFERENCE
LOGGING
NOCACHE
NOPARALLEL
USING INDEX PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
BUFFER_POOL DEFAULT)
REFRESH FORCE
WITH ROWID
USING DEFAULT LOCAL ROLLBACK SEGMENT
DISABLE QUERY REWRITE AS
select /*+ FULL(tns) FULL(ord) */ distinct
tns.CUSTOMER cust_ref_no
,tns.SITE site_code
,decode(tns.SERVICE_TYPE,'BILLING',null,tns.SWITCH_ID) switch_id
,tns.PRIMARY public_tn
from TN_SYS@censys tns
,CENSYS_MAX_ORD ord
where tns.ORDER_NUM = ord.ORDER_NUM
and tns.SITE is not null;
-------------------------------------------------------
When I execute the snapshot's query it returns the result in a couple of minutes, and I am also able to create a table using this query (create table as select).
The compatible parameter set in my database is 9.2.0.
Would resetting compatible to 8.1.7 help in any way, since snapshots are deprecated in Oracle 9i?
If not, please help me overcome this problem.
Thanks and regards,
April 17, 2006 - 8:01 pm UTC
snapshots are NOT deprecated (materialized views are snapshots are materialized views). the "word" snapshot might be, but materialized views are the snapshots of old (in fact, there are synonyms making the two the same)
compatible does not affect how things work, it only affects what features might be used (preventing you from using features that would create on disk images that older releases would never be able to understand).
lose the hints, why have hints, just lose them.
And trace it, what is happening - if possible compare the trace to the 817 trace and see "what is different"
Let me ask you - when you tested this before upgrading, did you experience the same issues?
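A minimal tracing recipe for the refresh might look like this (a sketch; the trace file name under udump will differ):

SQL> alter session set timed_statistics = true;
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> exec dbms_snapshot.refresh( list => 'censys_tns' );
SQL> alter session set events '10046 trace name context off';
$ tkprof <udump>/ora_<spid>.trc refresh.txt sys=no

Level 12 captures wait events and bind values along with the SQL, which is what you want when comparing two releases.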
Snapshot refresh problem in oracle 9.2.0.7
Kuldeep, April 24, 2006 - 11:15 am UTC
Dear Tom,
Thanks for your reply.
I created the censys_tns snapshot after removing the hints. It took around 14 minutes. Thanks. Earlier I was not able to create it because it was taking too long and the session seemed to hang.
I had not tested any of the snapshots in the 8.1.7 database; all were running fine in 8.1.7. I do not have a trace from the 8.1.7 database for any snapshot refresh. But I have traced the development database's snapshot refresh, where it takes half the time of production. However, the parameters on the development database are different. These are as below:
On Development
--------------
SQL> show parameter block
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_buffers                     integer     200
db_block_checking                    boolean     FALSE
db_block_checksum                    boolean     TRUE
db_block_size                        integer     2048
db_file_multiblock_read_count        integer     8
On Production
--------------
SQL> show parameter block
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_buffers                     integer     76800
db_block_checking                    boolean     FALSE
db_block_checksum                    boolean     TRUE
db_block_size                        integer     8192
db_file_multiblock_read_count        integer     128
Trace on Development
====================
********************************************************************************
INSERT /*+ APPEND */ INTO "OPS$VSIS"."CENSYS_SO" select distinct
sot.CUSTOMER cust_ref_no
,tn.TN_AREA site_code
,tn.SWITCH_ID switch_id
,sot.TN public_tn
from SO_TN@censys sot
,TN@censys tn
where sot.TN = tn.TN
and sot.STATUS in (select LOOKUP_CODE
from VSIS_LOOKUP_CODES
where LOOKUP_DOMAIN = 'CENSYS_SO'
and LOOKUP_SUBCODE = 'STATUS')
and tn.TN_AREA is not null
and length(tn.TN_AREA) <= 6
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.09          0          0          0           0
Execute      2    347.87     961.25     192664    3228308     133009      452280
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    347.88     961.35     192664    3228308     133009      452280
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 53 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
1 LOAD AS SELECT
452280 SORT UNIQUE
470576 MERGE JOIN
1564373 SORT JOIN
1564373 TABLE ACCESS BY INDEX ROWID VSIS_LOOKUP_CODES
3128747 NESTED LOOPS
1564373 REMOTE
1564373 INDEX RANGE SCAN VLC_PK (object id 43189)
470576 SORT JOIN
471415 REMOTE
********************************************************************************
Trace on Production
===================
********************************************************************************
INSERT /*+ APPEND */ INTO "OPS$VSIS"."CENSYS_SO1" SELECT DISTINCT
sot.CUSTOMER cust_ref_no
,tn.TN_AREA site_code
,tn.SWITCH_ID switch_id
,sot.TN public_tn
FROM SO_TN@censys sot
,TN@censys tn
WHERE sot.TN = tn.TN
AND sot.STATUS IN (SELECT LOOKUP_CODE
FROM VSIS_LOOKUP_CODES
WHERE LOOKUP_DOMAIN = 'CENSYS_SO'
AND LOOKUP_SUBCODE = 'STATUS')
AND tn.TN_AREA IS NOT NULL
AND LENGTH(tn.TN_AREA) <= 6
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.30          0          1          0           0
Execute      1    694.49    1578.19       2591     470609         14      452300
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    694.52    1578.50       2591     470610         14      452300
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 34 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
1 LOAD AS SELECT
452300 SORT UNIQUE
470607 NESTED LOOPS
471476 NESTED LOOPS
1564285 REMOTE
471476 REMOTE
470607 TABLE ACCESS BY INDEX ROWID VSIS_LOOKUP_CODES
470607 INDEX UNIQUE SCAN VLC_PK (object id 179852)
********************************************************************************
I am also facing performance problems in my SQL after the upgrade to 9.2.0.7.
=========================================================================
Earlier the query below ran in just 15-20 seconds, but now it takes 4 minutes. Unfortunately, for this one too I have not captured the explain plan in the 8i db.
I have removed the hint from the query and analyzed all 6 tables and their 13 indexes using analyze table/index <name> compute statistics.
But this query is not using any index and is taking much time.
Please help.
SQL> select /* + FIRST_ROWS */ jc.job_no,
2 jc.phase_end_datetime,
3 jc.job_status,
4 olj.order_no,
5 co.customer_no,
6 rt.report_name,
7 rt.data_source_type,
8 substr(c.customer_name,1,15)
9 from ops$vsis.vsis_job_control jc,
10 ops$vsis.vsis_ord_line_jobs olj,
11 ops$vsis.vsis_cust_orders co,
12 ops$vsis.vsis_cust_order_details cod,
13 ops$vsis.vsis_customers c,
14 ops$vsis.vsis_report_templates rt
15 where jc.job_status = 'E'
16 and jc.phase_end_datetime > (sysdate-30)
17 and jc.job_no = olj.job_no
18 and olj.order_no = cod.order_no
19 and olj.order_line_no = cod.order_line_no
20 and olj.order_line_detail_no = cod.order_line_detail_no
21 and cod.order_no = co.order_no
22 and co.customer_no = c.customer_no
23 and cod.report_no = rt.report_no
24 and jc.job_line_no = (select /* + index (d VJC_VOLJ_FK_I) */ max(d.job_line_no)
25 from ops$vsis.vsis_job_control d
26 where jc.job_no = d.job_no);
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=21915 Card=1515 Bytes=234825)
1 0 FILTER
2 1 SORT (GROUP BY) (Cost=21915 Card=1515 Bytes=234825)
3 2 MERGE JOIN (Cost=21397 Card=43914 Bytes=6806670)
4 3 SORT (JOIN) (Cost=11500 Card=23530 Bytes=3482440)
5 4 MERGE JOIN (Cost=10994 Card=23530 Bytes=3482440)
6 5 SORT (JOIN) (Cost=10989 Card=23530 Bytes=2447120)
7 6 MERGE JOIN (Cost=10622 Card=23530 Bytes=2447120)
8 7 SORT (JOIN) (Cost=10616 Card=23530 Bytes=1694160)
9 8 MERGE JOIN (Cost=10351 Card=23530 Bytes=1694160)
10 9 MERGE JOIN (Cost=10264 Card=23530 Bytes=1364740)
11 10 SORT (JOIN) (Cost=10098 Card=23530 Bytes=941200)
12 11 MERGE JOIN (Cost=9939 Card=23530 Bytes=941200)
13 12 SORT (JOIN) (Cost=535 Card=23530 Bytes=470600)
14 13 TABLE ACCESS (FULL) OF 'VSIS_JOB_CONTROL' (Cost=440 Card=23530 Bytes=470600)
15 12 SORT (JOIN) (Cost=9404 Card=2270192 Bytes=45403840)
16 15 TABLE ACCESS (FULL) OF 'VSIS_ORD_LINE_JOBS' (Cost=393 Card=2270192 Bytes=45403840)
17 10 SORT (JOIN) (Cost=166 Card=42563 Bytes=766134)
18 17 TABLE ACCESS (FULL) OF 'VSIS_CUST_ORDER_DETAILS' (Cost=6 Card=42563 Bytes=766134)
19 9 SORT (JOIN) (Cost=88 Card=24777 Bytes=346878)
20 19 TABLE ACCESS (FULL) OF 'VSIS_CUST_ORDERS' (Cost=7 Card=24777 Bytes=346878)
21 7 SORT (JOIN) (Cost=7 Card=438 Bytes=14016)
22 21 TABLE ACCESS (FULL) OF 'VSIS_CUSTOMERS' (Cost=2 Card=438 Bytes=14016)
23 5 SORT (JOIN) (Cost=5 Card=52 Bytes=2288)
24 23 TABLE ACCESS (FULL) OF 'VSIS_REPORT_TEMPLATES' (Cost=2 Card=52 Bytes=2288)
25 3 SORT (JOIN) (Cost=9897 Card=4235330 Bytes=29647310)
26 25 TABLE ACCESS (FULL) OF 'VSIS_JOB_CONTROL' (Cost=440 Card=4235330 Bytes=29647310)
Plans and Compatible?
Mette Stephansen, January 17, 2008 - 4:42 am UTC
Hi Tom,
Q0) Will a change in compatible possibly change the execution plans?
We are on 9.2.0.6 and compatible 9.2.0.0.0.
Q1) They want to change compatible to 9.2.0, but to my knowledge this change will have no effect at all - is that correct? (As you stated in this thread, it will take the lowest possible sub-release, which is 9.2.0.0.0.)
Q2) But a change to 9.2.0.6.0 might affect plans and other stuff?
best regards
Mette
January 17, 2008 - 11:13 am UTC
compatible affects what we store on disk. optimizer features enable would be optimizer related.
i would not expect any plan changes, no.
Clarification
Mette, January 17, 2008 - 2:00 pm UTC
Thanks Tom,
Does that mean that the answers are:
Q0) No effects on plans
Q1) No difference in specifying 9.2.0 and 9.2.0.0.0, so no effect in changing.
Q2) Not plans, but possibly other stuff (ie things residing on disk)
regards
Mette
January 19, 2008 - 10:23 pm UTC
q0) i would not expect any, no.
q1) those would be the same, 9.2.0
q2) should only affect that which is stored on disk.
DBA_REGISTRY
mohammed Abuhamdieh, April 04, 2008 - 2:04 pm UTC
Greetings Mr.Kyte,
we have upgraded the database from 9.2.0.0.0 to 9.2.0.8.0; the compatible parameter is set to 9.2.0.0.0. But when I select from dba_registry it shows the "XML Database" component in an INVALID state. We tried to recompile all objects through "utlrp.sql" but nothing happened.
my question is: can setting the compatible parameter to 9.2.0.8.0 resolve this issue, and what is the risk of setting it to 9.2.0.8.0?
April 04, 2008 - 10:12 pm UTC
compatible won't do anything for you in this case. compatible only controls "things we store on disk"
please utilize support for install/configuration issues.
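For reference, a quick way to see which components are invalid after an upgrade (a sketch) is:

SQL> select comp_id, comp_name, version, status
  2    from dba_registry;

followed by re-running ?/rdbms/admin/utlrp.sql and checking the view again.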
Alexander, July 08, 2008 - 3:46 pm UTC
Hi,
Couple questions about this parameter.
1) When patching, should* we set this to our current patch version, or is it just for major releases?
2) Will it have the same effect as a major release and prevent us from rolling back the patch once it is set?
*By "should" I mean: if I want all the fixes in the patch to take effect, do I need to set this? E.g., I upgraded from 10.2.0.2 to 10.2.0.4 - do I now have to
alter system set compatible = '10.2.0.4' scope=spfile;
Thank you.
July 08, 2008 - 5:28 pm UTC
when you change this, there is no going back - it permits the use of disk storage structures that older releases did not understand. You would be allowing those structures to be created.
You need to set it "up" when you want to use a new feature that uses some new structure (eg: in 11gR1, you can have table compression enabled for conventional path operations - you need compatible set to 11.1 in order to use that new feature)
Otherwise, it is your call - do you want to enable these new features - doing so will make it not possible to downgrade - once you set it up and open up the database - there is no going back.
It is pretty much mostly a major release thing
When compatible may be incompatible
Mike, July 14, 2009 - 6:39 pm UTC
Hi Tom, thank you for taking your time to help all of us Oracle users!
Should sys objects stay the same between a 10.2.0.3 home's database with compatible = 10.2.0.0.0 and an 11.1.0.6 home's database with compatible = 10.2.0.0.0?
I ask because I've recently discovered that a vendor's application breaks when I test it on an 11.1.0.6 home's database configured with compatible = 10.2.0.0.0. The application works on a 10.2.0.3 home with the database configured with compatible = 10.2.0.0.0. I do realize this would be a very difficult task for the database software to achieve, but the documentation leads me to believe it should work, even without testing first. Any clarification as to when we should actually use this parameter will be appreciated!
Notes from 11g Reference Guide:
COMPATIBLE allows you to use a new release of Oracle, while at the same time
guaranteeing backward compatibility with an earlier release. This is helpful if it
becomes necessary to revert to the earlier release.
This parameter specifies the release with which Oracle must maintain compatibility. It
allows you to take advantage of the maintenance improvements of a new release
immediately in your production systems without testing the new functionality in your
environment. Some features of the release may be restricted.
The sys objects I found that break the vendor's application (there may be more) are:
-- Oracle 10g instance with compatible = 10.2.0.0.0
CREATE OR REPLACE
TYPE SYS.ODCICOLINFO AS object
(
TableSchema VARCHAR2(30),
TableName VARCHAR2(30),
ColName VARCHAR2(4000),
ColTypeName VARCHAR2(30),
ColTypeSchema VARCHAR2(30),
TablePartition VARCHAR2(30)
);
/
CREATE OR REPLACE
TYPE SYS.ODCIINDEXINFO AS object
(
IndexSchema VARCHAR2(30),
IndexName VARCHAR2(30),
IndexCols ODCIColInfoList,
IndexPartition VARCHAR2(30),
IndexInfoFlags NUMBER,
IndexParaDegree NUMBER
);
-- Oracle 11g with compatible set to 10.2.0.0.0
-- Note the additional attributes
CREATE OR REPLACE TYPE SYS.ODCIColInfo AS object
(
TableSchema VARCHAR2(30),
TableName VARCHAR2(30),
ColName VARCHAR2(4000),
ColTypeName VARCHAR2(30),
ColTypeSchema VARCHAR2(30),
TablePartition VARCHAR2(30),
ColInfoFlags NUMBER,
OrderByPosition NUMBER,
TablePartitionIden NUMBER,
TablePartitionTotal NUMBER
);
/
CREATE OR REPLACE TYPE SYS.ODCIIndexInfo AS object
(
IndexSchema VARCHAR2(30),
IndexName VARCHAR2(30),
IndexCols ODCIColInfoList,
IndexPartition VARCHAR2(30),
IndexInfoFlags NUMBER,
IndexParaDegree NUMBER,
IndexPartitionIden NUMBER,
IndexPartitionTotal NUMBER
);
Unfortunately the vendor will not support 11g for another year. I really wish to stay on the latest version. Is this a bug?
July 15, 2009 - 11:50 am UTC
... Should sys objects stay the same between a 10.2.0.3 home's database with compatible = 10.2.0.0.0 and an 11.1.0.6 home's database with compatible = 10.2.0.0.0? ...
no, compatible only controls what is written to disk.
You have compatible at 10.2
That means 11g will generate redo that a 10.2 database could process.
That means 11g will generate database blocks that 10.2 database could process (eg: new OLTP compressed blocks will NOT appear in that database - that feature will not be able to be used, since 10.2 cannot read that block format)
That means anything written to disk will be in a format the earlier release could read.
but the data dictionary will be definitely different between each release. We give you upgrade scripts to upgrade 10.2 to 11.1. We give you downgrade scripts to downgrade 11.1 to 10.2. But each release needs to have its dictionary format in place.
taking back the compatible parameter
muhammad, October 03, 2009 - 4:24 am UTC
Hi Tom,
We have a mission-critical application database which was upgraded to 10g with the compatible parameter still left at 9.2.0.5.
Why? Because there was SQL code embedded in the application front end that used specific hints for the 9.2.0.5 optimizer.
Later the vendor supplied 10g-compliant code; we tested it, and now we would like to change the parameter to 10g
for reasons like a) better performance, since the optimizer will choose better plans, and b) use of advanced features such as fast-start failover (FSFO).
The problem is that development is asking for a fallback procedure.
We do have a standby database for this. If we stop replication to that site after changing the parameter in production,
we can tell management that if they feel the need to fall back, they can move to the DR site, but with old data.
The question is: I know there is no falling back from this parameter. But in the real world it is hard to convince management,
and they would like a foolproof fallback procedure with much less downtime and with changes intact.
How can we do it?
October 07, 2009 - 4:02 pm UTC
... Why? Because there was SQL code embedded in the application front end that
used specific hints for the 9.2.0.5 optimizer. ...
too bad compatible has NOTHING at all to do with that. compatible will not affect the optimizer in the way you seem to think. it (compatible) controls what we write to disk - what format it is.
once you set compatible to X+1 from X, you cannot set it back to X. That is because it controls what we write to disk - not what the optimizer does. And if we write something that only 10g can read - we cannot - repeat CANNOT - go back to 9i. So, as soon as you open a database with compatible at X+1, it'll be X+1 and no going back to X - ever.
how can you do it? It is called testing, testing your application in test with compatible set higher.
COMPATIBLE=9.2.0.8 and AWR
Ishi Vora, December 07, 2009 - 3:01 am UTC
Hi Tom,
This question is for information purposes only. I would like to know if AWR will work fine in my 10.2.0.4 database even if the COMPATIBLE parameter value is set to 9.2.0.8?
Can you also point me to some documentation on OTN or Metalink which lists all the 10g features that require the COMPATIBLE parameter value to be 10.2.x.x?
Thanks for your help...
Cheers,
Ishi
December 07, 2009 - 2:58 pm UTC
compatible controls what is written to disk, how it is written - how the redo stream is formatted, how the database blocks are formatted.
Only features that change what is logged or what the structure of a block looks like would need compatible to be set to a certain level.
AWR is a repository, it doesn't change the way blocks are formatted on disk.
I'm not aware of a comprehensive list of features that require compatible to be set to a certain level. Basically - you would go to the feature in the documentation (you must be doing that anyway, you are learning about this new feature before you use it) and if it says "compatible must be x.y.z" then you have it.
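To check where you stand before trying a feature, query the parameter (a sketch):

SQL> select name, value from v$parameter where name = 'compatible';

If a feature needs a higher setting than you have, you will typically get an error such as ORA-00406 ("COMPATIBLE parameter needs to be <version> or greater") when you try to use it.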
Version difference
Niddhi, February 11, 2010 - 11:54 pm UTC
Hi,
This is the first time I am working with Solaris and installing Oracle on it.
When I give the command
STARTUP MOUNT
It gives me the following error:
ORA-00201: control file version 10.2.0.1.0 incompatible with ORACLE version 10.2.0.
Kindly let me know how to solve this as I am stuck with this for over a day now.
Thanks!!
February 16, 2010 - 11:29 am UTC
you chopped off the message, it would look in full like this:
ORA-00201: control file version 10.2.0.4.0 incompatible with ORACLE version
10.2.0.0.0
ORA-00202: control file: '/home/ora10gr2/oradata/ora10gr2/control01.ctl'
update your compatible init.ora parameter to say 10.2.0.4 instead of whatever it currently says.
sys%ORA10GR2> startup force pfile=initora10gr2.ora
ORACLE instance started.
Total System Global Area 536870912 bytes
Fixed Size 1268484 bytes
Variable Size 138413308 bytes
Database Buffers 390070272 bytes
Redo Buffers 7118848 bytes
ORA-00201: control file version 10.2.0.4.0 incompatible with ORACLE version
10.2.0.0.0
ORA-00202: control file: '/home/ora10gr2/oradata/ora10gr2/control01.ctl'
sys%ORA10GR2> !grep compatible initora10gr2.ora
*.compatible='10.2.0.'
sys%ORA10GR2> edit initora10gr2.ora
sys%ORA10GR2> !grep compatible initora10gr2.ora
*.compatible='10.2.0.4.0'
sys%ORA10GR2> startup force pfile=initora10gr2.ora
ORACLE instance started.
Total System Global Area 536870912 bytes
Fixed Size 1268484 bytes
Variable Size 230687996 bytes
Database Buffers 297795584 bytes
Redo Buffers 7118848 bytes
Database mounted.
Database opened.
sys%ORA10GR2>
COMPATIBLE parameter and optimizer plans
Arun Mathur, August 05, 2010 - 11:02 am UTC
Hi Tom,
We had an Oracle consultant review our database instances for any general recommendations. One item pointed out dealt with the compatible parameter. It's currently set to 9.2.0.0.0, and we are running 10.2.0.4. He mentioned that we could be shooting ourselves in the foot by forcing Oracle not to use the 10g optimizer. However, based on the documentation and this discussion, I am under the impression that modifying the compatible parameter does not influence the optimizer's behavior in any way. Our optimizer_features_enable parameter is currently 10.2.0.4. So, I'd like to conclude that we do not need to modify our compatible parameter in order to ensure our 10g databases use the 10g optimizer, and that we only need to modify the compatible parameter when it's required by a feature we're interested in using.
What do you think?
Thanks,
Arun
August 05, 2010 - 1:08 pm UTC
compatible isn't 'optimizer', it controls what format disk files are generated in. Right now you are making 10g write redo log files that look like 9i redo log. Data files are written in a format that 9i could read. Any change that took place to file formats is forbidden to be used by you right now.
optimizer features enable is all about the optimizer. It will make the 10g optimizer work "a lot like, but not exactly the same as" the 9i optimizer if you have it set to 9.x
You are correct in your observation.
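The configuration described above would show up like this (illustrative, not actual output; the values mirror those given in the question):

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      9.2.0.0.0

SQL> show parameter optimizer_features_enable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      10.2.0.4

That is, on-disk formats held at 9i, optimizer behavior at 10.2.0.4.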
Thanks for the quick response!
A reader, August 05, 2010 - 2:28 pm UTC
10g compatible variants
Alejandro Flores, November 01, 2010 - 11:42 am UTC
Hi Tom,
Is there any difference in database performance between setting compatible = 10.2.0 and compatible = 10.2.0.4?
Thank you in advance.
November 01, 2010 - 3:48 pm UTC
compatible isn't about performance inasmuch as it is about what FEATURES you may use.
No, there wouldn't really be a difference between the two above 'performance' wise.
Compatibility parameter during backup and restore
Pradeep, December 23, 2010 - 12:44 am UTC
Hi Tom,
I sincerely appreciate your responses on oracle issues.
I have one question regarding the compatible parameter.
We have taken a backup of a DB at version 9.2.0.8.0.
Now we are restoring it into version 10.1.0.2.0.
The error says the export version (9.2.0.8) is incompatible with the import version (10.1.0.2.0).
My question is whether we need to change the init.ora compatible parameter of the DB (at 10.1.0.2.0) to 9.2.0.8 so that it accepts the backup?
Thanks,
Pradeep
December 23, 2010 - 10:57 am UTC
you didn't take a backup! You created a logical copy, a dump, of a database - one that might be able to be imported - or not. it is certainly NOT a backup however, it is just a bunch of data you might be able to reload at some point in time.
(for example, if you used sys or sysdba to export AND you used consistent=yes, I seriously doubt you can import that data - since consistent=y and sysdba are incompatible.)
you do not need to change compatible at all - you just need to make sure the export comes from any version LESS THAN or EQUAL TO the version you are importing into.
And again - please do not call this a backup, you'll only confuse people. It is a dump, an export, a logical copy. It is NOT a backup. You cannot "restore" this data, you might be able to import it.
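The version rule in practice (a sketch; file names, credentials, and the home variables are placeholders): run exp from the source home and imp from the target home, so the lower version produces the dump:

$ $ORACLE_HOME_92/bin/exp system/<pwd> file=prod.dmp full=y consistent=y
$ $ORACLE_HOME_101/bin/imp system/<pwd> file=prod.dmp full=y ignore=y

A 9.2.0.8 dump loads into 10.1.0.2 because the export version is less than or equal to the import version; compatible never enters into it.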
thank you TOM
Pradeep, December 23, 2010 - 10:50 pm UTC
Thank you for the clarification, Tom.
I'll remember this.
The export is done by user 'system' and also imported by user 'system'.
Actually the export and import happen through an application GUI, so we don't really have an option of choosing the parameters (like consistent=y).
Thank you once again!!
Pradeep
December 24, 2010 - 12:15 pm UTC
hopefully they use consistent=y
otherwise, your dump is going to be logically corrupt and if you use database constraints - it won't import. if you do all constraints in the application - you just have garbage and you won't know it right away.
should i upgrade?
A, February 08, 2011 - 3:42 am UTC
Hello Tom,
On a 10.2.0.4 database where the compatible parameter is set to 10.2.0.3, will it behave as 10.2.0.3 or 10.2.0.4?
We are getting ORA-00600 errors on 10.2.0.3 but not on 10.2.0.4 (where the compatible parameter is set to 10.2.0.3). Oracle is suggesting we upgrade to 10.2.0.4.
We are not sure whether migrating to 10.2.0.4 will really solve the issue, as we are unable to reproduce it on 10.2.0.4 (compatible parameter = 10.2.0.3).
We have tried to simulate the same set of data/steps on both databases.
I know the question is quite vague but your thoughts please ...
February 09, 2011 - 7:22 am UTC
I cannot comment without knowing the background - I don't have visibility into your SR, and this little bit of detail isn't enough to go on.
Downgrade a database
A reader, June 12, 2011 - 6:48 am UTC
Hi,
I need to downgrade my oracle database from 10.2.0.3 to 10.2.0.2.
The closest instructions I can find is the 10g Release 2 (Downgrading a Database Back to the Previous Oracle Database Release) guide.
As for compatibility, it says "If the compatibility level of your release 10.2 database is 10.2.0 or higher, then you will not be able to downgrade."
SELECT name, value, description FROM v$parameter
WHERE name='compatible';
I get 10.2.0.2. Does this mean I won't be able to do this?
Am I looking at the right documentation ? I can't find anything else relating to what I want to do.
I'd appreciate your suggestions.
June 17, 2011 - 10:12 am UTC
You can downgrade - since you want to go to 10.2.0.2. The doc you read was talking about a major database downgrade - back to 10.1 from 10.2 - which you cannot do.
Please utilize support to get the procedure for the downgrade.
compatible parameter set to 10.2 in 11.1 DB
A reader, January 11, 2012 - 12:26 pm UTC
Hello,
I have a requirement to create a physical standby database (a second one - one already exists) for my 11.1 database. This database has compatible set to 10.2. When I use rman duplicate target database for standby from active database (dorecover nofilenamecheck), it gives me an error that the compatible parameter needs to be 11.1. Later I tried to set compatible to 10.2 in my rman script and Oracle said ORA-00400: invalid release value 10.2 for parameter compatible.
I already have a physical standby database for my primary, so I am wondering how that particular standby got created with compatible set to 10.2. Does the rman duplicate command not like the compatible parameter being set to a lower value?
Thanks,
January 11, 2012 - 3:32 pm UTC
your compatibles have to be the same in production and standby.
follow up,
A reader, January 11, 2012 - 5:07 pm UTC
Thanks. I want it to set to 10.2 on both prod and standby but the rman duplicate isn't liking that. Is there a workaround?
January 11, 2012 - 9:55 pm UTC
you cannot set the compatible DOWN in an instance, you can only set it UP.
It seems to want 11.1, 10.2 isn't going to cut it.
why do you want it to be 10.2?
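For completeness, if the decision is to move compatible up to 11.1 on both primary and standby, the change is made in the spfile and takes effect on restart - and it is irreversible (a SQL*Plus sketch; the value shown is illustrative, test on a non-production system first):

```sql
-- compatible can only be raised, never lowered.
ALTER SYSTEM SET compatible = '11.1.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```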
follow up,
A reader, January 12, 2012 - 11:22 am UTC
Well, the application team prefers to continue with 10.2.
I guess the option of rman duplicate is ruled out then, and we are left with taking a regular rman backup to tape/disk and restoring it on the server that hosts the standby database. This may need downtime until the backup/restore completes, or plenty of space to store archivelog files that can be applied during recovery (to avoid a long downtime).
Thanks,
Compatible parameter change in 11gR2
Chandan, May 23, 2012 - 5:45 am UTC
Hi Tom,
I have taken an export of the “xxx” schema from a 10.2.0.4.0 database installed on Windows 64-bit and imported it into an 11.2.0.2 database on a Solaris 64-bit box.
Now I am facing a performance issue on the Solaris box.
I have seen that many query plans are also different between the two environments.
Someone suggested that I change the compatible parameter to the older version, i.e. 10.2.0.4.0.
1. I want to know how true that is.
2. Is it possible to do so?
3. And will my problem be solved by this?
Regards,
Chandan
May 23, 2012 - 10:04 am UTC
compatible only restricts what new features you may use. for example, you cannot use advanced compression until you update your compatible.
compatible will not really affect query plans - optimizer_features_enable does that.
compatible should not have any noticeable effect on your overall performance.
You should be trying to isolate the root cause of your performance issues, then we can remedy them.
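One low-risk way to test that claim is to dial optimizer_features_enable back at the session level and re-run a problem query - if the 10.2-style plan comes back, the difference is optimizer behavior, not compatible (a sketch; the version string is illustrative):

```sql
-- Ask the optimizer to behave as it did in 10.2.0.4, for this session only.
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
-- Now re-run the slow query and compare the plan with the 11.2 default.
```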
compatible parameter
A reader, August 18, 2022 - 5:00 am UTC
Greetings!
Just want to clarify compatible vs. OFE.
1) We upgraded the database from 12.1.0.2 to 19.11.0.0 but left the compatible parameter as-is at 12.1.0.2 for about 3 months to test with the application. But even though the compatible parameter was not changed to 19c, we noticed that optimizer_features_enable defaulted to 19.1.0 after the upgrade.
Does this mean that queries will start using the 19c optimizer, and query plans are expected to change?
2) Now, after 3+ months of running with a compatible parameter of 12.1.0.2, we are ready to change compatible to 19.0.0.
Based on the previous reply in this thread:
>> compatible only restricts what new features you may use. for example, you cannot use advanced compression until you update your compatible.
>> compatible will not affect query plans really - optimizer_features_enable does that.
So, I should not expect any query plan or SQL execution plan changes because of changing compatible to 19c, correct?
Thanks for your time!
Thanks for your time!
August 23, 2022 - 10:36 pm UTC
Queries should not be impacted. Compatible is all about datafile headers/formats/feature availability etc.
Of course, if you then take advantage of them (e.g., say you used a new 19c datatype or facility that changes the structure of a table), you could expect that the optimizer might also react differently to that changed table.
Compatible parameter
A reader, August 24, 2022 - 5:51 am UTC
Thanks Connor
August 24, 2022 - 12:47 pm UTC
Glad we could help