What about not null
A reader, September 25, 2001 - 9:33 am UTC
Tom
I guess you forgot to mention the 'NOT NULL' constraint.
can you throw some more light on
v - view with check option
o - read only on a view
September 25, 2001 - 8:00 pm UTC
err -- no, i didn't mention "not null" for the same reason I didn't mention "check (x > 1)" -- it's just a specific instance of a check constraint:
ops$tkyte@ORA8I.WORLD> create table t ( x int not null );
Table created.
ops$tkyte@ORA8I.WORLD> select constraint_name, constraint_type from user_constraints;
CONSTRAINT_NAME C
------------------------------ -
SYS_C0044420 C
it is of type C.
This is the check constraint on a view:
ops$tkyte@ORA8I.WORLD> create view v as select * from t where x > 0 WITH CHECK OPTION;
View created.
ops$tkyte@ORA8I.WORLD> insert into v values ( 1 );
1 row created.
ops$tkyte@ORA8I.WORLD> insert into v values (-1);
insert into v values (-1)
*
ERROR at line 1:
ORA-01402: view WITH CHECK OPTION where-clause violation
If the row you insert/update would not be selected by the view, it violates the check option on the view and is rejected.
As for read only -- that's just a read-only view:
ops$tkyte@ORA8I.WORLD> create view v2 as select * from t with read only;
View created.
ops$tkyte@ORA8I.WORLD> select constraint_name, constraint_type from user_constraints;
CONSTRAINT_NAME C
------------------------------ -
SYS_C0044420 C
SYS_C0044421 V
SYS_C0044422 O
The 'O' constraint marks the view as one that cannot be updated.
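For example (a sketch -- the exact error differs by release; current releases raise ORA-42399, "cannot perform a DML operation on a read-only view"):

insert into v2 values ( 1 );
-- fails: V2 is read only, so all DML against it is rejected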
Check constraint VS not null - any difference in performance?
Andrew, September 25, 2001 - 12:50 pm UTC
SQL> create table T (col1 number not null, col2 number, col3 number);
Table created.
SQL> alter table T modify (col2 constraint t_col2_nn NoT NuLl);
Table altered.
SQL> alter table T add constraint t_col3_nn check (Col3 is Not Null /*my custom criteria*/);
Table altered.
SQL> column search_condition format a40
SQL> select constraint_type, constraint_name, search_condition
2 from user_constraints
3 where table_name = 'T';
C CONSTRAINT_NAME SEARCH_CONDITION
- ------------------------------ ----------------------------------------
C SYS_C00112155 COL1 IS NOT NULL
C T_COL2_NN COL2 IS NOT NULL
C T_COL3_NN Col3 is Not Null /*my custom criteria*/
Harold, February 19, 2003 - 11:51 am UTC
If I use this kind of constraint:
SYS_C00887 P ENABLED
on a table with 800,000 rows, will there be any performance problems? Or would it be better to create a PK on that table?
Thanks
February 19, 2003 - 3:06 pm UTC
that is a primary key.
If that is coming from user_constraints and the P is the constraint type -- that IS a primary key.
Harold, February 19, 2003 - 3:40 pm UTC
I know, but if I do this:
SYS_C00887 P ENABLED
alter table INISER.ACREENCIA drop primary key; /* this one is SYS_C00887 */
and create a new one with this:
create unique index INISER.ACREENCIA_I_NUMACRE on INISER.ACREENCIA (NUMACRE)
tablespace INISER
storage (initial 11800 K next 2960 K );
Will I get better performance?
February 19, 2003 - 4:28 pm UTC
maybe if you used 11801 on the initial (that is a joke).
No. You use a primary key for primary keys.
You use a unique constraint for unique columns.
You lose valuable metadata if you do not.
Under the covers, you will find an index in support of that primary key already.
primary key and not null
A reader, March 06, 2003 - 12:46 pm UTC
Hi tom,
please help me here.
/*****************************************/
12:45:34 SQL> create table t (id number not null primary key )
12:45:54 2 ;
Table created.
12:45:56 SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,SEARCH_CONDITION
12:46:00 2 from USER_CONSTRAINTS WHERE TABLE_NAME = 'T';
CONSTRAINT_NAME C TABLE_NAME SEARCH_CONDITION
------------------------------ - ------------------------------ --------------------------------
SYS_C0025737 C T "ID" IS NOT NULL
SYS_C0025738 P T
Elapsed: 00:00:00.00
12:46:01 SQL> desc t
Name Null? Type
----------------------------------------------------------------- -------- --------------------
ID NOT NULL NUMBER
12:46:07 SQL> alter table t drop CONSTRAINT SYS_C0025737;
Table altered.
Elapsed: 00:00:00.00
12:46:25 SQL> desc t
Name Null? Type
----------------------------------------------------------------- -------- --------------------
ID NOT NULL NUMBER
12:46:28 SQL> drop table t;
Table dropped.
Elapsed: 00:00:00.00
12:46:37 SQL> create table t (id number primary key);
Table created.
Elapsed: 00:00:00.00
12:46:51 SQL> desc t
Name Null? Type
----------------------------------------------------------------- -------- --------------------
ID NOT NULL NUMBER
12:46:53 SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,SEARCH_CONDITION
12:46:57 2 from USER_CONSTRAINTS WHERE TABLE_NAME = 'T';
CONSTRAINT_NAME C TABLE_NAME SEARCH_CONDITION
------------------------------ - ------------------------------ --------------------------------
SYS_C0025739 P T
then,
Q: what is the use of the not null constraint on a column
which participates in the primary key?
Thanks
March 06, 2003 - 3:43 pm UTC
all columns in a primary key are by definition "not nullable" -- it is the very definition of primary keyness.
so is it overhead here?
A reader, March 06, 2003 - 3:52 pm UTC
so in the 1st case I have put both constraints on the
column, whereas only the primary key is needed.
so is it now an overhead to check for not null even though it is a primary key, or is Oracle intelligent enough to know that and check it only once?
March 06, 2003 - 4:09 pm UTC
it'll do one or the other; it would not be good practice for you to do both -- just use the primary key. that is, after all, all that is needed.
Make View Read Only
George Fernandes, April 30, 2003 - 11:04 am UTC
Tom,
I have two tables, say T1 and T2, with views V1 and V2 on them. The view V1 is created 'WITH READ ONLY' (hence a record exists in DBA_CONSTRAINTS with CONSTRAINT_TYPE='O'), whereas the view V2 has the same definition as V1 but without 'WITH READ ONLY'. How can I alter view V2 without dropping it, so that it is similar to V1 ('WITH READ ONLY') and thereby a record exists in DBA_CONSTRAINTS for V2?
Please Help.
April 30, 2003 - 7:00 pm UTC
CREATE OR REPLACE view v2.....
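That is, just recreate the view in place with the clause added -- a sketch, assuming V2 is a simple select over T2 as described:

create or replace view v2
as
select * from t2
with read only;

CREATE OR REPLACE preserves the grants on the view, so nothing else needs to be re-done, and the 'O' constraint row will then appear in DBA_CONSTRAINTS.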
New column, what should be the constraint
A reader, October 01, 2003 - 8:42 am UTC
Tom, we need to add a new column to a table; the column is like a flag, and the possible values are 'Y' or 'N'. The table is LOANS; the new column will be used to lock a loan once it has been approved.
Should the column be declared as not null or null? (We have existing data in this table, and an on-update database trigger as well.)
October 01, 2003 - 9:38 am UTC
tell me, is NULL a valid value for this column?
if not, it should be NOT NULL
if so, it should be NULL
Nice
Ram, March 04, 2004 - 12:45 am UTC
Hi Tom,
Suppose I have a table "Emp" with an ename column; can it be
made to check that it is entered with only letters?
I mean
ename varchar2(30) check (ename --only alphabets to take);
Do you have any programming trick to do this?
Please do reply.
March 04, 2004 - 8:20 am UTC
check ( replace(translate( upper(ename),'ABC....','A'),'A','') is null )
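Spelled out in full, that might look like this (a sketch: TRANSLATE keeps the 'A's and drops the letters B through Z, REPLACE then strips the 'A's -- so anything left over is a non-letter and the check fails):

alter table emp add constraint ename_letters_only check
( replace( translate( upper(ename),
           'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'A' ), 'A', '' ) is null );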
Please see this
Ram, March 24, 2004 - 6:45 am UTC
Dear Tom,
Please see the following code.
SQL> create table t1(a number not null);
Table created.
SQL> drop table t1;
Table dropped.
SQL> create table t1(a number,constraint nn_a not null(a))
SQL> /
create table t1(a number,constraint nn_a not null(a))
*
ERROR at line 1:
ORA-00904: : invalid identifier
My question is: can 'not null' constraints not be declared out of line?
Is this not possible? Please do reply.
Bye!
March 24, 2004 - 9:05 am UTC
ops$tkyte@ORA9IR2> create table t ( x int not null, y int, check (y is not null) );
Table created.
ops$tkyte@ORA9IR2> select constraint_name, constraint_type, search_condition from user_constraints where table_name = 'T';
CONSTRAINT_NAME C SEARCH_CONDITION
------------------------------ - --------------------------------------------------------------------------------
SYS_C005140 C "X" IS NOT NULL
SYS_C005141 C y is not null
a not null is just an implicit "check" constraint.
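(You can also simply put a name on the inline version, as a followup above did:

create table t1 ( a number constraint nn_a not null );

so an out-of-line declaration is not needed just to control the constraint name.)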
How to go with constraints
A reader, June 18, 2004 - 9:15 am UTC
Hi Tom,
Database version - 8.1.7.0.0 - Production
my tables are not partitioned now.
I want to recreate them partitioned now.
I want to use
STEP 1: create table partition_table as select * from non_partition_table.
STEP 2: drop the non_partition
STEP 3: rename the partition_table
in this case my constraints are not carried over to partition_table:
the primary key and check constraints are not copied --
only the not null constraints are.
If this is my case,
1. should I avoid this method?
2. do I need to create all the constraints again?
3. what will happen to my triggers and packages that depend on the tables?
4. Is there any other method to do this?
Thank you
June 18, 2004 - 11:01 am UTC
1) no -- you would just add the constraints et al. yourself. I would hope you have the scripts?
2) yes.
3) you will create the triggers again (they go "poof" with the drop table, consider them "constraints"). the packages will just fix themselves.
4) to partition a table in 8i will require an offline rebuild like this, yes. You could export the table. drop it. create JUST THE TABLE. import with ignore=Y. it'll import and add constraints, etc. <b>before just doing this approach, you had better test it -- test the entire process, against the entire dataset. No kidding there.</b>
ops$tkyte@ORA9IR2> create table t
2 ( dt date primary key,
3 x int check ( x > 5 ),
4 y int check ( y < 0 )
5 )
6 /
Table created.
ops$tkyte@ORA9IR2> create index t_idx on t(x,y);
Index created.
ops$tkyte@ORA9IR2> create trigger t_trigger before insert on t
2 for each row
3 begin
4 null;
5 end;
6 /
Trigger created.
ops$tkyte@ORA9IR2> grant select on t to connect;
Grant succeeded.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t
2 select to_date( '01-mar-2003', 'dd-mon-yyyy' )+rownum, 5+rownum, -rownum
3 from all_users
4 /
50 rows created.
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> !exp userid=/ tables=T
Export: Release 9.2.0.5.0 - Production on Fri Jun 18 10:40:06 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table T 50 rows exported
Export terminated successfully without warnings.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select 'cons', count(*) from user_constraints union all
2 select 'trig', count(*) from user_triggers union all
3 select 'inde', count(*) from user_indexes union all
4 select 'tab ', count(*) from user_tables union all
5 select 'gran', count(*) from dba_tab_privs where table_name = 'T' and owner = USER;
'CON COUNT(*)
---- ----------
cons 3
trig 1
inde 2
tab 1
gran 1
<b>so we have 3 constraints, 1 trigger, 2 indexes, a table and a grant...</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select 'cons', count(*) from user_constraints union all
2 select 'trig', count(*) from user_triggers union all
3 select 'inde', count(*) from user_indexes union all
4 select 'tab ', count(*) from user_tables union all
5 select 'gran', count(*) from dba_tab_privs where table_name = 'T' and owner = USER;
'CON COUNT(*)
---- ----------
cons 0
trig 0
inde 0
tab 0
gran 0
<b>and now we don't</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y int
6 )
7 PARTITION BY RANGE (dt)
8 subpartition by hash(x)
9 (
10 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
11 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace tools,
12 PARTITION junk VALUES LESS THAN (MAXVALUE)
13 )
14 ;
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select 'cons', count(*) from user_constraints union all
2 select 'trig', count(*) from user_triggers union all
3 select 'inde', count(*) from user_indexes union all
4 select 'tab ', count(*) from user_tables union all
5 select 'gran', count(*) from dba_tab_privs where table_name = 'T' and owner = USER;
'CON COUNT(*)
---- ----------
cons 0
trig 0
inde 0
tab 1
gran 0
<b>table is back...</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> !imp userid=/ full=y ignore=y
Import: Release 9.2.0.5.0 - Production on Fri Jun 18 10:40:08 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table "T" 50 rows imported
Import terminated successfully without warnings.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select 'cons', count(*) from user_constraints union all
2 select 'trig', count(*) from user_triggers union all
3 select 'inde', count(*) from user_indexes union all
4 select 'tab ', count(*) from user_tables union all
5 select 'gran', count(*) from dba_tab_privs where table_name = 'T' and owner = USER;
'CON COUNT(*)
---- ----------
cons 3
trig 1
inde 2
tab 1
gran 1
<b>and now everything is back</b>
Thank you Tom,
A reader, June 21, 2004 - 1:18 am UTC
Hi Tom,
Thank you for your simple explanation.
This is very much useful.
Oracle Developer
Dawar, June 24, 2004 - 1:39 pm UTC
Hello Tom,
I have an existing table along with data.
Here is one of the columns where I would like to put a check constraint or other relevant logic.
Column name: Date_of_Birth, data type: DATE.
Data is already in the table,
and we have some bad data in it:
some of the Date_of_Birth values are future dates, e.g. 2049, 2007, etc.
These values are entered via our company portal web pages,
where users entered wrong dates by mistake.
Now I need to fix that.
I need to put logic on Date_of_Birth -- something like: the date must be between 1900 and SYSDATE.
If a user enters any date beyond this range, it should not be allowed in.
I wonder how I can put this logic on an existing table which already has a lot of data.
Any feedback will be appreciated.
cheers,
Dawar
June 24, 2004 - 3:16 pm UTC
you'd use a trigger for that sort of enforcement (a check constraint must be deterministic, so it cannot reference SYSDATE):
create trigger t after insert or update on t for each row
begin
if ( :new.date_of_birth > sysdate )
then
raise_application_error( -20001, 'Time Machine use illegal...' );
end if;
end;
it'll not affect any existing data of course -- only newly inserted/modified data.
Oracle Developer
Dawar, June 24, 2004 - 1:41 pm UTC
I forgot to tell you my Database version info as follows:
Oracle9i Enterprise Edition Release 9.0.1.5.0 - Production
PL/SQL Release 9.0.1.5.0 - Production
CORE 9.0.1.2.0 Production
TNS for Solaris: Version 9.0.1.5.0 - Production
NLSRTL Version 9.0.1.5.0 - Production
Oracle Developer
Dawar, June 24, 2004 - 4:18 pm UTC
Thanks.
It works for me.
cheers,
Dawar
Oracle Developer
Dawar, June 25, 2004 - 12:16 pm UTC
Hello Tom,
Sorry I gave my feedback in rush.
I wrote a trigger on the table as below, but it is not taking effect.
Users are able to insert dates greater than the current date or lower than '01-Jan-1900'.
create trigger t_date_trigger after insert or update on table_name for each row
begin
if ( :new.date_of_birth > sysdate and :new.date_of_birth < '01-JAN-1900')
then
raise_application_error( -20001, 'Please enter dates between 1900 and current date' );
end if;
end;
Any feed back will be appreciated.
Regards,
Dawar
June 25, 2004 - 4:19 pm UTC
"or" not "and"
a date will never be > sysdate AND < 1900 ;)
that should of course be to_date('01-jan-1900', 'dd-mon-yyyy' );
Oracle Developer
Dawar, June 25, 2004 - 6:11 pm UTC
I tried "or" instead of "AND".
But when the user puts in a date greater than today's date, it is caught OK.
But if the user enters a date lower than 1900, a different date gets inserted:
if the user entered 01/14/1800, it is automatically taken as 01/14/2000.
For my testing purpose
I created a table in the SQL*Plus environment as:
SQL> create table dt0 ( dt0 date);
Table created.
SQL> insert into dt0 values ('14-JAN-1878');
1 row created.
SQL> commit;
Commit complete.
SQL> select * from dt0;
DT0
---------
14-JAN-78
But if I run select * from dt0;
from Enterprise Manager, I get values like:
14-Jan-1878 12:00:00 AM
Please note: users are entering dates through an Oracle Portal web page.
Regards,
Dawar
June 26, 2004 - 1:22 pm UTC
it is not doing what you say.
in sqlplus, simply "select to_char(dt0, 'yyyy' ) from dt0"
you'll see the same thing that EM shows (all dates are always stored in a 7-byte binary field holding century, year, month, day, hour, minute and second -- you see 78 because your default "printing format for dates" is set to dd-mon-yy or dd-mon-rr).
if ( :new.date_of_birth > sysdate OR :new.date_of_birth < to_date('01-JAN-1900','DD-MON-YYYY') )
is in fact the edit you want.
Oracle Developer
Dawar, June 30, 2004 - 12:30 pm UTC
Hello Tom,
Instead of using the above date trigger,
I wrote a function to prevent users from entering any date of birth greater than sysdate or lower than 1900.
It will display the message ' DOB out of acceptable date range' on the web portal.
This function is part of an existing package we use for different purposes.
It is a huge package, and it was working properly.
But now, after adding the function below, I am getting the errors shown below it.
Please note: I am using this package for a web portal page.
Function validate_date_range(
p_names PORTAL.wwv_utl_api_types.vc_arr
, p_values IN OUT PORTAL.wwv_utl_api_types.vc_arr
, p_flags PORTAL.wwv_utl_api_types.vc_arr
, p_name VARCHAR2
, p_message OUT VARCHAR2
) RETURN NUMBER IS
v_string DATE;
v_idx1 NUMBER;
BEGIN
v_idx1 := dhs_referral_service.get_index( p_names, p_name );
IF p_flags(v_idx1) IN ( show_hidden, show_display_only, show_table_display_only,
show_changed_hidden ) THEN
RETURN 1;
END IF;
BEGIN
if to_char(dhs_referrals.ref_patient_dob,'MM/DD/YYYY') > to_char (sysdate,'MM/DD/YYYY')
then
p_message := ' DOB out of acceptable date range';
elsif
to_char(dhs_referrals.ref_patient_dob,'MM/DD/YYYY') < to_char('01-01-1900','MM/DD/YYYY')
then
p_message := ' DOB out of acceptable date range';
end if;
RETURN 0;
END validate_date_range;
Line # = 2426 Column # = 10 Error Text = PLS-00103: Encountered the symbol "VALIDATE_STRING_FORMAT" when expecting one of the following: := . ( @ % ;
Line # = 2434 Column # = 4 Error Text = PLS-00103: Encountered the symbol "V_STRING" when expecting one of the following: (select <a SQL Statement>
Line # = 3571 Column # = 1 Error Text = PLS-00103: Encountered the symbol "END" when expecting one of the following: begin function package pragma procedure form
cheers,
Dawar
June 30, 2004 - 1:16 pm UTC
oh great, so when the next great application comes along -- it will just happily enter bad data.
if you have a rule to enforce, you had better enforce it in the database.
just use the trigger, it is the appropriate approach here.
Oracle Developer
Dawar, June 30, 2004 - 1:39 pm UTC
Tom,
I used the trigger, but it does not work properly.
Currently, we use functions for other things, such as the zip code:
if anyone enters a wrong zip code,
an error/alert message comes up on the portal page.
For my above function:
I noticed that I have 2 BEGINs and only 1 END.
I added one more END, as below.
The old error is gone, but a new error comes up.
RETURN 0;
END;
END validate_date_range;
ERROR:
Line # = 2382 Column # = 29 Error Text = PLS-00357: Table, View or Sequence reference
'dhs_referrals.ref_patient_dob' not allowed in this context.
Line # = 2382 Column # = 4 Error Text = PL/SQL: Statement ignored
cheers,
Dawar
June 30, 2004 - 1:56 pm UTC
the trigger most certainly does work, if coded properly. think about it...
I defy you to get an invalid date into this table according to your rules:
ops$tkyte@ORA9IR2> create table t ( date_of_birth date );
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger t_trigger
2 before insert or update on t for each row
3 begin
4 if ( :new.date_of_birth > sysdate
5 OR
6 :new.date_of_birth < '01-JAN-1900', 'dd-mon-yyyy' )
7 then
8 raise_application_error( -20001, 'Invalid date' );
9 end if;
10 end;
11 /
Trigger created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( to_date( '01-jan-2005' ) );
insert into t values ( to_date( '01-jan-2005' ) )
*
ERROR at line 1:
ORA-20001: Invalid date
ORA-06512: at "OPS$TKYTE.T_TRIGGER", line 6
ORA-04088: error during execution of trigger 'OPS$TKYTE.T_TRIGGER'
<b>in the future.</b>
ops$tkyte@ORA9IR2> insert into t values ( sysdate+1/24/60/60 );
insert into t values ( sysdate+1/24/60/60 )
*
ERROR at line 1:
ORA-20001: Invalid date
ORA-06512: at "OPS$TKYTE.T_TRIGGER", line 6
ORA-04088: error during execution of trigger 'OPS$TKYTE.T_TRIGGER'
<b>a second into the future..</b>
ops$tkyte@ORA9IR2> insert into t values ( to_date( '01-jan-1900' ) );
1 row created.
<b>just fine</b>
ops$tkyte@ORA9IR2> insert into t values ( to_date( '01-jan-1900' )-1/24/60/60 );
insert into t values ( to_date( '01-jan-1900' )-1/24/60/60 )
*
ERROR at line 1:
ORA-20001: Invalid date
ORA-06512: at "OPS$TKYTE.T_TRIGGER", line 6
ORA-04088: error during execution of trigger 'OPS$TKYTE.T_TRIGGER'
<b>one second too old.</b>
if your trigger did not "work", you made some basic, fundamental 'mistake'....
Very simple!!
A reader, June 30, 2004 - 2:03 pm UTC
Why make things complicated when it can be achieved easily and in a simple fashion? As for Dawar: your package is huge, friend. See the line with the error -- it is 2382. I can imagine what problems you must be facing. Try to shorten it a bit, or use procedures and functions; it will help you spread out the logic.
Just trying to help with a little friendly advice.
Cheers
Oracle Developer
Dawar, June 30, 2004 - 2:31 pm UTC
Our requirement is to show the alert or error message on the current web portal page.
Issue #1:
With the trigger, it takes us out of the web portal page.
Business users will be confused to see any error stack.
Issue #2:
I used your trigger; for sysdate it works, but when I entered anything lower than 1900 it was automatically converted into a greater date.
For example:
I entered 1887 and it became 1987.
Then I used an after-logon-on-database trigger and set an alter session.
But my portal system hung up, giving some Java errors.
I needed to remove that trigger.
Anyway, is it possible to see the error/alert message on the current web portal page when using a trigger?
Thanks for your earlier feedback.
cheers,
Dawar
June 30, 2004 - 2:42 pm UTC
sorry but if you don't do it in the database -- you lose. period. It is that simple. data integrity constraints belong THERE.
you can validate all you want at the front end, but if you don't do it at the back end, the only thing I can assure you is you will have GARBAGE in that column some day.
We do not change 1887 to 1987; you did something somewhere in your code -- the database only takes what it is given, nothing more, nothing less. it does not make stuff up, it does not change stuff, it is very predictable in that sense.
I don't see the purpose of an alter session -- especially in an after logon trigger -- in a product that uses a connection pool.
If a portlet/application fails, you will see an error -- period.
Oracle Developer
Dawar, June 30, 2004 - 3:06 pm UTC
Tom,
I still have following trigger in my database.
CREATE OR REPLACE TRIGGER "DHS"."DHS_REFERRALS_DATE_TRIGGER"
BEFORE
INSERT
OR UPDATE ON "DHS_REFERRALS" FOR EACH ROW begin
if ( :new.ref_patient_dob > sysdate or :new.ref_patient_dob < to_date('01/JAN/1900','DD/MON/YYYY'))
then
raise_application_error( -20001, 'DOB out of acceptable date range' );
end if;
end;
I just entered the date 02/14/1887 from the web portal page, but it was entered into the db table as 02/14/1987.
When I entered a date greater than sysdate,
the current web page goes to a new error page:
ORA-20001: DOB out of acceptable date range
ORA-06512: at "DHS.DHS_REFERRALS_DATE_TRIGGER", line 4
ORA-04088: error during execution of trigger 'DHS.DHS_REFERRALS_DATE_TRIGGER'
ORA-06512: at "DHS.DHS_PROCESS_REFERRALS", line 1899
ORA-06512: at "DHS.DHS_PROCESS_REFERRALS", line 3150
ORA-06512: at line 10
.......
Is there anything wrong with the trigger?
Your feedback will help me. Thanks
cheers,
Dawar
June 30, 2004 - 7:47 pm UTC
no, but there is probably something wrong with your portal application -- it is the thing turning 1887 into 1987 -- add debugging logic to those layers to see what is doing that.
the *TRIGGER* only lets in valid data.
the *TRIGGER* is not changing a byte of data.
Oracle Developer
Dawar, July 08, 2004 - 5:08 pm UTC
Tom,
You are correct, something is wrong with our portal.
I tested the above trigger at the database/table level, and it works fine.
I am using a TAR to resolve the portal issue.
Regards,
Dawar
Naming constraints in schema
sonali, August 05, 2004 - 11:14 am UTC
We have a customer who wants us to use user-defined names for all constraints in the schema. We already name all FKs, indexes and unique constraints, but we don't have specific names for primary keys. Is there any way we can give names to all the primary keys without dropping them -- some routine that will change the name of a PK from SYS_xyz to pk_column_name (where column_name is the primary key column's name)?
e.g. SYS_C0020848 on table ENTITY, column ENT_ID, should become pk_ent_id.
Fortunately we have no duplicate column names across any of the tables, so the PK column name would be unique in the schema.
Thanks
August 05, 2004 - 1:09 pm UTC
ops$tkyte@ORA9IR2> alter table t2 rename constraint SYS_C005924 to xxxx;
Table altered.
you can rename them.
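a routine along the lines you describe might look like this (a sketch -- it assumes single-column primary keys, as you stated, and that the generated names fit the identifier length limit):

begin
    for c in ( select uc.table_name, uc.constraint_name, ucc.column_name
                 from user_constraints uc, user_cons_columns ucc
                where uc.constraint_name = ucc.constraint_name
                  and uc.constraint_type = 'P'
                  and uc.constraint_name like 'SYS\_%' escape '\' )
    loop
        execute immediate 'alter table "' || c.table_name ||
                          '" rename constraint "' || c.constraint_name ||
                          '" to "PK_' || c.column_name || '"';
    end loop;
end;
/

note that the index supporting the key keeps its old name -- ALTER INDEX ... RENAME TO handles that separately, if you care.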
sorry but I found it
sonali, August 05, 2004 - 11:44 am UTC
Preference!
A reader, September 06, 2004 - 5:46 pm UTC
Hi Tom,
I was talking with my developers about validating data and constraints. Say, for example, the length of a given field cannot be greater or less than 3 -- it always has to be exactly 3. At present we validate this in a package.
But I was wondering about the other possibilities:
1. Check Constraints
2. Triggers
3. Packages, procedures etc.
4. In the front-end application like in our case Java.
I know triggers have a bit of overhead, so out with them. The front-end application would also serve this purpose, but is it worth it? I think letting the database handle this task is the recommended way.
But I would like to know what do you have to say on this topic.
Thanks as always
September 06, 2004 - 6:22 pm UTC
if you do not do it at the level of the table, some day it is not going to be done at all.
1) absolutely, 100%, no question about it.
2) never. if you can do it declaratively, then that is the ONLY way to do it.
3) you can replicate it there -- however, you are responsible for maintaining it in two places. the database is the source of truth; you would only put it in plsql as well if it made someone's life better.
4) see #3, same thing. it MUST be in the DB; it can be copied into the client application to make the client's experience a tad better (get a message as they tab out of the field instead of after trying to post the changes to the database). However, it must be done in the DB, and you are responsible for making sure that if you change the rule, you change it in both places.
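for the example given -- a field that must be exactly 3 characters long -- the declarative version is a one liner (a sketch, with made-up table/column names):

alter table t add constraint t_code_len3 check ( length(code) = 3 );

anything shorter or longer is rejected (a NULL, as always with a check constraint, is allowed unless the column is also NOT NULL). The client can repeat the test for a friendlier UI, but this is the enforcement.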
RELY
David, November 10, 2004 - 1:33 pm UTC
I have a hard time understanding the meaning of RELY and NORELY constraints. Could you explain these and their usefulness? Thanks.
November 10, 2004 - 8:10 pm UTC
it is for query rewrite:
should the server RELY on a constraint when rewriting queries using MVs and such (should it *trust* you),
or not.
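in DDL terms (a sketch, with hypothetical names): you mark a constraint RELY to assert "the data is clean, trust me" without having the database validate it, and then let rewrite trust it:

alter table sales add constraint sales_dim_fk
  foreign key (dim_id) references dim
  rely disable novalidate;

alter session set query_rewrite_integrity = trusted;

with NORELY (the default), an unvalidated constraint is simply ignored for query rewrite.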
Unique constraint
A reader, November 25, 2004 - 5:09 pm UTC
Hi Tom
I have to implement a unique constraint for a table with columns as below.
create table test
(
id number,
val1 varchar2(10),
val2 varchar2(20)
)
The rule is that the combination of columns val1 and val2 must be unique when the value of column id = 0. Since val1 and val2 are nullable, I have used a function-based index to implement this. I am unable to use a unique constraint because it fails if both columns val1 and val2 are null.
create unique index test_idx on test
( case when id = 0 then
val1||val2
end );
Is this the right way to do it?
Regards
November 25, 2004 - 8:13 pm UTC
i would do this
create unique index T on test (case when id = 0 then val1 end, case when id = 0 then val2 end );
otherwise
(val1 = '1', val2 = '01')
would bump into
(val1 = '10', val2 = '1'),
since both concatenate to '101'.
Sure, you could add a separator such as '/' -- but just keeping them as two separate columns is easier.
error in trigger code
Tom, November 26, 2004 - 4:39 am UTC
Tom,
In your trigger code from a few posts ago....
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger t_trigger
2 before insert or update on t for each row
3 begin
4 if ( :new.date_of_birth > sysdate
5 OR
6 :new.date_of_birth < '01-JAN-1900', 'dd-mon-yyyy' )
7 then
8 raise_application_error( -20001, 'Invalid date' );
9 end if;
10 end;
11 /
should line 6 not be
:new.date_of_birth < to_date('01-JAN-1900','DD-MON-YYYY')
as the existing code doesn't compile on my system [8.1.7.4].
I agree with you completely that the check logic should be a trigger though.
Hope the turkey was good.
November 26, 2004 - 9:34 am UTC
Ctrl-F for 1900 and see the history -- I just cut and pasted the original "user supplied" piece of logic. Yes, it needs a to_date().
Please help
Raju, January 07, 2005 - 11:51 pm UTC
Hi Tom,
I have a table like this.
SQL> create table tbl1(x number primary key,y references
tbl1(x),z number check (z > 0) )
Table created.
My requirement is ** I would like to drop all the constraints in a single sql statement **.
Is that possible?
Is there any equivalent statement like
SQL> alter table tbl1 drop all constraints;
Please do reply.
Bye!
January 08, 2005 - 4:46 pm UTC
there is no such facility -- constraints must be managed independently of each other, with separate sql statements.
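you can of course script it -- a sketch that drops the foreign keys first, so the primary key they reference can then be dropped:

begin
    for c in ( select constraint_name
                 from user_constraints
                where table_name = 'TBL1'
                order by decode( constraint_type, 'R', 1, 2 ) )
    loop
        execute immediate
           'alter table tbl1 drop constraint ' || c.constraint_name;
    end loop;
end;
/

but under the covers that is still one sql statement per constraint.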
constraint name changes after export & import
Jianhui, April 04, 2005 - 12:19 pm UTC
(Oracle Enterprise version 9205 on Solaris 8)
Tom,
I have renamed (alter table ... rename constraint ...) the constraint names for schema U1 in database A explicitly, in order to follow the corporate naming standard. But in the new database B, after I did a schema export and import into the new, empty schema U2, I see some constraint name changes on PK, FK and check constraints, with system-generated names like 'SYS%'. How could this happen? How do I preserve the constraint names across different databases? (My new database was empty before the import -- no naming collision.) I double-checked the source schema: no constraint names like 'SYS%' were found, the number of constraints matches between the two databases, and the constraint columns match as well.
By the way, is it possible to explicitly define a constraint name for NOT NULL?
April 04, 2005 - 12:28 pm UTC
give me a "for example" -- you should be able to set up a small single table test case.
copy & paste of constraint name changes
jianhui zhang, April 04, 2005 - 4:56 pm UTC
Hi Tom,
This is a near Copy&Paste, since i ONLY modified username, password, database tnsname for privacy purpose. I also copied and pasted the export and import commands in my shell script.
One thing I forgot to mention is that the default tablespaces for olduser and newuser are different on purpose, in order to move the objects to a different tablespace during the import. The other thing is that most of these constraint names were system-generated, and I renamed them to follow the client's naming standard.
I did a couple of smaller, table-level tests; no constraint name changes were found. Any thoughts?
<<<-- copy of shell script exp & imp code
exp userid=olduser/password@old_db file=${PIPE_FILE} \
log=${EXP_LOG_FILE} buffer=2000000 direct=y owner=olduser
imp userid=newuser/password@new_db \
file=${PIPE_FILE} log=${IMP_LOG_FILE} \
buffer=2000000 fromuser=olduser touser=newuser grants=n
>>>
<<< -- copy of sql*plus session
@>conn olduser/password@old_db
Connected.
logon time: 050404 16:33:11
olduser@old_db>select constraint_type, count(*)
olduser@old_db>from dba_constraints
olduser@old_db>where owner='OLDUSER'
olduser@old_db>and constraint_name like 'SYS%'
olduser@old_db>group by constraint_type
olduser@old_db>/
no rows selected
olduser@old_db>conn newuser/password@new_db
Connected.
logon time: 050404 16:33:34
newuser@new_db>select constraint_type, count(*)
newuser@new_db>from dba_constraints
newuser@new_db>where owner='NEWUSER'
newuser@new_db>and constraint_name like 'SYS%'
newuser@new_db>group by constraint_type
newuser@new_db>/
C COUNT(*)
- ----------
C 415
P 36
R 5
3 rows selected.
newuser@new_db>
-- copy ends here >>>
April 04, 2005 - 9:00 pm UTC
I sort of need a single-user test case with a create table -- we export it, then drop it, import it, and look at the constraints.
constraints
P.Karthick, April 05, 2005 - 1:37 am UTC
Hi Tom,
We are a product development company, using Oracle 9i for the backend. We developed our entire system without any constraints -- no constraints exist in the database; all the constraints and relationships live only in the developers' minds. This was mainly done because our software handles very large volumes of data, and the technical people who did the architecture believed that constraints in the database would cause performance issues. But now there is a conflict over that approach.
So can you comment on this?
April 05, 2005 - 7:30 am UTC
comment on it.
I wrote a book about it, Effective Oracle by Design. Here is an excerpt dealing directly with this:
(1) It's a Database, Not a Data Dump
This section is for everyone who feels that constraints should be verified in the client or middle tier, foreign keys just make the database slow, and primary keys are a nuisance.
Ask Tom
We have a City table, consisting of different cities where our clients' offices are located. We have a VB form for inserting employee-related details. We have a foreign key on the City column in the Employee table, whose parent key is the City table. One of our consultants recommended that we discard this check for the validity of the city entered and suggested that we maintain all city validation checking through front-end coding. The reasons cited were that referential integrity checking at the back-end would be too time-consuming and would slow down the data-entry processing jobs. I wasn't truly convinced by his reasoning. Is his argument valid?
My recommendation was to get rid of that consultant as fast as humanly possible.
End Ask Tom
(2) Use Primary and Foreign Keys
There are quite a few reasons why getting rid of foreign key and database validation checks is the worst approach in the world. Here are three questions to ask if someone suggests getting rid of them:
o Is this the only application that will be accessing this data forever?
That is a rhetorical question. History proves time and time again that the answer is a resounding no. This data will be reused in many applications by many development teams (otherwise, it would be pretty useless data). If you hide all of the rules, especially fundamental ones like primary or foreign keys, deep inside the application, what happens in two years when someone else starts using this data? How will that application be prevented from corrupting your database? What happens to your existing applications when they start to query and join data and there is no matching primary key? Suddenly, your application breaks, their application is wrong, and the data is corrupt. The Internet revolution/evolution we all just went through should prove this point in spades. What were the last systems to get moved over to Internet-based ones in your company? Most likely, it was that set of applications that had 100% of the data logic embedded in the client.
o How could doing this processing be faster on the client?
The front-end must make a round-trip to the database in order to do this. Also, if the front-end tries to cache this information, it will only succeed in logically corrupting the data! There may be something to be said for caching on the client and on the server, but not just on the client. The end user's experience may be nicer if, on tabbing out of a field, they were notified, "Sorry, missing or invalid foreign key." But that does not mean that you can permanently move this check from the database to the client. Sure, you can replicate it, but move it? Never.
o Are the consultants who are advising getting rid of the keys paid by line of code?
This could be true -- the more they write, the more they make, the more they have to maintain, and the more they have to debug. As the earlier example of auditing showed (in the "Inability to Quickly Deliver Software" section), sometimes you can do in one line of code that uses a database feature what might take dozens or hundreds of lines of code otherwise. If the database does something, the odds are that it does it better, faster, and cheaper than you could do it yourself.
(2) Test the Overhead of Referential Integrity
To test the performance cost of using referential integrity, we can create a small CITIES table using the ALL_USERS data dictionary table for sample data. Include the requisite primary key constraint on this table. Then create two child tables, which are reliant on the data in the CITIES table. Table WITH_RI includes the declarative foreign key; Oracle will not permit a row in this table unless a matching row exists in the parent CITIES table. The other table, WITHOUT_RI, has no such constraint; it is up to the application to enforce data integrity.
ops$tkyte@ORA920> create table cities
2 as
3 select username city
4 from all_users
5 where rownum<=37;
Table created.
ops$tkyte@ORA920> alter table cities
2 add constraint
3 cities_pk primary key(city);
Table altered.
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table with_ri
2 ( x char(80),
3 city references cities
4 );
Table created.
ops$tkyte@ORA920> create table without_ri
2 ( x char(80),
3 city varchar2(30)
4 );
Table created.
Now, we are ready to test. We'll use the built-in SQL_TRACE capability of the database and TKPROF to analyze the results.
NOTE: TKPROF is part of a SQL Profiling tool built into Oracle. It is an invaluable performance tool. Please see Chapter 2 for more details on using SQL_TRACE and TKPROF if you are not familiar with them.
The benchmark will test the efficiency of single-row inserts into both tables using a simple loop. We'll insert 37,000 rows into each table.
ops$tkyte@ORA920> alter session set sql_trace=true;
Session altered.
ops$tkyte@ORA920> declare
2 type array is table of varchar2(30) index by binary_integer;
3 l_data array;
4 begin
5 select * BULK COLLECT into l_data from cities;
6 for i in 1 .. 1000
7 loop
8 for j in 1 .. l_data.count
9 loop
10 insert into with_ri
11 values ('x', l_data(j) );
12 insert into without_ri
13 values ('x', l_data(j) );
14 end loop;
15 end loop;
16 end;
17 /
PL/SQL procedure successfully completed.
Now, let's review the TKPROF report resulting from this:
INSERT into with_ri values ('x', :b1 )
call count cpu elapsed disk query current rows
------- ------ ---- ------ ---- ----- ------- -----
Parse 1 0.00 0.02 0 2 0 0
Execute 37000 9.49 13.51 0 566 78873 37000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ ---- ------ ---- ----- ------- -----
total 37001 9.50 13.53 0 568 78873 37000
*****************************************************
INSERT into without_ri values ('x', :b1 )
call count cpu elapsed disk query current rows
------- ------ ---- ------ ---- ----- ------- -----
Parse 1 0.00 0.03 0 0 0 0
Execute 37000 8.07 12.25 0 567 41882 37000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ ---- ------ ---- ----- ------- -----
total 37001 8.07 12.29 0 567 41882 37000
As we discovered, for 37,000 single-row inserts, we used 0.000256 CPU seconds per row (9.50/37000) with referential integrity. Without referential integrity, we used 0.000218 CPU seconds per row. Will your human end users notice that you have imposed this whopping 0.00004 CPU second penalty on them?
All told, the declarative referential integrity in the database added maybe 10% to 15% overhead. For that, you get the peace of mind that lets you sleep at night knowing the integrity of your data is protected and you used the fastest way to develop the application. You know that no matter what new application is added to the system, it will encounter this rule and will not be able to violate it. This same principle works on a much grander scale, beyond simple primary and foreign keys.
(2)Middle Tier Checking Is Not a Panacea
Here is an idea I hear espoused often these days: use the application's middle tier for doing work such as performing data verification and checking security. Using the middle tier might sound great. The benefits appear to be that it makes the application faster, more flexible, database-independent, and secure. But is it? Let's take a closer look at each of these claims.
Ask Tom
We have some consultants building an application for us. They will have an Oracle database that contains only tables, views, and indexes. Most of the work, such as constraint checking, will be in the middle tier. According to them, this makes the application faster. Also, they say it makes the application more flexible, able to use different databases because most of the code is in the application. Last, the security checking is done at the application level (middle tier), and they have their own auditing feature that creates their own tables in the Oracle database.
In short, I would say these people will end up with an application that performs slower than it could, takes much longer to develop than it should, and has much more code to maintain than necessary. It will not be database-independent or flexible. In fact, it will be a way to lock the customer into using this consulting firm forever. To top it off, it will have security that is far too easy to get around.
End Ask Tom
(3) Is It Faster?
If all of the constraint checking is done in the middle tier, typically in Java, this will be faster, they say. Well, that means in order to load data, they must write a loader (one that can do constraint checking). Will their loader be faster than the native direct path load? Their constraints will be checked in Java. Will that be faster than native database code in C? (I would take them on in that race, for sure.) If they have a parent table with a million rows and a child table with ten million rows, and they need to enforce referential integrity, what will be faster: querying the database over the network to check and lock the parent row, or letting the database perform this check upon insertion? That's a rhetorical question; there will be no contest. Their application will be many times slower performing checks like that.
I would say their claim to faster would need to be backed up with real statistics, from real case studies. I've never seen one yet. However, I have seen a lot of cases where the converse has been shown: doing it yourself is much slower than doing it in the database. Again, consider the auditing example described earlier (in the "Inability to Quickly Deliver Software" section).
(3) Is It More Flexible?
I don't know about you, but if I needed to alter a constraint -- say, X in table T must be changed from between 25 and 100 to between 50 and 100 -- I would find doing it in the database a bit more flexible. I can do this in two commands: one to add a constraint and one to drop the old constraint. The database will verify the existing data for me, and even report exceptions so I can fix them. If you choose to do this in their middle tier, you must edit the procedural code in many cases and read all of the data out of the database to verify it, bringing it back over the network.
Even if you make this rule table-driven or parameter-driven, it is no easier than updating a constraint in the database itself. You still need to validate the data. What if the data is known to be valid? Will the fact that the constraint isn't done by the database make it more efficient to implement, since you don't need to check each row of data? Well, the database can do that as well. You can enable the constraint in the database without validating the underlying data, meaning the database doesn't need to check the existing data unless you want it to.
Doing constraint checking outside the database does not add flexibility. There is a ton of code to be written, but I don't see that as being flexible. This sounds more like the application designers don't know how to use the database.
In addition, this actually makes the entire database less flexible, less able to support the needs of this company over time. Why? Well, the only way to the database is through this application. Does this application accept ad-hoc SQL queries, or does it do only what the developers programmed it to do? Of course it will not accept ad-hoc SQL. It will not be extensible by the customers; it will not be flexible as their needs change over time.
Take this argument back in time five or six years, and change middle tier to client application. Do you have a custom-developed piece of code written in 1996 that is client/server-based and has all of the business rules cleverly hidden inside it? If you do, you probably are still looking for ways to get out of that situation. Don't do it again with a new technology (application servers). Rest assured that in two years, some new, cool development paradigm will arise. If your data is locked up in some application, it won't be very useful or flexible at that point.
Another consideration is that if all of the security, relationships, caching, and so on are in the application, using third-party ad-hoc query tools cannot be permitted. The security (data filtering and access auditing) would be subverted if you access the data directly. The data integrity is in question (if the application caches). The data relationships are not known, so the query tool has no idea what goes with what in the database.
I've never seen a set of data that the business users are fully satisfied accessing only through the application. The set of questions they want to ask of the data is virtually infinite, and the developed application could not have anticipated all of those questions.
(3) Is It Database-Independent?
Let's look at the claim that using the middle tier makes the application database-independent. Is it a transactional system? If so, there are going to be issues with regard to concurrency controls and data consistency. If you didn't need subtly different code for different databases, why wouldn't PeopleSoft and SAP just use Open Database Connectivity (ODBC) or Java Database Connectivity (JDBC) to connect to everyone and be done with it?
Is it an application for a company, custom-coded to that organization? This company has a database, which cost real money. The company managers should not be paying someone else to rewrite the functionality of the database in a middle tier. That would be like getting taxed two times on the same paycheck.
(3) Is It More Secure?
As for the security, to say that the data is secured because the application has good security is really missing the point. The further from the data the security lies, the less secure the data is. Consider auditing, for example. If a DBA accesses the data directly, will that be audited? No, absolutely not. If you used the database to audit, you could do things like:
o Detect the DBA doing malicious things.
o If the DBA disabled auditing, you would be able to detect that she did that (and get rid of her).
o If the DBA mucked with the audit trail (not possible if you use operating system auditing and don't give the DBA operating system privileges), you can detect that (and get rid of her).
In short, you can detect that your data has been compromised and take corrective action. If the auditing takes place at the application layer, it is far too easy to get around it.
how to find NOT NULL constraints quickly
Jianhui, April 25, 2005 - 2:30 pm UTC
Hi Tom,
The xxx_constraints.search_condition column is of LONG type; how can I find all NOT NULL constraints quickly?
Thanks,
April 25, 2005 - 2:41 pm UTC
select table_name, column_name from user_tab_columns where nullable = 'N';
will report back all not null columns, does that do it for you?
OK
Kumar, May 11, 2005 - 1:18 pm UTC
Hi Tom,
What are the differences between a table-level constraint
and a column-level constraint?
Do all integrity constraints support both
levels of creation?
May 11, 2005 - 7:18 pm UTC
table level constraints are allowed to reference multiple columns, whereas a column level constraint may only reference the column it is defined on.
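for example:

create table t
( x int constraint x_positive check ( x > 0 ),  -- column level, sees only X
  y int,
  constraint x_less_than_y check ( x < y )      -- table level, may span columns
);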
OK
A reader, May 12, 2005 - 1:36 pm UTC
Hi Tom,
Thanks for your reply.
It seems a column-level constraint can reference multiple columns,
as shown below.
SQL> create table a( x int primary key)
2 /
Table created.
SQL> create table b( x int primary key)
2 /
Table created.
SQL> create table c(x references a(x) references b(x))
2 /
Table created.
SQL> select constraint_name,constraint_type from user_constraints
2 where table_name = upper('c')
3 /
CONSTRAINT_NAME C
------------------------------ -
SYS_C003198 R
SYS_C003199 R
Am I missing anything here, or have I misunderstood your point?
Please clarify this.
Bye!
May 12, 2005 - 1:43 pm UTC
that constraint is on C.X, a single column in table C.
Those constraints cannot constrain other columns in table C. Given:
create table c ( x int, y int );
try to make a column-level constraint that makes (x,y) a foreign key -- you cannot.
Dynamic Exception Handling
A reader, September 28, 2005 - 12:48 pm UTC
Hi Tom,
From the code below, how could I retrieve the table name, column name, and value inserted when handling an exception like a unique constraint violation (and other constraint violations as well, like referential integrity etc.)? From the second insert statement below, I would like to be able to report to the user something like: Data value "myname" entered is not unique - error on table "DEMO1", column "FIRSTNAME". Is there any implicit exception cursor from which we can retrieve all this information? Cheers.
CREATE TABLE demo1 (demo_id number not null, firstname varchar2(20) not null, lastname varchar2(10) not null);
ALTER TABLE demo1 ADD PRIMARY KEY (demo_id);
alter table demo1 add unique(firstname);
SQL> insert into demo1 (demo_id,firstname,lastname) values(1 ,'myname' ,'mylastname');
1 row created.
SQL> insert into demo1 (demo_id,firstname,lastname) values(1 ,'myname' ,'mylastname');
insert into demo1 (demo_id,firstname,lastname) values(1 ,'myname' ,'mylastname')
*
ERROR at line 1:
ORA-00001: unique constraint (UWSDB.SYS_C007574) violated
September 28, 2005 - 1:39 pm UTC
no, you would have to catch the error message into a string, and parse it to extract the bits you wanted.
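a sketch of that approach (note the offending *value* is not in the message at all -- you already have it in your bind variables, so report it from there):

declare
    l_name   varchar2(61);
    l_table  varchar2(30);
    l_cols   varchar2(4000);
begin
    insert into demo1 ( demo_id, firstname, lastname )
    values ( 1, 'myname', 'mylastname' );
exception
    when dup_val_on_index then
        -- SQLERRM looks like: ORA-00001: unique constraint (OWNER.NAME) violated
        l_name := substr( sqlerrm,
                          instr( sqlerrm, '.' ) + 1,
                          instr( sqlerrm, ')' ) - instr( sqlerrm, '.' ) - 1 );
        for x in ( select table_name, column_name
                     from user_cons_columns
                    where constraint_name = l_name )
        loop
            l_table := x.table_name;
            l_cols  := l_cols || x.column_name || ' ';
        end loop;
        raise_application_error( -20001,
           'value entered is not unique -- table ' || l_table ||
           ', column(s) ' || l_cols );
end;
/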
Handling Multiple Exception in one go
A reader, September 29, 2005 - 9:22 am UTC
Hi Tom,
From the example above, if a unique constraint is defined on each column (e.g. firstname, lastname, or any additional columns), and there is a unique violation error on both columns, would I be able to customize my error message to say "both firstname and lastname must be unique"? I am using Oracle 9i, and currently it lets you fix the errors one at a time: only after you correct the firstname entry do you see that a unique violation also exists for the lastname column. Would there be any way I could get all the offending errors at one time and present them to the user? Cheers.
September 30, 2005 - 7:47 am UTC
There is no way to get all offending errors. Once a constraint is violated - bamm, it is kicked out of the system.
Multiple exception handling in one go
A reader, September 30, 2005 - 12:21 pm UTC
Hi Tom,
So what's the most elegant and efficient way to handle this situation? I don't want the user to correct the first entry error on the page only to find out that there are several other entries which need correction as well. It would be nice if all errors were presented at once, so the user could correct everything before submission.
September 30, 2005 - 1:59 pm UTC
unless you "do it yourself", there isn't one. Constraints are validated in some order and when the first one fails, that is it.
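a sketch of the "do it yourself" approach -- run the checks as queries first and collect every failure into one message (keep the constraints in place though: two sessions can pass these pre-checks at the same instant, so the constraint remains the final arbiter):

declare
    l_first  demo1.firstname%type := 'myname';
    l_last   demo1.lastname%type  := 'mylastname';
    l_errors varchar2(4000);
    l_cnt    pls_integer;
begin
    select count(*) into l_cnt from demo1 where firstname = l_first;
    if l_cnt > 0 then l_errors := l_errors || 'firstname already in use. '; end if;

    select count(*) into l_cnt from demo1 where lastname = l_last;
    if l_cnt > 0 then l_errors := l_errors || 'lastname already in use. '; end if;

    if l_errors is not null then
        raise_application_error( -20001, l_errors );
    end if;

    insert into demo1 ( demo_id, firstname, lastname )
    values ( 1, l_first, l_last );
end;
/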
A reader, September 30, 2005 - 2:05 pm UTC
Null values are treated as unique for a unique constraint.
Why are they not unique for a unique index?
September 30, 2005 - 2:34 pm UTC
I don't understand?
ops$tkyte@ORA10GR2> create table t ( x int unique );
Table created.
ops$tkyte@ORA10GR2> insert into t values ( NULL );
1 row created.
ops$tkyte@ORA10GR2> insert into t values ( NULL );
1 row created.
ops$tkyte@ORA10GR2>
A reader, September 30, 2005 - 3:08 pm UTC
ALTER TABLE "OE"."ORDER_ITEMS"
ADD (CONSTRAINT "TEST" UNIQUE("ITEM_ID", "INVENTORY_ID", "OPTION1"))
1. insert into OE.ORDER_ITEMS (ITEM_ID, INVENTORY_ID, option1, option2 )values (-1,-1,null, null)
1 row created. commit;
2. insert into OE.ORDER_ITEMS (ITEM_ID, INVENTORY_ID, option1, option2 )values (-2,-1,null, null)
1 row created. commit;
drop constraint;
delete from OE.ORDER_ITEMS;
CREATE UNIQUE INDEX "OE"."UNQ_ORDER_INV_IDX"
ON "OE"."ORDER_ITEMS" ("ORDER_ID", "INVENTORY_ID",
OPTION1,OPTION2) ;
1. insert into OE.ORDER_ITEMS (ITEM_ID, INVENTORY_ID, option1, option2 )values (-1,-1,null, null)
1 row created. commit;
2. insert into OE.ORDER_ITEMS (ITEM_ID, INVENTORY_ID, option1, option2 )values (-2,-1,null, null)
ERROR at line 1:
ORA-00001: unique constraint (OE.UNQ_ORDER_INV_IDX) violated
October 01, 2005 - 8:26 pm UTC
you have three things in your constraint -- an entirely NULL key is "unique" from every other entirely NULL key, but if the key has some non-null values, then two keys with the same non-null values (and NULLs in the same positions) are considered duplicates.
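a quick sketch of that rule:

create table t ( a int, b int );
create unique index t_u on t(a,b);

insert into t values ( null, null );   -- succeeds
insert into t values ( null, null );   -- succeeds again: entirely null keys are not even in the index
insert into t values ( 1, null );      -- succeeds
insert into t values ( 1, null );      -- fails with ORA-00001: (1, null) matches (1, null)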
Primary Key Vs Unique Key
karthick, October 25, 2005 - 8:32 am UTC
I asked a question: "What is the difference between a primary key and a unique key?"
Most of the web sites gave me an answer which, in one form or another, looked like this:
"In a primary key you cannot have null values; in a unique key you can have null values."
I need clarification here.
If I declare a column as a unique key and add a not null constraint on the column, can I then say that it is a primary key?
If no, then what is the answer to the question "What is the difference between a primary key and a unique key?"
Next:
I have a parent table P with column p1 and a child table C with column c1
I declare c1 as a foreign key to p1.
Case 1: there is no constraint on p1.
Case 2: there is a primary key on p1.
Case 3: there is a unique key and NOT NULL on p1.
Is there going to be any change in the way the referential integrity is achieved in the three cases, or are all three the same?
Correct me if my basic knowledge about constraints is wrong.
October 26, 2005 - 7:11 am UTC
... If I declare a column as a unique key and add a not null constraint on the column, can I then say that it is a primary key ...
No, it is a unique constraint on a column that happens to have a not null constraint as well.
Only a primary key is a primary key.
A primary key is a single constraint that enforces both
a) uniqueness
b) not null
for all columns in the constraint.
case 1 - cannot exist, there must be a primary key or unique constraint on p1.
case 2 and case 3 will allow you to implement the referential integrity constraint from C to P, in case 3 - the "not null" aspect is not relevant to the problem, it need not be not null, that is your choice.
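case 1 is easy to demonstrate (a sketch):

create table p ( p1 number );
create table c ( c1 number references p(p1) );
-- fails with ORA-02270: no matching unique or primary key for this column-list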
constraints
Anil, November 04, 2005 - 6:53 pm UTC
Hi tom
Is a foreign key constraint necessary? Meaning, what if the data integrity is maintained at the application server?
In that case, what would be the performance impact on a join of the two tables
EMP and EMP_DETAIL,
where empno is the PK of both EMP and EMP_DETAIL,
and the query is:
select * from emp, emp_detail where emp.empno = emp_detail.empno;
I just want to know the performance of the query with/without the foreign key, if there is any difference.
Thanks a LOT
Anil
November 05, 2005 - 4:19 am UTC
You cannot maintain a foreign key integrity constraint in the middle tier without lots of locking -- so, show us how this middle tier accomplishes this magic without locking tables or using select for update. Show us the code and I'll describe your bug to you.
Lose the RI check in the middle tier; put it in the database where it belongs. We can do it without locking -- you cannot (and I'll BET you are doing it wrong as it is).
It'll be much faster to have integrity checks performed in the database (I write about this at length in my books Effective Oracle by Design and Expert Oracle Database Architecture).
A reader, November 05, 2005 - 10:10 am UTC
One of the databases I support has a third-party product with the same design (no constraints on the database side), and it has been a nightmare maintaining the database for such an application: as Tom mentioned, lots and lots of locks, performance stinks, and data integrity is compromised. Finally, we will be getting rid of the product soon.
HTH.
Rahul M.
View to find whether a constraint is at column level or table level
Anurag, November 09, 2005 - 4:31 am UTC
Dear Tom,
Can you provide a query to find out whether a particular constraint was created at table level or column level?
November 10, 2005 - 5:11 pm UTC
doesn't really matter. they can all be done at the table level.
we just put them at the end ourselves:
ops$tkyte@ORA10GR2> drop table t;
Table dropped.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table t
2 ( x int check (x < 5),
3 y int,
4 check ( y < 10 )
5 )
6 /
Table created.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "OPS$TKYTE"."T"
( "X" NUMBER(*,0),
"Y" NUMBER(*,0),
CHECK (x < 5) ENABLE,
CHECK ( y < 10 ) ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"
we don't store the original DDL.
A reader, November 22, 2005 - 5:31 pm UTC
Tom,
What does the "?" mean in constraint_type?
Thanks.
SQL>select constraint_name,CONSTRAINT_TYPE,table_name
2 from dba_constraints where table_name='XML_DOCUMENT';
CONSTRAINT_NAME C TABLE_NAME
------------------------------ - ------------------------------
NN_XMLDOC_ID C XML_DOCUMENT
SYS_C009893 ? XML_DOCUMENT
2 rows selected.
SQL>desc XML_DOC
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
ID NOT NULL NUMBER
XML BLOB
TSTAMP TIMESTAMP(6) WITH TIME ZONE
SQL> select dbms_metadata.get_ddl('TABLE','XML_DOC','SCOTT') FROM DUAL;
DBMS_METADATA.GET_DDL('TABLE','XML_DOC','SCOTT')
--------------------------------------------------------------------------------
CREATE TABLE "SCOT"."XML_DOC"
( "ID" NUMBER CONSTRAINT "NN_XMLDOC_ID" NOT NULL ENABLE,
"XML" BLOB,
"TSTAMP" TIMESTAMP (6) WITH TIME ZONE,
SUPPLEMENTAL LOG DATA (ALL) COLUMNS
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"
LOB ("XML") STORE AS "XML_DOC_SEG"(
TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
NOCACHE LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
November 23, 2005 - 9:16 am UTC
means "unknown/internal"
In this case, associated with supplemental logging
ops$tkyte@ORA10GR2> CREATE TABLE "XML_DOC"
2 ( x int,
3 SUPPLEMENTAL LOG DATA (ALL) COLUMNS
4 )
5 /
Table created.
ops$tkyte@ORA10GR2> select constraint_name,CONSTRAINT_TYPE,table_name from dba_constraints where table_name='XML_DOC';
CONSTRAINT_NAME C TABLE_NAME
------------------------------ - ------------------------------
SYS_C007136 ? XML_DOC
A reader, November 22, 2005 - 5:31 pm UTC
Oops - forgot to mention that's 10g R2 on Red Hat AS 3.0.
A reader, November 22, 2005 - 5:33 pm UTC
XML_DOC and XML_DOCUMENT are the same table.
A reader, November 23, 2005 - 11:25 am UTC
Thanks Tom.
How can I enforce a foreign key without a primary key on the other table
Bhavesh Ghodasara, December 29, 2005 - 9:00 am UTC
hi tom,
Say I have a table T,
which has columns a & b - neither is the P.K.
Now a second table T1,
which has columns c & d. I want to check that the value of c
already exists in column a of T,
just like a foreign key, but I don't have a as a primary key.
And when I write
check c in (select a from t);
an error arises that I can't use a subquery within a check constraint.
So how can I do it?
thanks in advance.
Bhavesh..
December 29, 2005 - 12:04 pm UTC
You cannot, you need a primary key OR unique constraint to have a proper foreign key.
If the thing in the other table is unique, it should have a unique constraint and then you can easily add the foreign key.
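A minimal sketch of that approach (the names here are invented for the example):

create table t  ( a int, b int );
alter table t add constraint t_a_unique unique (a);

create table t1 ( c int, d int );
alter table t1 add constraint t1_c_fk
      foreign key (c) references t(a);

Once a is covered by the unique constraint, the foreign key on t1.c is an ordinary declarative constraint - no check constraint with a subquery needed.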
Why Indexes are created on PK's automatically
A reader, January 05, 2006 - 2:36 pm UTC
Tom, I know the usage of an index and the usage of a PK, but I might be missing a valid reason behind the question below.
Why does Oracle create an index automatically when a PK is created?
Is it because the PK is used in joins...?
January 05, 2006 - 2:43 pm UTC
because we use the index to verify the key values are in fact unique. Better than a full scan each time.
Woh !!!
A reader, January 05, 2006 - 2:47 pm UTC
excellent.
Worth getting up in the night and putting this question to you, as the answer is the most convincing one.
Was peeping in the meantime into Expert One-on-One and Effective Oracle by Design.
Thanks a ton.
Invalid relational operator with Check Constraint
A reader, January 20, 2006 - 2:47 pm UTC
Tom, I am trying to alter a table column with a check constraint.
eg: the Emp table has a salary column.
I am trying the following:
Alter table emp modify(salary integer check(replace(sal,-1,null)))
and I am getting an "invalid relational operator" error.
Basically, I want to change the value from -1 to a null value before saving it in the table [for some application requirement].
How do I do this?
Thanks,
January 20, 2006 - 2:53 pm UTC
you cannot use a check constraint to "change a value"
You can use a trigger, but I would strongly encourage you to RETHINK this "requirement".
I hate it when the data I explicitly insert disappears or changes.
this would be something you would do in a before insert or update for each row trigger and have
if :new.sal = -1 then :new.sal := null; end if;
but really - honestly, from the bottom of my heart - I strongly encourage you to rethink implementing "magic" like this. It just confuses the heck out of the rest of the world and will cause many people in the future to just pull out their hair in frustration and confusion.
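For completeness, a sketch of what that (discouraged) trigger would look like in full - the column and trigger names are assumptions from the question:

create or replace trigger emp_sal_biu
before insert or update of sal on emp
for each row
begin
    -- the application sends -1 to mean "no value"; store NULL instead
    if :new.sal = -1 then
       :new.sal := null;
    end if;
end;
/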
Thanks Tom
A reader, January 20, 2006 - 3:17 pm UTC
Thanks for your prompt response Tom. I have to do this because the Business Objects from the .NET application we are using cannot send a null value for an integer and can only pass a -1 [at least, that's what our front end developers say]. So while saving data, a -1 means nothing, nada - do not insert this value, but insert the other values they pass in.
So, I thought of doing this from the insert/save functions we already have, but since there are too many to rework, I thought if I could do this by using a constraint, I could generate some kind of generalised 'alter table' code and execute it in the database, and that would take care of these things in future.
January 20, 2006 - 4:31 pm UTC
you'll have to use a trigger (and I don't like it, a tool that doesn't support NULL and calls itself a relational database tool?!?!?!)
A reader, January 20, 2006 - 3:24 pm UTC
"I have to do this because, Business objects
from the .NET Application we are using, cannot send a null value for integer and
they can only pass a -1 [atleast, that's what our front end developers say]. So
while saving data , a -1 means nothing, nada,do not insert this value but insert
other values they pass in."
Hmm... I know you can do this in Java. I don't see why it would not be possible in C-Sharp, too.
http://www.csharphelp.com/archives/archive100.html
.NET and null
Mark A. Williams, January 20, 2006 - 6:19 pm UTC
Not knowing the specifics of the code... but you would use "DBNull.Value" to insert a null using a parameter (aka bind variable) in the .NET language of your choice.
- Mark
January 20, 2006 - 6:32 pm UTC
I believe they are alluding to business objects in this particular case...
we are using business objects
A reader, January 21, 2006 - 7:00 am UTC
Yes Tom, we are using Business Objects and they think it's best to change the value from -1 to null before insert.
Passing DBNull might break some other functionality [maybe].
January 21, 2006 - 10:55 am UTC
no, it would not. It just makes the code more accurate, makes the code do what it is supposed to do, makes the code more maintainable, makes the code more understandable.
Using -1 is the opposite of all of those good things and there is simply nothing good to be said about using it.
null value
Mark A. Williams, January 21, 2006 - 12:00 pm UTC
> I believe they are alluding to business objects in this particular case...
My comment was more for the person above me who said "I don't see why it would not be possible in C-Sharp, too." I didn't make that especially clear, however. I was mostly pointing out how you would do it in C# or whatever .NET language one wanted to use.
I do not know what is meant by "business objects"... I wonder if it is www.businessobjects.com? Not that it matters really, but if it was just some in-house created "business objects" then they should be able to adapt them.
- Mark
Constraints
DBA, February 06, 2006 - 10:27 am UTC
Tom,
is it advisable to have a primary key constraint on every table in the database? If yes, Why? and if no, what can be the downfall?
Thank you!
February 07, 2006 - 12:54 am UTC
is it advisable? sure - absolutely.
what is the downfall if you don't? well, you have tables without primary keys.
are there times you might not - sure (never say always, never say never, I always say). Consider - my audit trail table, I have no primary key on it. In fact, it has no indexes on it whatsoever.
In general, tables will have primary keys, it would be the exception not to have one.
OK
Kumar, February 07, 2006 - 7:43 am UTC
Hi Tom,
Is there any constraint_type with a value of "?" ??
February 08, 2006 - 12:57 am UTC
the view definition permits it:
....
select ou.name, oc.name,
decode(c.type#, 1, 'C', 2, 'P', 3, 'U',
4, 'R', 5, 'V', 6, 'O', 7,'C', '?'),
o.name, c.condition, ru.name, rc.name,
...
and I have some in my 10gr2 database on some queue tables - so "yes".
Primary key
DBA, February 08, 2006 - 1:55 pm UTC
Tom,
is it advisable to have a primary key constraint on every table in the database?
If yes, Why? and if no, what can be the downfall?
Thank you!
Followup:
is it advisable? sure - absolutely.
what is the downfall if you don't? well, you have tables without primary keys.
are there times you might not - sure (never say always, never say never, I always say). Consider - my audit trail table, I have no primary key on it. In fact, it has no indexes on it whatsoever.
In general, tables will have primary keys, it would be the exception not to have one.
Hi Tom,
Based on your response, I am going to ask you: would you put a primary key even on tables having a referential (foreign) key to another table? Would you put a primary key on a table with fewer than 50 rows or so that is not frequently accessed by queries?
Thanks,
February 09, 2006 - 4:51 am UTC
the existence or lack thereof of other constraints on a table would have no bearing on my decision to have a primary key.
Almost every single table you ever create will have a primary key. it will be the RARE exception (like an audit trail table - maybe, some of them do have primary keys) that will not have them.
PK
alex, February 09, 2006 - 6:57 am UTC
Hi Tom,
I have following table
Table [col1 number(8),
col2 varchar2(1),
col3 varchar2(3),
col4 date,
col5 varchar2(10),
col6 number(6) ,
.
.]
col1, col2 & col3 are marked as the PK. What should be the sequence or ordering of these columns in the PK?
should it be col1, col2, col3
or col3, col2, col1
data in these three columns will be something like.
col1 - col2 - col3
20050101 - T - 410
possible values for col2 are - T, A, S
possible values for col3 are around 90 (1 to 90)
possible values for col1 could be 70% unique.
February 10, 2006 - 11:04 am UTC
let us make this a more concrete example. Say you copy all_objects and for your purposes - the object_type plus object_name plus owner is unique - the primary key.
(I know - IT IS NOT unique, however, just presume that it was for this discussion).
Now, in your system, you frequently ask the following three questions:
o show me the EMP TABLE owned by SCOTT
o show me all of SCOTT's TABLEs
o show me all of SCOTT's stuff
Then the only logical primary key ordering is likely:
(owner,object_type,object_name)
The selectivity (how many distinct versus how many values) of a column doesn't come into play when you are deciding the order of these things.
HOW YOU QUERY, HOW YOU RETRIEVE the data does.
so, look to your data retrieval, lay out the types of questions you will pose against this primary key.
If it is always "where col1=:a and col2=:b and col3=:c" the order is going to generally be "not relevant", unless you wanted to compress the index - then of course the order would be col2,col3,col1 - to get the maximum compression possible.
if you say "where col2=:x and col3=:y and col1>:z"
then col2 and col3 should come first (in any order) and col1 last - since you equality search on col2 and col3 but range scan on col1..
it is all about how you ASK the questions of this data.
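A sketch of that compression point, using the hypothetical col1/col2/col3 table from the question - leading with the low-cardinality columns lets index key compression factor out the repeating prefix:

alter table t add constraint t_pk primary key ( col2, col3, col1 )
  using index
  ( create unique index t_pk on t ( col2, col3, col1 ) compress 2 );

With ~3 distinct values in col2 and ~90 in col3, the (col2,col3) prefix repeats heavily and compresses well; col1, the most selective column, goes last.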
Intersection of ranges
Rustam Kafarov, February 10, 2006 - 8:56 am UTC
Hi Tom
Is there any method to check for unique ranges of two values? For example, table t1 has columns startValue INTEGER, endValue INTEGER. The range startValue..endValue should be unique for each row, i.e. for any two rows the condition [startValue1..endValue1] * [startValue2..endValue2] = [] should be TRUE, where xxx1 is the value for row 1 and xxx2 the value for row 2 (the intersection of the two sets should be the empty set).
Example:
startValue endValue
row1: 10 20 - exists;
user tries to add:
row2: 15 25 - should fail: [10,20]*[15,25]=[15,20]
row3: 5 25 - should fail: [10,20]*[5,25]=[10,20]
row4: 12 15 - should fail: [10,20]*[12,15]=[12,15]
row5: 25 30 - ok
Have you any idea how to build this constraint/trigger?
February 10, 2006 - 1:00 pm UTC
Alexander, April 06, 2006 - 9:32 am UTC
Tom,
For those of us who forget to name our constraints, and need to disable and enable them on different databases, what choices do we have? We created some stuff via ERWIN because it was easy to design the model and then have it spit out SQL. Unfortunately it didn't name any constraints, so all the dbs we install this code on have different system-created constraint names. Now we have an issue where we are updating the keys for some of these tables. (It's a one time thing, normally we'd never do this.)
I tried cascading the primary key, then selecting the constraint name from the data dictionary, but apparently you cannot:
execute immediate 'alter table t enable constraint v_constraint';
in pl/sql.
I was thinking we may have to write a script to drop all the constraints, then recreate them with names, but even to do that I think we'd still have to know the name of the old ones? I'm sure this is a common error so I was hoping to hear your suggested course of action for this mistake, thanks.
April 07, 2006 - 3:32 pm UTC
why not name them? you can use RENAME to rename the existing constraints....
you would:
execute immediate 'alter table t enable constraint ' || v_constraint;
Alexander, April 06, 2006 - 11:39 am UTC
Tom,
You can ignore the above post. I found a way to do this using the data dictionary. Have fun in Canada.
Where to Validate Data
Lisa, April 17, 2006 - 10:07 am UTC
Hello,
I am a little confused about the best place to do certain types of data validation. In your book "Effective Oracle by Design" on p. 23 you recommend dumping the consultant who maintains that validation should be done through front end coding. I would have thought that if you were validating something like an SSN or phone number (not something that is a primary key or foreign key), it would be better to do the validation of the format, etc. in the JSP application (in our case, as that is what we use) without the trip to the database. Could you please elaborate on this, as it seems so counter-intuitive to make the round-trip to the database for something like this? Thank you.
April 17, 2006 - 11:04 am UTC
you need to just read a few paragraphs further in that chapter (chapter 7, effective schema design) where I write further:
<quote src=effective oracle by design>
Does That Mean That the Client Should Not Do Integrity Checking Itself?
It can be very useful for the client to do integrity checking for a variety of reasons. The important thing to remember, however, is that it must ultimately still be performed in the database. Client-side integrity is not a substitute for server-side integrity; it is a complementary addition. Here are some of the salient reasons for client-side integrity:
Better end-user experience: Users can discover as they are typing that the data they entered doesn't stand a chance of being entered into the database. They do not need to wait until they select Save.
Reduced resource usage on the server: By preempting bad data on the client, you do not make the server perform work that will ultimately need to be undone.
But, just as there are pros to most everything, there are cons as well. The major disadvantage with client-side integrity is that you now have two places where the rules are, and if they change over time, you must ensure they are changed in both locations. It would be frustrating for a client to not be able to input information the database should accept because the application is working from old rules. It would be equally as frustrating to work on a set of data and feel confident that it is valid, only to discover as you choose Save that the database will reject it. Good configuration management and software engineering principles will reduce the chances of having this happen.
</quote>
So yes - sure - validate something is in the proper and correct format for being an SSN or PHONE-NUMBER in the java script - absolutely, but DO NOT SKIP DOING IT IN THE DATABASE - and realize that if what constitutes a correct and proper phone number over time changes - you have two places you must maintain that logic
You have the place it MUST BE (the database)
And the place you may optionally also include it (the application) - since you will likely have to have that code in more than one place - the database always wins however.
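As a concrete example of the "place it MUST BE" part - a sketch of a server-side format check using 10g's REGEXP_LIKE (the column name and the exact pattern are assumptions for illustration):

alter table person add constraint ssn_format_chk
  check ( regexp_like( ssn, '^[0-9]{3}-[0-9]{2}-[0-9]{4}$' ) );

The JSP can (and should) apply the same pattern for a better user experience, but the database version is the one that cannot be bypassed.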
Where to Validate Data
Lisa, April 17, 2006 - 11:10 am UTC
That was great! Thanks a lot!
Export / Import and Constraints
A reader, July 11, 2006 - 6:00 am UTC
Tom,
I am moving a very important application to other hardware by exp/imp of db schemas. I used the same exp.par and imp.par files. I was surprised that at the end of the import logfile I saw "...about to enable constraints..." for some schemas, but in some cases it was simply "...Import terminated successfully without warnings..." and nothing about enabling constraints.
As I checked dba_constraints, all constraints were in place with status enabled.
Could you please explain this - it is very important data that was imported. I exported data using direct path in all cases.
Thanks
July 11, 2006 - 7:53 pm UTC
totally insufficient data here to actually make any sort of comment.
explain WHAT exactly????
sounds like everything worked.
Can I define a constraint for such a condition
parag j patankar, July 12, 2006 - 1:44 am UTC
Hi Tom,
I have the following details:
drop table t;
create table t ( a varchar2(255));
insert into t values ('parag patankar');
insert into t values ('thomas kyte');
commit;
Now if I try to insert substr of existing strings in a table it should not allow to insert for e.g.
If i try to insert 'arag' (substr of parag), or 'tank' (substr of patankar) or 'omas' (substr of thomas ) or 'yt' ( substr of kyte ) then it should get aborted.
Is it possible to put such a business rule thru constraints ? If yes kindly guide me.
thanks & regards
pjp
July 12, 2006 - 4:15 pm UTC
how big will this table be (this ain't going to be "fast")
what drives a requirement like this? (I am curious)
Re: Export / Import and Constraints
A reader, July 12, 2006 - 4:33 am UTC
Okay, I do exp/imp schema by schema. I use the same imp/exp parameter files. For some schemas I see:
. . importing table "T1" 3 rows imported
. . importing table "T2" 0 rows imported
. . importing table "T3" 0 rows imported
About to enable constraints...
Import terminated successfully without warnings.
and for other schemas:
. . importing table "T1" 333 rows imported
. . importing table "T2" 0 rows imported
. . importing table "T3" 0 rows imported
Import terminated successfully without warnings.
My question is: why did the import tool enable constraints in the first case and not in the second - it did not say this explicitly in the imp log file? It is the same parameter files for both exp and imp, the same target and source database, and both schemas have a lot of constraints in them - the constraints actually ARE in the schema after import and are enabled.
Thanks in advance
July 12, 2006 - 4:40 pm UTC
got example? I don't need data, just table creates - as FEW as humanly possible (eg: should probably take one table in each)
versions are always good too
(and the fact that the constraints are there... well... ? they are there...)
constraints thru table
Parag J Patankar, July 13, 2006 - 4:44 am UTC
Hi Tom,
regarding your response to my question in this thread
"
how big will this table be (this ain't going to be "fast")
what drives a requirement like this? (I am curious)
"
1. The table is not expected to be big; it will have around a few hundred rows.
2. This table we are using for mailing list details for external clients. So we do not want users to input wrong names of files (in our system many outputs have very similar names) which will be sent to customers.
3. I tried to put a check constraint using INSTR but was unsuccessful.
thanks & regards
pjp
July 13, 2006 - 7:55 am UTC
2) this constraint doesn't seem to do anything useful then.
file "smart"
file "art"
so what if art is in smart and what if art preceded smart?
have you thought this all of the way out?
constraints
Parag Jayant Patankar, July 13, 2006 - 9:53 am UTC
Hi Tom,
Thanks !!!! I agree with your point. It will not be the solution in this case. Just for my education, is there a way to put such a business rule through constraints? I am sure you must have one. Will you please tell me?
thanks & regards
pjp
July 13, 2006 - 12:52 pm UTC
not a simple declarative constraint, no.
a materialized view with a constraint - perhaps.
Question from my Colleague
Arindam Mukherjee, July 31, 2006 - 11:50 am UTC
Respected Mr. Tom,
Today I was asked a question and frankly speaking I was quite dumb. I also could not find the logic in the Oracle documentation, but Oracle says it happens. Please see the demonstration below.
create table arindam
(x number,
y number,
z number,
constraint arin_uk unique(x)
)
/
SQL> insert into arindam values (1,1,1)
2 /
1 row created.
SQL> insert into arindam values (null,1,1)
2 /
1 row created.
SQL> insert into arindam values (null,1,1)
2 /
1 row created.
That means the unique constraint here allows two null values. OK?? Now see the same thing again.
drop table arindam
/
create table arindam
(x number,
y number,
z number,
constraint arin_uk unique(x,y)
)
/
Please note, here I use COMPOSITE UNIQUE KEY.
SQL> insert into arindam values (1,1,1)
2 /
1 row created.
SQL> insert into arindam values (null,1,1)
2 /
1 row created.
SQL> insert into arindam values (null,1,1)
2 /
insert into arindam values (null,1,1)
*
ERROR at line 1:
ORA-00001: unique constraint (SAPH_LOADER.ARIN_UK) violated
The question was - why does a composite unique key not allow two such nulls, where a single-column unique constraint allows all nulls?
I am really sorry to ask you a silly question, but I am unable to answer my colleague.
July 31, 2006 - 12:32 pm UTC
ANSI said it should work this way.
An entirely NULL key (of 1 or more attributes) is not the same as some other entirely NULL key. But in your composite case the key (null,1,1) is not entirely null - the nulls within it compare as equal, so the two keys are duplicates.
It is the way it is defined to us to be.
Toon Koppelaars, July 31, 2006 - 3:55 pm UTC
Please do not use NULL's.
Most certainly *not* in columns that are involved in data integrity constraints (declarative or not).
They will make bad things happen.
Change your data model.
July 31, 2006 - 8:56 pm UTC
eh? PLEASE DO USE NULL'S SO YOU DO NOT SCREW UP THE OPTIMIZER AND MAKE IT JUST DO THE WRONG THING.
Tell me about bad things - give a for example, and explain how it is not a mistake on the part of the implementor.
do not change your data model.
Using a dummy value, ugh, double ugh.
Peter, August 01, 2006 - 12:23 pm UTC
Hello Tom,
DROP TABLE T1
DROP TABLE T2
DROP TABLE T3
DROP TABLE T4
CREATE TABLE T1 (a INT NOT NULL)
CREATE TABLE T2 (a INT);
ALTER TABLE T2 ADD (CHECK (a IS NOT NULL));
CREATE TABLE T3 AS SELECT * FROM T1 WHERE 1=2
CREATE TABLE T4 AS SELECT * FROM T2 WHERE 1=2
table t3 will have the not null constraint,
table t4 will not.
Is there any difference in how the optimizer will
use the not null constraints in t1 and t2?
thanks in advance
August 01, 2006 - 7:11 pm UTC
not null is copied over when possible (not always)
check constraints, foreign keys, primary keys - they are not.
You only have one NOT NULL constraint there, the other is a check constraint.
Advice on data model and constraints
RP, August 07, 2006 - 9:25 am UTC
Hi Tom,
my app needs to collect lots of information from the user before an account can be set up for them. There are quite a lot of mandatory fields, and so I was going to enforce this at the table level.
The problem is, the users have come back and said they want to partially save the data and come back to it later. This means they may not have supplied mandatory data and I still have to save what they have entered.
Which is the better approach:
1. Remove table-level constraints and apply them via a stored procedure check once the user says they have finished?
2. Create a separate table to hold these 'incomplete' versions and then, when finished, copy across to the real table and catch the errors and report them back to the user?
There are A LOT of fields, if that makes a difference.
Cheers
August 07, 2006 - 9:31 pm UTC
#2 - absolutely
you want to move the data from "test" to "production"
You can write even a sanity check routine that presents to them a rather inclusive list of "issues" with the data - before you even move it over.
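A rough sketch of approach #2 (the names and columns here are invented for the example):

create table account_stage (        -- the "incomplete" holding table
    id     int primary key,
    name   varchar2(50),            -- everything nullable while in progress
    email  varchar2(50)
);

create table account (              -- the real table, fully constrained
    id     int primary key,
    name   varchar2(50) not null,
    email  varchar2(50) not null
);

When the user says "finished", promote the row - constraint violations come back as errors you can report:

insert into account ( id, name, email )
select id, name, email from account_stage where id = :id;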
Implementing Constraints
Hemal, November 21, 2006 - 10:01 am UTC
Hello Tom,
I need your help to implement the constraints on a table.
I am unable to determine whether I should use the primary key or unique key or is there any other way to implement the below mentioned functionality.
(I am worried about how to implement the constraints and the problems that could occur when concurrent users are using the below mentioned functionality and its corresponding screen.)
I am designing a Nursing Home Data entry application.
I need to store the various Departments in the Nursing Home.
Following is the table structure:-
DESC DEPARTMENT_MAST;
DEPT_ID NUMBER,
DEPT_NAME VARCHAR2(50) NOT NULL,
STATUS VARCHAR2(1) NOT NULL
DEPT_ID is the primary Key (Generated by Oracle Sequence).
DEPT_NAME is the name of the department.
STATUS is the status of the record.
When a new record is inserted its status will be 'A' - Active. If the user deletes the record then it must be a soft delete: set the status of the record to 'I' - Inactive instead of permanently deleting it.
Now,the user has the following specific functional requirement:-
1. The user must be able to delete and re-create a department with the same name. It must be a new record in the database with status 'A'.
2. Records with status 'A' will be displayed on the screen.
3. There can be multiple records with the same department name, but of them only one record will have status 'A'. The rest of the records will have status 'I'.
Example:-
On 1st Jan 07 they created a department named 'ICU'. The status of the record will be active, 'A'.
On 3rd Jan 07 they deleted the department 'ICU' from the screen. At the backend I will do a soft delete: I will set the status of the record to 'I'.
On 5th Jan 07 they again want to create the same department 'ICU'. This department will have a new dept_id and the status of the record will be active, 'A'.
Department Master Data entry screen is a Java Based screen having multiple rows.
User can enter as many departments as they want.
The department names are stored in a PL/SQL Object table and passed to the PL/SQL packaged procedure.
Following is the Logic coded in the PL/SQL packaged procedure:-
-->--
In a FOR LOOP :
For each department name stored in the PL/SQL object table, we check whether the department name is already present in the DEPARTMENT_MAST table with status 'A' - Active. If "yes", we raise an exception: "Duplicate Department Name".
An error message is displayed and processing is stopped.
ELSE
We insert the department name into the DEPARTMENT_MAST table.
FOR LOOP END;
COMMIT;
--<--
After the FOR LOOP I have written the "COMMIT" statement.
Therefore, the data is committed only when all the records pass the validation.
Would this logic fail in the multiuser environment ?
Would the following happen when concurrent users are using the Department Master Data Entry screen:
a) Suppose the DEPARTMENT_MAST table is having no rows in it.
b) Two users entered the same department names on their screens (about 20 department names) and pressed the SAVE button at the same time.
c) Both of them will pass the validation (since the DEPARTMENT_MAST table is blank and neither of them
has yet executed the "COMMIT" statement) and thereby insert the same data into DEPARTMENT_MAST.
Please advise me:
1. Is whatever I am thinking regarding the concurrent users scenario right or wrong?
2. Can I use any constraint on the DEPT_NAME column ?
3. What can be the best way to implement this functionality.
Please help me.
Thanks and Best Regards
-Hemal
November 22, 2006 - 3:38 pm UTC
dept_id CANNOT BE A PRIMARY KEY, it is nullable, you have done something wrong.
if your question is simply "can dept_name be unique for active records", then the answer is:
create unique index dept_name_unique on department_mast
( case when status = 'A' then dept_name else null end );
if that is not the question, please clarify.
Implementing Constraints
Hemal, November 23, 2006 - 11:20 am UTC
Hello Tom,
Thanks for your reply.
Yes, dept_id is a primary key and therefore a not null field.
Following is my question:-
Since I have not placed any constraints on dept_name, what will happen if 2 users try to simultaneously enter the same department name?
We see that I cannot keep a primary key constraint nor a unique key constraint on the dept_name column, since there will be multiple records with the same department name. Of these, only one record will have status 'A' - Active.
The rest of the records will have status 'I' - Inactive.
Therefore, before inserting a new record in the database, shall I lock the table using "LOCK TABLE department_mast IN ROW SHARE MODE NOWAIT;"?
Due to this, when 2 users try to simultaneously enter the same department name, the user who first executed this statement will hold the lock on the table. Therefore, the other user won't be able to insert the department name.
The user can enter multiple department names. Those will all be populated in an object table. A FOR LOOP will INSERT them one by one into the DEPARTMENT_MAST table. At the end of the FOR LOOP I am doing a COMMIT.
---------------------------------------------------------
---------------------------------------------------------
Following is my code of PROCEDURE add_department and FUNCTION check_duplicate_dept_name:-
I am worried whether it will fail and allow duplication of department names when concurrent users have entered the same department names and clicked on the save button present on the screen:
-- department_obj_tab: It is an object table type.
-- p_department_obj_tab : Contains the department names entered by the user.
-- p_validation_code : It will have value as 0 if all the records are successfully inserted.
-- It will have value as 1 if duplication of any Department name occurs.
-- p_validation_field_values : It will contain the department name because of which duplication occured.
PROCEDURE add_department (p_department_obj_tab IN department_obj_tab,
p_validation_code OUT NUMBER,
p_validation_field_values OUT VARCHAR2)IS
e_duplicate_dept_name EXCEPTION;
BEGIN
FOR i IN p_department_obj_tab.FIRST .. p_department_obj_tab.LAST LOOP
p_validation_code := check_duplicate_dept_name
(p_department_obj_tab (i).dept_name);
IF p_validation_code = 1 THEN -- Duplicate department name .
p_validation_field_values := p_department_obj_tab (i).dept_name;
RAISE e_duplicate_dept_name;
END IF;
INSERT INTO department_mast
(dept_id,
dept_name,
status)
VALUES (dept_id_seq.NEXTVAL,
p_department_obj_tab (i).dept_name,
'A'); -- Status is set to active
END LOOP;
p_validation_code := 0; -- Successful insertion of all records.
COMMIT;
EXCEPTION
WHEN e_duplicate_dept_name THEN
ROLLBACK;
WHEN OTHERS THEN
ROLLBACK;
RAISE;
END add_department;
==========================================
FUNCTION check_duplicate_dept_name (p_dept_name IN
department_mast.dept_name%TYPE)
RETURN NUMBER
IS
CURSOR c_department_mast (p_c_dept_name IN
department_mast.dept_name%TYPE,
p_c_status IN department_mast.status%TYPE)
IS
SELECT 1
FROM department_mast
WHERE UPPER (dept_name) = UPPER (p_c_dept_name)
AND status = p_c_status;
r_department_mast c_department_mast%ROWTYPE;
BEGIN
OPEN c_department_mast (p_dept_name,
'A');
FETCH c_department_mast INTO r_department_mast;
IF c_department_mast%FOUND THEN
CLOSE c_department_mast;
RETURN 1 ; -- Duplication error occurred.
END IF;
CLOSE c_department_mast;
RETURN 0; -- No duplication error occurred.
END check_duplicate_dept_name;
==========================================
Please help me tom.
Will the above code run perfectly when concurrent users have entered the same department names and clicked on the save button?
Thanks and Best Regards
-Hemal
November 24, 2006 - 6:29 pm UTC
no, it is not:
DESC DEPARTMENT_MAST;
DEPT_ID NUMBER,
DEPT_NAME VARCHAR2(50) NOT NULL,
STATUS VARCHAR2(1) NOT NULL
do you see a not null? i do not. i stopped reading after the first paragraph.
Implementing Constraints
Hemal, November 25, 2006 - 12:45 am UTC
Hello Tom,
I have now modified it.
Can you please advice me on the question I have mentioned on this link.
Following is the currently the table structure:-
SQL> DESC DEPARTMENT_MAST;
Name Null? Type
DEPT_ID NOT NULL NUMBER
DEPT_NAME NOT NULL VARCHAR2(50)
STATUS NOT NULL VARCHAR2(1)
DEPT_ID is a primary key.
Your help is highly appreciated.
Thanks and Best Regards
-Hemal
November 25, 2006 - 9:16 am UTC
phrase your actual question much more tersely, what is your goal - simply stated.
because I believe I already answered your question two iterations ago. If not, you need to explain it better.
You need to write NO CODE at all to enforce "dept_name must be unique when status = 'A'"
I gave you the simple, single DDL command that does this for you in the most concurrent, correct fashion possible.
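To see it at work - a quick demo against the table from the question (the index DDL is Tom's; the sample rows are invented):

create unique index dept_name_unique on department_mast
  ( case when status = 'A' then dept_name else null end );

insert into department_mast values ( 1, 'ICU', 'A' );  -- ok
insert into department_mast values ( 2, 'ICU', 'I' );  -- ok, inactive rows don't collide
insert into department_mast values ( 3, 'ICU', 'A' );  -- fails with ORA-00001

Rows with any other status map to an entirely NULL key and so never enter the index; only the active rows compete for uniqueness.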
Implementing Constraints - Nursing Home Review
Tim, November 25, 2006 - 3:15 pm UTC
I agree with Tom - the post is quite confusing. It mixes assumptions, questions, and solutions all together.
In my opinion, the place Hemal is going wrong is by moving forward with an incorrect solution and wanting to know how to correct the mistake - but not wanting to change the incorrect solution.
In fact - the proper approach would be to listen to Tom and change the approach. Use the correct solution.
So - where Hemal says -
"Since I have not placed any constraints on the dept_name ,What will happen if 2 users try to simultaneously enter the same department name."
My suggestion would be - Hemal - do NOT take your initial approach to be cast in stone. Rather - PLACE the constraint on the dept_name as Tom suggests.
Now - the question of two users hitting the save button at the same time... My limited understanding of this is that the database server will always have one of the users coming before the other - they will NEVER occur at precisely the same time. (If I am wrong on this - I may as well throw my whole application away on Monday!!!) So - the database will (assuming you use the unique constraint as Tom suggests) allow one user to succeed and will have the other user get an exception returned. Completely as you should be wanting to happen.
With regard to your point about "we check whether the department name is already present"... From other postings Tom has replied to - I would agree with Tom and suggest you do the same - which is to NOT bother checking to see if the row already exists. It is a futile check (because it might be there immediately after you check). Just do the insert. Have your client application check for the "DUP_VAL_ON_INDEX" exception which may possibly be raised - and give the user whatever nice and friendly message you want when this occurs.
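A sketch of that "just insert and catch DUP_VAL_ON_INDEX" pattern (the procedure name and error text here are invented):

create or replace procedure add_dept( p_dept_name in varchar2 )
as
begin
    insert into department_mast ( dept_id, dept_name, status )
    values ( dept_id_seq.nextval, p_dept_name, 'A' );
exception
    when dup_val_on_index then
       raise_application_error( -20001,
          'Department ' || p_dept_name || ' already exists and is active' );
end;
/

No pre-check, no race window: the unique index arbitrates between the two concurrent users.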
Implementing Constraints
Hemal, November 26, 2006 - 12:17 am UTC
Hello Tom,
You are really great.
Thanks for your kind help and time you took to answer my question.
I am extremely sorry for everything I did.
Next time I will format my questions to the point.
Best Regards
-Hemal
Implementing Constraints - Nursing Home Review
Hemal, November 26, 2006 - 12:23 am UTC
Hello Tim ( from PA, USA),
Thanks for the suggestion.
I highly appreciate it.
Best Regards
-Hemal
Implementing Constraint on Combination of Columns
Hemal, November 28, 2006 - 9:14 am UTC
Hello Tom,
Based on the explanation you provided on this link for "Implementing Constraints", I am using function based indexes. I am implementing these for the first time.
I want to know whether I am doing it correctly in the cases where, for a particular status of the record, a combination of columns of different datatypes must be unique, OR a combination of two VARCHAR2 columns must be unique.
Please advise whether the method of concatenation is right - I have used '-' for concatenation. Will it create any problems with various sets of data, especially the VARCHAR2 type of data?
Following is the Table Structure and the corresponding code:-
SQL> DESC DEPARTMENT_TRAN;
Name Null? Type
--------------------
DEPT_ID NOT NULL NUMBER
DEPT_NAME NOT NULL VARCHAR2(50)
DEPT_SUB_NAME NOT NULL VARCHAR2(15)
GROUP_ID NOT NULL NUMBER
BUDGET_CODE NOT NULL VARCHAR2(20)
MASTER_ID NOT NULL NUMBER
MASTER_DATE NOT NULL DATE
STATUS NOT NULL VARCHAR2(1)
Requirement is following:-
"If Status is 'A' then the combination of GROUP_ID and BUDGET_CODE must be Unique"
"If status is 'P' then the combination of MASTER_ID and MASTER_DATE must be Unique"
"If status is 'B' then the combination DEPT_NAME and DEPT_SUB_NAME must be Unique"
I have created the following Unique Function Based Indexes:-
-->
SQL> CREATE UNIQUE INDEX DEPARTMENT_TRAN_U1 ON DEPARTMENT_TRAN (CASE "STATUS" WHEN 'A' THEN TO_CHAR("GROUP_ID")||'-'||"BUDGET_CODE" ELSE NULL END );
Index created.
SQL>
SQL> CREATE UNIQUE INDEX DEPARTMENT_TRAN_U2 ON DEPARTMENT_TRAN (CASE "STATUS" WHEN 'P' THEN TO_CHAR("MASTER_ID")||'-'||TO_CHAR("MASTER_DATE") ELSE NULL END );
Index created.
SQL> CREATE UNIQUE INDEX DEPARTMENT_TRAN_U3 ON DEPARTMENT_TRAN (CASE "STATUS" WHEN 'B' THEN "DEPT_NAME"||'-'||"DEPT_SUB_NAME" ELSE NULL END );
Index created.
SQL>
--<--
Please advise.
Thanks and Best Regards
-Hemal
November 28, 2006 - 9:39 am UTC
why not:
create unique index i on t(
case when status in ('A','P','B')
then status
end,
case when status = 'A' then TO_CHAR("GROUP_ID")||'-'||"BUDGET_CODE"
when status = 'P' then ...
when status = 'B' then ...
end
);
you can do it in one index if you wanted.
Implementing Constraint on Combination of Columns
Hemal, November 28, 2006 - 10:22 pm UTC
Hello Tom,
Thanks for the suggestion.
I was worried about the Hyphen '-' I used for concatenation of the VARCHAR2 types of columns.
Now, according to you it is fine.
I will surely implement the solution you have provided me.
Thanks and Best Regards
-Hemal
November 30, 2006 - 8:43 am UTC
NO, not according to me - you suggested it, it was your idea. I would not necessarily do that.
I presumed you knew what you were doing and that "-" could not be in either field (making it safe)
create unique index i on t(
case when status in ('A','P','B')
then status
end,
case when status = 'A' then TO_CHAR("GROUP_ID")
when status = 'P' then ...
when status = 'B' then ...
end,
case when status = 'A' then "BUDGET_CODE"
when status = 'P' then ...
when status = 'B' then ...
end
);
since it is on two columns, make the index have 3 columns.
A reader, December 12, 2006 - 8:54 am UTC
Hi Tom,
How can I know which column in the table has which constraint, using SQL?
eg:
SQL> select constraint_type,search_condition,constraint_name
2 from user_constraints
3 where table_name='test_DEALERS';
C SEARCH_CONDITION CONSTRAINT_NAME
- -------------------------------------------------------------------------------- -----------------
U WC_DEALER_UK
P WC_DEALER_PK
R WC_DEALER_CUSTOMER_FK
C "DEALER_ID" IS NOT NULL SYS_C007591
C "CUSTOMER_ID" IS NOT NULL SYS_C007592
C "DEALER_CODE" IS NOT NULL SYS_C007593
C "COUNTRY" IS NOT NULL SYS_C007594
C "REGION" IS NOT NULL SYS_C007595
I would like to know which colums have 'unique' constraint,
which columns make 'primary' key, etc.
Thanks for your help.
December 12, 2006 - 9:03 am UTC
user_cons_columns
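For example - a sketch joining the two dictionary views:

select c.constraint_name, c.constraint_type, cc.column_name, cc.position
  from user_constraints c, user_cons_columns cc
 where c.constraint_name = cc.constraint_name
   and c.table_name = 'TEST_DEALERS'
 order by c.constraint_name, cc.position;

That lists, for each constraint, the column(s) it covers and their position within multi-column keys.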
Enabling Constraints in Partitioned tables
Paul, August 15, 2007 - 10:04 am UTC
Hi Tom,
There's a lot of useful stuff in here regarding constraints but I can't see anything that could help us with a problem we have here.
We have a data warehouse application where we store data supplied from external projects, on a daily basis, into tables partitioned by date, which we later validate for consistency. We have a series of validation rules that the data should conform to, but we have to accept all data even when it doesn't conform to the rules.
The approach that has been taken is we have a number of check constraints, relational constraints, etc in the database which are disabled during loading. When the loads have finished a validation routine enables the constraints with the exceptions into clause, the exceptions are then used to generate reports and the constraints are disabled prior to the next load.
The problem is when the constraints are enabled all of the previous data is re-validated not just the data of today. We are potentially loading hundreds of thousands of rows into each daily partitioned table, and we have 3 years worth of data online.
Is it possible to enable the constraints for only 1, or a subset of partitions? I would be grateful for any advice.
Regards,
Paul.
August 20, 2007 - 2:24 pm UTC
there is not.
but - since you are not using constraints for anything really (the optimizer cannot use these constraints - they are disabled, you do not use it to load only good data, and so on...) maybe you
a) drop the constraints entirely.
b) encode them into a stored procedure that selects out the rows that would have failed, you can send to this procedure the partition relevant information so that you only query the newly loaded data.
this presumes you use a rolling window partition scheme - so all newly loaded data goes into a single partition in general.
eg: instead of:
create table t
( x int constraint x_check check (x>0),
y int constraint y_check check (y<0),
constraint table_x_y_check check (x+y =0)
)
/
you would just have X and Y and then later:
select *
from (
select rowid rid,
case when NOT x > 0 then 'X check failed, ' end ||
case when NOT y < 0 then 'Y check failed, ' end ||
case when NOT x+y = 0 then 'X/Y check failed' end error
from t
where <partition predicate>
)
where error IS NOT NULL;
heck, you could probably just pull this out of the dictionary.... like:
ops$tkyte%ORA10GR2> create table t
2 ( dt date,
3 x int constraint check_x check (x>0) disable,
4 y int constraint check_y check (y<0) disable,
5 constraint check_x_y check(x+y=0) disable
6 )
7 PARTITION BY RANGE (dt)
8 (partition part1 values less than (to_date( '01-sep-2008', 'dd-mon-yyyy') ),
9 partition part2 values less than (to_date( '01-oct-2008', 'dd-mon-yyyy') )
10 )
11 /
Table created.
ops$tkyte%ORA10GR2> insert into t values ( sysdate,1, -1 );
1 row created.
ops$tkyte%ORA10GR2> insert into t values ( sysdate,2, -2 );
1 row created.
ops$tkyte%ORA10GR2> insert into t values ( sysdate,-1, -1 );
1 row created.
ops$tkyte%ORA10GR2> insert into t values ( sysdate,+1, +1 );
1 row created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace
2 procedure check_constraints
3 ( p_tname in varchar2,
4 p_pname in varchar2,
5 p_cur in out sys_refcursor
6 )
7 as
8 l_number number;
9 l_stmt long;
10 begin
11 begin
12 select 1
13 into l_number
14 from user_tab_partitions
15 where table_name = p_tname
16 and partition_name = p_pname;
17 exception
18 when no_data_found then
19 raise_application_error
20 ( -20000, 'Cannot find that table/partition ' || p_tname ||
21 ' partition (' || p_pname || ')' );
22 end;
23
24 l_stmt := 'select * from ( select rowid rid, '''' ';
25 for x in ( select constraint_name, search_condition
26 from user_constraints
27 where table_name = p_tname )
28 loop
29 l_stmt := l_stmt || ' || case when NOT (' || x.search_condition ||
30 ') then '' Check constraint failed: ' ||
31 x.constraint_name || ''' end';
32 end loop;
33 l_stmt := l_stmt || ' error from ' || p_tname ||
34 ' partition(' || p_pname ||
35 ') ) where error is not null';
36
37 open p_cur for l_stmt;
38 end;
39 /
Procedure created.
ops$tkyte%ORA10GR2> variable x refcursor
ops$tkyte%ORA10GR2> exec check_constraints( 'T', 'PART1', :x );
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> print x
RID
------------------
ERROR
-----------------------------------------------------------------
AAAic3AAEAAAiBkAAC
Check constraint failed: CHECK_X Check constraint failed:
CHECK_X_Y
AAAic3AAEAAAiBkAAD
Check constraint failed: CHECK_Y Check constraint failed:
CHECK_X_Y
that would do all of the constraints in a single statement too - that would be nice (one scan...)
NOT NULL AND DEFAULT VALUE
Laxman Kondal, September 06, 2007 - 3:02 pm UTC
Hi Tom,
I am not sure how to get the default value if NULL is passed as an IN parameter:
SQL> drop table x;
Table dropped.
SQL> create table x (col1 number, col2 char(1) default 'N' );
Table created.
SQL> alter table x modify (col2 not null);
Table altered.
SQL> insert into x(col1) values(1);
1 row created.
SQL> select * from x;
COL1 C
---------- -
1 N
SQL> REM IF NULL IS PASSED THEN THE CONSTRAINT IS VIOLATED
SQL>
SQL> insert into x(col1, col2) values(2, null);
insert into x(col1, col2) values(2, null)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("OPS"."X"."COL2")
SQL>
Is there any way that, if null is passed, it will pick up the default value?
Regards and thanks.
September 11, 2007 - 8:27 am UTC
umm, I'm very very confused by you.
OF COURSE if you ask us to store NULL - we will attempt to store NULL (Null is not any different than 'X' or 'Y' or anything - it is a valid value to store in that column)
I'd be very upset if I told Oracle to put null into a column and it did something DIFFERENT.
could we 'fix' this with a trigger - yes (please - begging you - demanding of you - NOT TO DO THAT)
but it would confuse the heck out of every other person on the planet, cause calls to support, and have people wasting many hours of time - as they insert NULL and watch it automagically get turned into something else.
NOT NULL AND DEFAULT VALUE
Laxman Kondal, September 11, 2007 - 10:28 am UTC
Hi Tom,
Thanks for your advice.
There is 'IS NULL' and 'IS NOT NULL', but I don't know of anything like 'some_thing = NULL'.
The only way I could think of is to make this IN parameter the last one and pass no value, or use named notation 'p_name => value' in the procedure call - but definitely not a trigger. I can understand when you say '(please - begging you - demanding of you - NOT TO DO THAT)'.
Thanks and regards.
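For what it's worth, releases after this thread - Oracle 12c onward - added a declarative answer to exactly this question: the DEFAULT ON NULL clause. A minimal sketch:

create table x ( col1 number, col2 char(1) default on null 'N' );

insert into x ( col1, col2 ) values ( 2, null );   -- stores 'N', not NULL

With DEFAULT ON NULL the column is implicitly NOT NULL, and an explicit NULL is replaced by the default - no trigger required.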
NOT NULL - Declarative or Check Constraint
Anindya, September 19, 2007 - 9:03 am UTC
Hi Tom,
Oracle documentation does not mention any preference for the same.
"http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_constraints.htm#i1006603"
However, I found two different behavior from the optimizer.
It is on 10.2.0.3.0 on Linux.
SQL> create table t (a number not null, b number, constraint c1 check (b is not null));
Table created.
SQL> select column_name, nullable from user_tab_columns where table_name='T';
COLUMN_NAME N
------------------------------ -
A N
B Y
SQL> select constraint_name, constraint_type, search_condition from user_constraints where table_name='T';
CONSTRAINT_NAME C SEARCH_CONDITION
------------------------------ - -------------------------------------------------------------------------
SYS_C0066366 C "A" IS NOT NULL
C1 C b is not null
SQL> insert into t select rownum, rownum+20 from all_objects;
37603 rows created.
SQL> insert into t select rownum, rownum+20 from all_objects;
37603 rows created.
...
SQL> select count(*) from t;
COUNT(*)
----------
601648
SQL> create index t_a_idx on t(a);
Index created.
SQL> create index t_b_idx on t(b);
Index created.
SQL> alter table t add (c number);
Table altered.
SQL> update t set c=rownum where mod(b,2)=1;
300832 rows updated.
SQL> create index t_c_idx on t(c);
Index created.
SQL> set autotrace on
SQL> select count(*) from t where b is null;
COUNT(*)
----------
0
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=352 Card=1 Bytes=13)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=352 Card=17 Bytes=221)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1464 consistent gets
0 physical reads
0 redo size
349 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select count(*) from t where a is null;
COUNT(*)
----------
0
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=0 Card=1 Bytes=13)
1 0 SORT (AGGREGATE)
2 1 FILTER
3 2 INDEX (FAST FULL SCAN) OF 'T_A_IDX' (INDEX) (Cost=310 Card=530506 Bytes=6896578)
Statistics
---------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
349 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select count(*) from t where c is null;
COUNT(*)
----------
300816
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=354 Card=1 Bytes=13)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=354 Card=258234 Bytes=3357042)
Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
1527 consistent gets
0 physical reads
0 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
It seems the optimizer does not understand the column's nature when it is not declared explicitly.
Tom, should we prefer a declarative NOT NULL over a check constraint? Or is there something wrong in the test case?
Please advise.
September 19, 2007 - 1:06 pm UTC
if you have something that is to be NOT NULL
it only seems to make sense to use the NOT NULL constraint doesn't it?
the optimizer understands "not null" means the column is not nullable - will never contain null. That allows it to use (for example) indexes in some cases where it would otherwise not be able to.
when you have a specific constraint like not null available, it doesn't seem to me to make sense to consider using something else?
Not Null constraint
Jonathan Lewis, September 20, 2007 - 12:15 am UTC
This is a special case in the optimizer.
Constraints can be combined with existing predicates to assist the optimizer (from 9i onwards) - but check constraints of the type 'is null' and 'is not null' are deliberately excluded. However, the column attribute NOT NULL is considered.
The only irritant with this is that a NOT NULL attribute is not deferrable - whereas the 'is not null' check constraint is.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Constraint and null value
Allio laurent, September 20, 2007 - 11:34 am UTC
Hi,
I have a table:
Create table test(a number, b number, c number);
I can have
- null,1,2
- null,1,2
- null,1,2
- 1,2,3
- 2,4,5
a can be null, with several rows of (null,x,y); but if a is not null, then (a,b,c) must be unique.
Is it possible to do this with a constraint, or only with a trigger?
Thanks,
September 24, 2007 - 7:25 am UTC
create unique index i
on t( case when a is not null then a end,
case when a is not null then b end,
case when a is not null then c end );
since entirely NULL keys are not indexed (a,b,c being the key here) we return A,B,C when A is not null, we return null,null,null when A is null
hence, only rows where A is not null are contributed to the unique index.
If you do this in a trigger, do not forget to add the very very important
lock table T;
you would have to serialize, the index will not force serialization at the table level.
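A quick demo of the index approach (using the table from the question; the sample rows are invented):

create unique index test_u on test (
    case when a is not null then a end,
    case when a is not null then b end,
    case when a is not null then c end );

insert into test values ( null, 1, 2 );   -- ok
insert into test values ( null, 1, 2 );   -- ok: entirely NULL key, not indexed
insert into test values ( 1, 2, 3 );      -- ok
insert into test values ( 1, 2, 3 );      -- fails with ORA-00001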
NOT NULL Constraint - missing features
Anindya, September 21, 2007 - 9:48 am UTC
Thank you Tom for the clarification.
To Jonathan...
"The only irritant with this is that a NOT NULL attribute is not deferrable - whereas the 'is not null' check constraint is."
I think there is at least one more area: I cannot use "RELY" with it. So I do not see any option that can be helpful to bypass the NOT NULL check in a DW environment. And ENABLE NOVALIDATE is no good to the optimizer.
Thanks.
Creating a constraint on a range of values in two columns
anandS, September 25, 2007 - 5:47 am UTC
Hi Tom,
Request quick help on the following.
I have a table T with 3 columns: col1, col_lowerVal, col_highVal.
Whenever an insert is done, how do we make sure that the range of values between col_lowerVal and col_highVal is unique, both ends inclusive?
i.e.
If I insert
aaa, 100, 200
then the following insert
bbb, 120, 150
should fail
because the range 100 to 200 is already occupied.
What kind of constraint should be set up here?
September 26, 2007 - 9:20 pm UTC
AnandS, September 29, 2007 - 2:51 am UTC
Thank you Tom,
As I guessed, the way out is to lock the table, check if the row already exists, and then insert.
Thank you again.
My conclusion is:
a unique update is possible with select ... for update, but for
a unique insert the only way out is to lock the table before the insert.
October 03, 2007 - 1:45 pm UTC
I don't know what you mean by comparing a 'unique update' with a 'unique insert'
the rule would be the same for any entity integrity constraint that crosses rows in a table or objects in a database - serialization at a higher level is necessary. In your case, the only possible higher level is "table", and it would be true for an UPDATE of an existing record as well as an INSERT (to update, you have to verify that you do not overlap with any existing records - committed OR NOT - so you have to serialize for the update as well)
AnandS, October 05, 2007 - 3:01 am UTC
Hi Tom,
Please let me clarify.
Unique update: I want to be sure that I will be the only person who will update the row. So the choice is to use a SELECT ... FOR UPDATE, which will effectively lock the row and prevent updates of the same row by other users.
Unique insert: I want to insert a row and make sure that the range of values between col1 and col2 (from my original question above) of that particular row does not overlap with any other existing row.
I would first check whether the value for col1 falls in the range between col1 and col2 of any existing row,
ditto with the value for col2,
and if neither does, then I insert into the table.
But in the time interval between (1) my check saying "yes, you can insert - the row you are trying to insert does not exist" and (2) the time I actually insert into the table,
someone else could insert a row which violates my precondition for inserting, viz. the range between col1 and col2 should be unique.
It looks like this is not possible using any type of constraint that spans columns, specifically one which says the range between two columns should be unique.
So the option left is to:
step 1: lock the table
step 2: make the check
step 3: if the check passes, insert the row and release the lock
Sorry if I am repeating the same thing again, but I want to be sure that our understanding of the question is the same.
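A sketch of that lock-check-insert sequence as a procedure (table and column names taken from the original question; the error code is invented):

create or replace procedure insert_range
    ( p_col1 in varchar2, p_low in number, p_high in number )
as
    l_cnt number;
begin
    lock table t in exclusive mode;            -- step 1: serialize

    select count(*)                            -- step 2: overlap check
      into l_cnt
      from t
     where col_lowerVal <= p_high
       and col_highVal  >= p_low;

    if l_cnt > 0 then
       raise_application_error( -20001,
           'range ' || p_low || '..' || p_high || ' overlaps an existing row' );
    end if;

    insert into t ( col1, col_lowerVal, col_highVal )
    values ( p_col1, p_low, p_high );          -- step 3
    -- the caller's commit (or rollback) releases the table lock
end;
/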
foreign key constraint allows null
satya, October 26, 2007 - 10:51 am UTC
Can a foreign key constraint allow nulls? If so, what is the syntax?
October 29, 2007 - 11:17 am UTC
the syntax is
[this space intentionally left blank]
- it is simply the absence of a NOT NULL constraint on the column.
ops$tkyte%ORA10GR2> create table p ( x int primary key );
Table created.
ops$tkyte%ORA10GR2> create table c ( x references p );
Table created.
ops$tkyte%ORA10GR2> insert into c values (NULL);
1 row created.
ops$tkyte%ORA10GR2> commit;
Commit complete.
OK
Rajesh, March 13, 2008 - 3:56 am UTC
Hello Tom,
Does disabling/re-enabling a primary key constraint
drop and recreate the primary key index automatically?
March 13, 2008 - 8:45 am UTC
if you want it to, it will - but it doesn't have to (see the KEEP INDEX clause)
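A small sketch of both behaviours (table and constraint names are made up):
alter table t disable constraint t_pk;              -- default: the underlying index is dropped
alter table t disable constraint t_pk keep index;   -- alternative: the index survives the disable
alter table t enable constraint t_pk;               -- can reuse the kept index, avoiding a rebuild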
data warehouse and constraints
yoav ben moha, July 09, 2008 - 8:42 am UTC
Hi ,
Using FK-PK constraints in a data warehouse may allow Oracle, for example, to rewrite a query using a materialized view.
In a 10g data warehouse, I would like to create a star schema on two existing tables that currently have child rows without parents.
In order to do that i execute the following steps:
SQL > create table my_dim
2 (
3 a number
4 );
Table created.
SQL >
SQL > create table my_fact
2 (
3 a number,
4 b number
5 );
Table created.
SQL >
SQL > insert into my_dim values (1);
1 row created.
SQL > insert into my_fact values (1,100);
1 row created.
SQL > insert into my_fact values (2,100);
1 row created.
SQL > commit;
Commit complete.
SQL > alter table my_dim ADD CONSTRAINT dim_xpk PRIMARY KEY (a) RELY enable NOVALIDATE;
Table altered.
SQL > ALTER TABLE my_fact
2 ADD CONSTRAINT my_fact_fk FOREIGN KEY (a) REFERENCES my_dim(a)
3 RELY ENABLE NOVALIDATE;
Table altered.
But inserting a new record failed:
SQL >
SQL >
SQL > insert into my_fact values (3,100);
insert into my_fact values (3,100)
*
ERROR at line 1:
ORA-02291: integrity constraint (STARQ.MY_FACT_FK) violated - parent key not found
It seems that the clause RELY ENABLE NOVALIDATE did validate the data, and found that a=3 does not exist in the parent table (my_dim).
How can I allow new records to be inserted into the FACT table without validating the data?
Thanks
July 09, 2008 - 11:10 am UTC
ENABLE did that, it enabled the constraint for all future operations.
Is it possible ?
yoav ben moha, July 09, 2008 - 2:44 pm UTC
Hi Tom,
"How can I allow new records to be inserted into the FACT table without validating the data?"
Thanks
July 09, 2008 - 3:20 pm UTC
you know, if someone said "well the constraint was enabled, it was enable that did it", I might look at the docs and say "hey, there is a disable too..."
Be careful what you wish for though: if you lie to Oracle, Oracle will lie right back. Notice that this will lead to incorrect results if you tell us to rely on a constraint and then violate it.
ops$tkyte%ORA10GR2> create table emp
2 as
3 select * from scott.emp;
Table created.
ops$tkyte%ORA10GR2> create table dept
2 as
3 select * from scott.dept;
Table created.
ops$tkyte%ORA10GR2> create materialized view emp_dept
2 build immediate
3 refresh on demand
4 enable query rewrite
5 as
6 select dept.deptno, dept.dname, count (*)
7 from emp, dept
8 where emp.deptno = dept.deptno
9 group by dept.deptno, dept.dname
10 /
Materialized view created.
ops$tkyte%ORA10GR2> alter table dept
2 add constraint dept_pk primary key(deptno)
3 RELY disable NOVALIDATE
4 /
Table altered.
ops$tkyte%ORA10GR2> alter table emp
2 add constraint emp_fk_dept
3 foreign key(deptno) references dept(deptno)
4 RELY disable NOVALIDATE
5 /
Table altered.
ops$tkyte%ORA10GR2> alter table emp modify deptno not null NOVALIDATE;
Table altered.
ops$tkyte%ORA10GR2> insert into emp (empno,deptno) values ( 1, 1 );
1 row created.
ops$tkyte%ORA10GR2> exec dbms_mview.refresh( 'EMP_DEPT' );
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace on explain
ops$tkyte%ORA10GR2> alter session set query_rewrite_integrity=enforced;
Session altered.
ops$tkyte%ORA10GR2> select count(*) from emp;
COUNT(*)
----------
15
Execution Plan
----------------------------------------------------------
Plan hash value: 2083865914
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| EMP | 15 | 4 (0)| 00:00:01 |
-------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement
ops$tkyte%ORA10GR2> alter session set query_rewrite_integrity=trusted;
Session altered.
ops$tkyte%ORA10GR2> select count(*) from emp;
COUNT(*)
----------
14
Execution Plan
----------------------------------------------------------
Plan hash value: 155013515
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | MAT_VIEW REWRITE ACCESS FULL| EMP_DEPT | 3 | 39 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement
ops$tkyte%ORA10GR2> set autotrace off
Disadvantages of having date column in unique index
Reader, August 04, 2008 - 4:46 pm UTC
I want to know what disadvantages we would have if we include a DATE column in a unique index. As an example, see below:
create table myOidTable(
id number(10),
oid raw(100),
created_dt date, --this column will never undergo updates
updated_dt date,
oid_desc varchar2(100),
...
...
...
)
;
--ID is primary key
alter table myOidTable add constraint myoidtable_pk primary key(id);
--at any point in time we should never have the same oid created at the same time. For that purpose I created a unique index so that I don't break the rule.
create unique index oid_crdt_idx on myoidtable(oid,created_dt);
August 04, 2008 - 8:35 pm UTC
so, it is OK for the same OID to be created one second later?
all you are saying is "in the same second, we cannot have the same OID".
a second is not "at the same time", but does it satisfy you?
Thanks for the reply
Reader, August 05, 2008 - 10:19 am UTC
Can we use the TIMESTAMP datatype instead of DATE? That should fix it, I believe. Please give me your thoughts - can there be any instance where this can cause problems?
One more point: the daemon which does inserts/updates/deletes to this table runs synchronously, so there can never be multiple inserts at the same instant. It processes one after another.
August 05, 2008 - 10:24 am UTC
this seems like a nonsense requirement.
why would the OID have to be unique at an instant in time
why wouldn't the OID have to be unique all by itself.
please explain the goal, the purpose, the reasoning here. It doesn't seem to make sense at all to me.
How to handle a foreign key constraint check efficiently
Roen, September 27, 2008 - 10:08 pm UTC
Oracle Version: Oracle 10g
Hi Tom,
I have a situation where I have to check every new record being inserted to make sure that the parent key is not violated; if it is, you don't insert that record, you log it, and move on to the next record in the loop.
Is it better to catch this scenario only via an exception, i.e. when you hit ORA-02291? That way I don't have to check this every time a record is inserted.
Thanks.
September 28, 2008 - 10:27 am UTC
... Is it better to catch this scenario only via an exception, i.e. when you hit ORA-02291? That way I don't have to check this every time a record is inserted.
...
not sure what you mean by that - it sounds like you are planning on doing slow by slow processing (I'll try to convince you in a second to abandon that), but you seem to be saying "should I do the check using a select statement and then insert OR should I just insert and catch the exception". If so - the answer is YOU SHOULD NEVER 'check' YOURSELF, YOU CANNOT DO THE CHECK CORRECTLY AND WE WILL HAVE TO DO IT AGAIN ANYWAY.
You cannot do the check since by the time you checked that the parent exists, the parent might no longer exist (we are a multi-user system after all).
So, you would just declare the foreign key relationship and catch errors if you are to catch errors.
That said, I don't think you want to write ANY CODE - you would just do a single insert with the log errors clause.
http://asktom.oracle.com/Misc/how-cool-is-this.html
http://asktom.oracle.com/Misc/how-cool-is-this-part-ii.html
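A minimal sketch of that single-statement approach - the PARENT/CHILD/STAGE table names here are hypothetical:
exec dbms_errlog.create_error_log( dml_table_name => 'CHILD' )  -- creates ERR$_CHILD

insert into child ( id, parent_id, payload )
select id, parent_id, payload
  from stage
 log errors into err$_child ('nightly load') reject limit unlimited;

Rows that violate the foreign key land in ERR$_CHILD (with the error message and your tag) instead of failing the whole statement.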
dml error logging and lob support
roen, September 29, 2008 - 2:55 pm UTC
Version: 10G
Tom,
Are you aware of any way to get around the issue of dml error logging not supporting LOB columns? I figured I could create my own error table and skip the LOB column,
but as I do my load, to catch the offenders, I get:
ERROR at line 1:
ORA-20203: ORA-06502: PL/SQL: numeric or value error: invalid LOB locator
specified: ORA-22275
ORA-06512: at "MY_SCHEMA.MY_LOAD", line 102
ORA-06512: at line 1
It does log the error in my error log table..but after that it stops the processing with the error above
September 29, 2008 - 3:08 pm UTC
Well, it "supports" the lob column - it just cannot log it (lobs are typically not inserted in the insert - the lob locator is, but the lob data comes AFTER the insert of the row is already done)
What are you doing to get that error - please give a full (small, tiny, short, but 100% complete) example of your approach and state what you are trying to do
Blob and DML Logging
roen, September 29, 2008 - 4:43 pm UTC
In the following test case (Oracle version 10.2.0.3) I create a primary key violation for a table that has a BLOB column.
I am inserting a BLOB file from an OS directory
(FYI, this test works OK if I insert the BLOB from an existing table):
--test case log
SQL> drop table dml_logging_blob;
drop table err_dml_logging_blob;
create table dml_logging_blob (col1 integer not null, theblob blob not null);
create table err_dml_logging_blob (
ORA_ERR_NUMBER$ NUMBER NULL,
ORA_ERR_MESG$ VARCHAR2(2000) NULL,
ORA_ERR_ROWID$ UROWID NULL,
ORA_ERR_OPTYP$ VARCHAR2(2) NULL,
ORA_ERR_TAG$ VARCHAR2(2000) NULL,
col1 varchar2(4000) null,
err_inst_dt date DEFAULT SYSDATE NOT NULL)
/
ALTER TABLE DML_LOGGING_BLOB
ADD CONSTRAINT PK_COL1
PRIMARY KEY (COL1)
USING INDEX TABLESPACE MY_SCHEMA_DATA
/
set serveroutput on
declare
v_bfile BFILE;
v_blob dml_logging_blob.theblob%type;
begin
insert into dml_logging_blob (col1, theblob)
values (1, EMPTY_BLOB() )
returning theblob into v_blob
log errors into err_dml_logging_blob ('My Load') reject limit unlimited;
v_bfile := BFILENAME ( 'MY_SCHEMA_FILE_DIR', 'blob_file.txt' ) ;
DBMS_LOB.fileopen ( v_bfile ) ;
DBMS_LOB.loadfromfile ( v_blob, v_bfile, DBMS_LOB.getlength( v_bfile )
) ;
DBMS_LOB.fileclose ( v_bfile ) ;
Table dropped.
SQL>
Table dropped.
SQL> SQL>
Table created.
SQL> SQL> 2 3 4 5 6 7 8 9
Table created.
SQL> SQL> 2 3 4 5
Table altered.
SQL> SQL> SQL> SQL> 2 3 4 5 6 7 8 9 end;
/ 10 11 12 13 14 15
PL/SQL procedure successfully completed.
SQL>
SQL> select col1, dbms_lob.getlength (theblob) length_of_blob from dml_logging_blob;
select count(*) from err_dml_logging_blob;
set lines 500
col ORA_ERR_MESG$ format a20
select col1, ERR_INST_DT, ORA_ERR_MESG$ from err_dml_logging_blob;
COL1 LENGTH_OF_BLOB
---------- --------------
1 51219
SQL> SQL>
COUNT(*)
----------
0
SQL> SQL> SQL> SQL>
no rows selected
SQL> declare
2 v_bfile BFILE;
3 v_blob dml_logging_blob.theblob%type;
4 begin
5 insert into dml_logging_blob (col1, theblob)
6 values (1, EMPTY_BLOB() )
7 returning theblob into v_blob
8 log errors into err_dml_logging_blob ('My Load') reject limit unlimited;
9 v_bfile := BFILENAME ( 'MY_SCHEMA_FILE_DIR', 'blob_file.txt' ) ;
10 DBMS_LOB.fileopen ( v_bfile ) ;
11 DBMS_LOB.loadfromfile ( v_blob, v_bfile, DBMS_LOB.getlength( v_bfile )
12 ) ;
13 DBMS_LOB.fileclose ( v_bfile ) ;
14 DBMS_OUTPUT.PUT_LINE('Number of records inserted: ' || TO_CHAR(SQL%ROWCOUNT));
15 end;
16 /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: invalid LOB locator specified: ORA-22275
ORA-06512: at "SYS.DBMS_LOB", line 637
ORA-06512: at line 11
SQL> select col1, dbms_lob.getlength (theblob) length_of_blob from dml_logging_blob;
COL1 LENGTH_OF_BLOB
---------- --------------
1 51219
SQL> select count(*) from err_dml_logging_blob;
COUNT(*)
----------
1
SQL> set lines 500
col ORA_ERR_MESG$ format a20
select col1, ERR_INST_DT, ORA_ERR_MESG$ from err_dml_logging_blob;SQL> SQL>
SQL> col col1 format a20
SQL> select col1, ERR_INST_DT, ORA_ERR_MESG$ from err_dml_logging_blob;
COL1 ERR_INST_ ORA_ERR_MESG$
-------------------- --------- --------------------
1 29-SEP-08 ORA-00001: unique co
nstraint (MY_SCHEMA.PK_C
OL1) violated
SQL>
September 29, 2008 - 5:15 pm UTC
umm, you see the error is on line 11 right..
line 11 is your
11 DBMS_LOB.loadfromfile ( v_blob, v_bfile, DBMS_LOB.getlength( v_bfile )
12 ) ;
loadfromfile call. You called loadfromfile using a v_blob value that has no value - the row wasn't inserted see - you sort of need to verify you actually inserted the row before you try to use the lob locator.
Nothing to do with dml error logging at all - everything to do with your logic flow - don't use un-initialized values - your insert cannot have returned anything, the insert didn't happen.
Not that I see much use in dml error logging with the VALUES clause and returning in this case. You would only be inserting at most one record there, so why bother with dml error logging - it is useful for SETS of data - not really singleton rows like this.
constraint
A reader, September 29, 2008 - 5:02 pm UTC
Tom;
How do you explain this.
I check for a primary key constraint on a table using the USER_CONSTRAINTS view and nothing shows up.
When I try to insert the same data into the table, I get a PKCOL constraint violation.
2. Do you use user_indexed_columns to find out the columns that make up the primary key?
September 29, 2008 - 5:17 pm UTC
smk - i'll explain it thusly:
you did something wrong, but you don't show your work here so we cannot tell you what you did wrong. If you show us your work.....
user_cons_columns would be the place to look for the columns that are the primary key columns.
user_indexed_columns just has the columns with indexes. You'd need to go to the cons tables to find out what the primary key is - user_indexes doesn't tell you that....
constraint
sam, September 29, 2008 - 5:47 pm UTC
here it is
1* select * from user_constraints where table_name='STAGE'
SQL> select * from user_constraints where table_name='STAGE'
2 /
OWNER CONSTRAINT_NAME C TABLE_NAME
------------------------------ ------------------------------ - ------------------
ADMIN SYS_C00145305 C STAGE
ADMIN SYS_C00145306 C STAGE
ADMIN SYS_C00145307 C STAGE
ADMIN SYS_C00145308 C STAGE
ADMIN FK_STAGE_REFERENCE_REF_QA_S R STAGE
ADMIN FK_STAGE_REFERENCE_REF_QA_R R STAGE
6 rows selected
1 insert into stage(bkno,med,con,rec_date,status_code)
2* values (66000,'BA','AB','01-JAN-2008','A')
SQL> /
1 row created.
SQL> /
insert into stage(bkno,med,con,rec_date,status_code)
*
ERROR at line 1:
ORA-00001: unique constraint (ADMIN.PKSTAGE) violated
.
September 29, 2008 - 7:01 pm UTC
a unique index throws the same error.
That is not a "primary key" violation, you have a manually created unique index.
ops$tkyte%ORA10GR2> create table t ( x int );
Table created.
ops$tkyte%ORA10GR2> create unique index t_idx on t(x);
Index created.
ops$tkyte%ORA10GR2> select * from user_constraints where table_name = 'T';
no rows selected
ops$tkyte%ORA10GR2> insert into t select rownum from all_users;
42 rows created.
ops$tkyte%ORA10GR2> insert into t select rownum from all_users;
insert into t select rownum from all_users
*
ERROR at line 1:
ORA-00001: unique constraint (OPS$TKYTE.T_IDX) violated
dml error logging and blob
roen, September 29, 2008 - 6:01 pm UTC
Thanks Tom..for pointing this out:
"loadfromfile call. You called loadfromfile using a v_blob value that has no value - the row wasn't inserted see - you sort of need to verify you actually inserted the row before you try to use the lob locator."
I added a sql%rowcount check..and that fixed it:
if SQL%ROWCOUNT > 0 then
v_bfile := BFILENAME ( 'PLBDT_FILE_DIR', 'blob_file.txt' ) ;
DBMS_LOB.fileopen ( v_bfile ) ;
DBMS_LOB.loadfromfile ( v_blob, v_bfile, DBMS_LOB.getlength( v_bfile )
) ;
DBMS_LOB.fileclose ( v_bfile ) ;
end if;
September 29, 2008 - 7:02 pm UTC
I still don't get why you are bothering with dml error logging, but anyway....
constraint
sam, September 29, 2008 - 9:19 pm UTC
Tom:
Would this manually created unique index serve as a primary key, or shall I create a primary key on those 6 columns for better query performance? This table is heavily queried. Or would it be better to keep this unique index and just add a one-column sequence-generated PK?
September 29, 2008 - 10:05 pm UTC
forget about performance.
You have a primary key
You have not defined the primary key
therefore, you haven't done it right yet. You define your primary keys - always.
whether this set of columns is in fact your primary key is something YOU have to figure out however, I cannot say if you have a true six column natural key or not.
search this site for
surrogate key natural key
we've talked about it a lot over time
constraint
A reader, September 30, 2008 - 11:50 am UTC
Tom:
This is a legacy system that was inherited. I did not do it.
We do not want to do stuff that screws up the app running on it. If I add a surrogate key, they have to check all the app code and retest - something management wants to avoid for the time being.
So I can change that unique index to a PK without any hassle if there is a benefit to it.
constraints
A User, October 26, 2009 - 12:00 pm UTC
Hi Tom, I am importing from one database to another and notice that some constraints which are not validated on the source are enabled and validated by the import. I am noticing this on primary key constraints. Could you please let me know if this is expected behaviour and what the reasoning behind it might be. Thanks for your help.
October 26, 2009 - 3:22 pm UTC
ops$tkyte%ORA9IR2> create table t ( x int, y int );
Table created.
ops$tkyte%ORA9IR2> insert into t values ( 1, 1 );
1 row created.
ops$tkyte%ORA9IR2> insert into t values ( 1, 1 );
1 row created.
ops$tkyte%ORA9IR2> commit;
Commit complete.
ops$tkyte%ORA9IR2> alter table t add constraint t_pk primary key(x) disable novalidate;
Table altered.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> !exp userid=/ owner=ops\$tkyte
Export: Release 9.2.0.8.0 - Production on Mon Oct 26 15:22:18 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$TKYTE's tables via Conventional Path ...
. . exporting table T 2 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select constraint_name, status, validated from user_constraints;
CONSTRAINT_NAME STATUS VALIDATED
------------------------------ -------- -------------
T_PK DISABLED NOT VALIDATED
ops$tkyte%ORA9IR2> drop table t;
Table dropped.
ops$tkyte%ORA9IR2> !imp userid=/ full=y
Import: Release 9.2.0.8.0 - Production on Mon Oct 26 15:22:19 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table "T" 2 rows imported
Import terminated successfully without warnings.
ops$tkyte%ORA9IR2> select constraint_name, status, validated from user_constraints;
CONSTRAINT_NAME STATUS VALIDATED
------------------------------ -------- -------------
T_PK DISABLED NOT VALIDATED
can you give me a for example?
Strange behaviour with constraints on LOBs
Michael, November 25, 2009 - 3:49 am UTC
Consider (at least in 10.2.0.3):
CREATE TABLE Test(
C CLOB
);
ALTER TABLE Test
ADD CONSTRAINT LENGTH_C_CH CHECK(LENGTH(C) <= 5000);
Table and constraint are created - looks fine so far.
Now:
declare
l_a clob;
l_v varchar2(1000) := lpad('X', 1000, 'X');
begin
insert into test(c) values(empty_clob()) returning c into l_a;
for l_i in 1 .. 100
loop
dbms_lob.append(l_a, l_v);
end loop;
end;
/
Works ... strange ...
select LENGTH(C) from test;
LENGTH(C)
--------------
100000
1 rows selected
My constraint seems to be ignored ... but:
update test
set c = c
Error report:
SQL Error: ORA-02290: check constraint (MIKE.LENGTH_C_CH) violated
02290. 00000 - "check constraint (%s.%s) violated"
*Cause: The values being inserted do not satisfy the named check
*Action: do not insert values that violate the constraint.
Looks like a bug (or did I miss something in the documentation)?
How to limit the size of a LOB
Michael, November 26, 2009 - 5:22 am UTC
Tom, thanks for the clarification.
So, what would you do, if you wanted to limit the size of a LOB?
November 28, 2009 - 12:57 pm UTC
You would provide an API to it - do not let the applications directly access the data (which is almost always the case).
Transactional API's rock and roll
(as opposed to table api's, which I am not a fan of)
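For example, a hedged sketch of such an API - the DOCS table and SET_DOC procedure are hypothetical; the point is that the size rule lives in exactly one place:
create or replace procedure set_doc( p_id in number, p_text in clob )
as
begin
   -- enforce the business rule before any data is touched
   if dbms_lob.getlength( p_text ) > 5000
   then
      raise_application_error( -20001, 'document may not exceed 5000 characters' );
   end if;
   update docs set c = p_text where id = p_id;
end;
/

Grant EXECUTE on SET_DOC rather than UPDATE on DOCS, and the limit cannot be bypassed by piecewise dbms_lob writes against a locator.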
constraints
A reader, December 24, 2009 - 11:23 am UTC
Dayne Olivier, February 16, 2010 - 3:51 am UTC
Hi Tom,
1) I refer back to your response in this thread on June 24, 2004 where you answered the question of checking that dates are larger than a specified date. In the response, you used a trigger to check that new records use dates greater than a pre-defined value. I noticed that you used an "after insert or update" trigger. Why did you use "after"?
2) After much reading, I can't seem to understand why we can't just use sysdate in a constraint. Why does Oracle not allow this?
Thanks Tom
February 16, 2010 - 5:16 pm UTC
1) because a before trigger can modify the value of a column - you want to validate, in general, AFTER the row has stabilized, after we know what the values will actually be.
2) think about it....
create table t (
....
x date check (x > sysdate ),
....
)
what does that mean exactly??? When I insert - we can verify that x > sysdate, but what if I insert sysdate+1/24 - then in one hour, we'll have data in the table that violates our constraint!!
It doesn't make sense.
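For contrast, a check against a fixed point in time is fine - it is deterministic, so a row can never drift in and out of compliance over time. A small sketch (names made up):
alter table t add constraint x_after_2010
  check ( x > to_date( '01-jan-2010', 'dd-mon-yyyy' ) );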
constraint
A reader, February 16, 2010 - 6:17 pm UTC
<<<You would provide an API to it - do not let the applications directly access the data (which is almost always the case).
Transactional API's rock and roll >>>
Does this mean always:
1) Separate the SQL statements from presentation Code Logic
2) Always store the SQL (API) in the database
Doesn't this sometimes make things harder to understand, as sometimes you have:
presentation code...
sql code.....
presentation code....
sql code...
etc.
We would need several transaction APIs and someone analyzing the code has to read all these calls to understand what the GUI is doing
February 17, 2010 - 8:45 am UTC
Does this mean always:
1) Separate the SQL statements from presentation Code Logic
2) Always store the SQL (API) in the database
Well, I like to say:
never say never,
never say always,
I always say
It is my opinion that the best programs have zero occurrences of select, insert, update, delete, merge, create, etc in them. They have only CALL and DECLARE/BEGIN/END blocks. The reasoning: most 3gl programmers haven't a clue how to spell database yet alone use one properly. Therefore, having a server side person generate the transactional code (in a stored procedure) and exposing that stored procedure to the clients - so the clients can do no real harm - is a practice I follow and promote heavily.
Does not this sometimes makes things harder to understand as sometimes you
have:
You'll have to give me a concrete example here, if you ask me, it is the most understandable thing in the world. In the olden days we used to call this "modular coding", you did things in bite sized chunks, in a logical breakdown of the code, a subroutine (you now call them "methods", how truly far we've come) would fit on a screen (so you could understand it) and spaghetti code that went on for pages and pages was laughed at (well, I always laughed at it and refused to help debug it until it was written to fit on a screen at a time, routine by routine).
...
We would need several transaction APIs and someone analyzing the code has to read all these calls to understand what the GUI is doing
and by having SQL in the client - intermixed with the UI - helps this HOW exactly????????????
consider:
do something on the screen....
get_emp_data(?,?,?);
do something with what we just got...
get_mgr_data(?....);
do something else.....
How is that more confusing???
If that is confusing then you better start programming in ASSEMBLER - because the API you are using to interface with your GUI is.... Well, a transactional API that lets you interface with your GUI. Doesn't it confuse you to use several of these transactional GUI API's - someone has to read them all to understand what the GUI is doing.....
Enabling and disabling constraints
Prasad, March 08, 2010 - 6:37 am UTC
Suppose I disable a constraint and add 1000 rows (bulk insert); later, when I try to enable the constraint, it gives me an error. How will I find the wrong entries in my database?
March 09, 2010 - 11:09 am UTC
you could use the exceptions into clause.
ops$tkyte%ORA10GR2> create table t ( x int constraint t_check check (x>0) );
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table t disable constraint t_check;
Table altered.
ops$tkyte%ORA10GR2> insert into t select rownum-5 from all_users;
66 rows created.
ops$tkyte%ORA10GR2> alter table t enable constraint t_check;
alter table t enable constraint t_check
*
ERROR at line 1:
ORA-02293: cannot validate (OPS$TKYTE.T_CHECK) - check constraint violated
ops$tkyte%ORA10GR2> @?/rdbms/admin/utlexcpt
ops$tkyte%ORA10GR2> rem
ops$tkyte%ORA10GR2> rem $Header: utlexcpt.sql,v 1.1 1992/10/20 11:57:02 GLUMPKIN Stab $
ops$tkyte%ORA10GR2> rem
ops$tkyte%ORA10GR2> Rem Copyright (c) 1991 by Oracle Corporation
ops$tkyte%ORA10GR2> Rem NAME
ops$tkyte%ORA10GR2> Rem except.sql - <one-line expansion of the name>
ops$tkyte%ORA10GR2> Rem DESCRIPTION
ops$tkyte%ORA10GR2> Rem <short description of component this file declares/defines>
ops$tkyte%ORA10GR2> Rem RETURNS
ops$tkyte%ORA10GR2> Rem
ops$tkyte%ORA10GR2> Rem NOTES
ops$tkyte%ORA10GR2> Rem <other useful comments, qualifications, etc.>
ops$tkyte%ORA10GR2> Rem MODIFIED (MM/DD/YY)
ops$tkyte%ORA10GR2> Rem glumpkin 10/20/92 - Renamed from EXCEPT.SQL
ops$tkyte%ORA10GR2> Rem epeeler 07/22/91 - add comma
ops$tkyte%ORA10GR2> Rem epeeler 04/30/91 - Creation
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table exceptions(row_id rowid,
2 owner varchar2(30),
3 table_name varchar2(30),
4 constraint varchar2(30));
Table created.
ops$tkyte%ORA10GR2> alter table t enable constraint t_check exceptions into exceptions;
alter table t enable constraint t_check exceptions into exceptions
*
ERROR at line 1:
ORA-02293: cannot validate (OPS$TKYTE.T_CHECK) - check constraint violated
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from t where rowid in (select row_id from exceptions);
X
----------
-4
-3
-2
-1
0
business rules
Prasad, March 08, 2010 - 6:45 am UTC
What are 5 business rules? Explain them.
March 09, 2010 - 11:15 am UTC
o Once you have their money, you never give it back.
o Never spend more for an acquisition than you have to.
o Never allow family to stand in the way of opportunity.
o Keep your ears open.
o Opportunity plus instinct equals profit.
oh wait, those are 5 rules of acquisition... I don't know what YOUR 5 business rules are - how about you tell us?
CHECK and TRUNC on DATE
Parthiban Nagarajan, March 10, 2010 - 2:50 am UTC
Hi Tom
You taught me that constraints are very important and they can help in generating better execution plans.
For the classic SCOTT.EMP, I created the following constraint.
alter table emp add (constraint emp_hiredate_ck check(hiredate = trunc(hiredate)));
How can this information be useful to the CBO? Can you give an example?
March 10, 2010 - 8:49 am UTC
I never said it would ALWAYS be helpful, I said we can use them.
integrity constraints are for
a) DATA INTEGRITY (which is pretty darn important)
b) PERFORMANCE (secondary, but good thing)
It doesn't have to be for both every single time.
If you have a rule that says "hiredate cannot have a time component", you need that check constraint regardless.
but...
ops$tkyte%ORA11GR2> create table t ( x date check( x=trunc(x,'d') ), y date );
Table created.
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select * from t where x = to_date( '01-jan-2009 10:11:12', 'dd-mon-yyyy hh24:mi:ss' );
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 18 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("X"=TO_DATE(' 2009-01-01 10:11:12', 'syyyy-mm-dd
hh24:mi:ss') AND TRUNC(INTERNAL_FUNCTION("X"),'fmd')=TO_DATE('
2009-01-01 10:11:12', 'syyyy-mm-dd hh24:mi:ss'))
Note
-----
- dynamic sampling used for this statement (level=2)
ops$tkyte%ORA11GR2> select * from t where y = to_date( '01-jan-2009 10:11:12', 'dd-mon-yyyy hh24:mi:ss' );
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 18 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("Y"=TO_DATE(' 2009-01-01 10:11:12', 'syyyy-mm-dd
hh24:mi:ss'))
Note
-----
- dynamic sampling used for this statement (level=2)
ops$tkyte%ORA11GR2> set autotrace off
Duplicate values on PK
A reader, September 15, 2010 - 7:59 am UTC
Could you explain why this could be done successfully?
Connected to Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
Connected as lm
SQL> CREATE TABLE t (x NUMBER);
Table created
SQL> CREATE INDEX idx_t_x ON t(x);
Index created
SQL> ALTER TABLE t add CONSTRAINT t_pk PRIMARY KEY (x);
Table altered
SQL> ALTER INDEX idx_t_x UNUSABLE;
Index altered
SQL> INSERT /*+ append*/ INTO t
2 SELECT 1 FROM dual UNION ALL SELECT 1 FROM dual;
2 rows inserted
SQL> ALTER INDEX idx_t_x REBUILD;
Index altered
SQL> select * from t;
X
----------
1
1
Why does Oracle allow duplicate primary key values? However, if the index is unique, the direct-path insert would fail...
Thanks in advance!
September 15, 2010 - 8:49 am UTC
I filed bug #10116295, the constraint should have to be disabled prior to the load. The bug is that the constraint is not forced to be disabled (as it used to be)
A reader, September 15, 2010 - 9:33 am UTC
Sorry, I can't test it on 10g or 9i... so is it a bug that relates only to 11g R1? Is #10116295 a non-public bug report? I can't find it on Metalink.
Thanks in advance!
September 15, 2010 - 9:42 am UTC
I tested it already, confirmed that 11g 'broke' it and it affects 11gr1 and r2. I file my bugs "not published", the developers publish them later. If you want to file an SR with support, you can reference that bug# and your SR will "take over" the bug.
A reader, September 15, 2010 - 9:52 am UTC
Thank you very much!
RE: CHECK and TRUNC on DATE
Parthiban Nagarajan, December 08, 2010 - 7:25 am UTC
Hi Tom
So, you have shown me Oracle using the INTERNAL_FUNCTION.
What is it all about?
Is there anything related to performance?
Where could I find more about it?
Please help.
Thanks In Advance.
December 08, 2010 - 10:38 am UTC
it is internal, it is not something published. It is just the way they do what you've asked them to do.
RE: INTERNAL_FUNCTION
Parthiban Nagarajan, December 09, 2010 - 1:40 am UTC
December 09, 2010 - 10:35 am UTC
I think that is only part of the story. I didn't use timestamps at all, so there were no implicit conversions going on.
It should be rather obvious that ANY implicit conversion is
a) evil
b) to be avoided
c) a reason indexes can be obviated and plans go horribly wrong
it isn't because of "internal_function()", it isn't the fault of "internal_function()", "internal_function()" is not the *cause* of the poor performance the person was writing about - it was a symptom.
The root cause of their poor performance was the ever evil implicit conversion.
So, I would argue that the poor little internal function was improperly blamed - it is not the CAUSE, it was a side effect of the implicit conversion and the implicit conversion was the only evil to be written about. In that post - internal function was a "signature, a sign - pointing to the underlying cause - the implicit conversion"
(and no, not all implicit conversions will show up as internal function either, so it is not *always* present during an implicit conversion - it might be. And further it does not IMPLY implicit conversion - it might, but then again it might not)
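For instance, a classic illustration of the evil - hypothetical table, but the behaviour is the standard implicit-conversion trap:
create table t2 ( col varchar2(10) );
create index t2_idx on t2(col);

-- where col = 123    -- implicitly becomes WHERE TO_NUMBER(col) = 123,
--                       obviating T2_IDX (and risking ORA-01722 on non-numeric rows)
-- where col = '123'  -- compares strings, can use T2_IDX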
Data Check - One to One Mapping
V.Hari, July 01, 2011 - 12:23 am UTC
Mr.Oracle,
I have to perform a data integrity check in the manner below.
Table and data are as below,
TBL_MAPPING
Initially the data is like this,
ID(PK) EMP_ID DEPT_ID LOCATION_ID STATUS
1 100 NULL NULL VALID
2 NULL D909 NULL VALID
3 NULL NULL LID_01 VALID
Later we do some business validation, and then we would like to map all 3 of the above in the fashion below.
TBL_MAPPING becomes as below,
ID(PK) EMP_ID DEPT_ID LOCATION_ID STATUS
1 100 NULL NULL INVALID
2 NULL D909 NULL INVALID
3 NULL NULL LID_01 INVALID
4 100 D909 LID_01 VALID
By the time we come up with record ID = 4, the business would have already done some transactions with IDs 1, 2, 3. So we will have to route all of 1,2,3 --> 4; therefore I have another table
TBL_ID_MATCH which is as below,
ID MAPPING_ID
1 4
2 4
3 4
4 4
Question :
After entering record ID = 4 in the TBL_MAPPING table like below
ID(PK) EMP_ID DEPT_ID LOCATION_ID STATUS
4 100 D909 LID_01 VALID
I should not allow any of these IDs (EMP_ID, DEPT_ID or LOCATION_ID) to be mapped to anything further, i.e.
I should not allow the data below:
ID(PK) EMP_ID DEPT_ID LOCATION_ID STATUS
5 100 D902 LID_01 VALID
6 101 D901 LID_03 VALID
7 101 D909 LID_05 VALID
because once a mapping is done between EMP_ID, DEPT_ID, LOCATION_ID and its status is made VALID, I should not allow any further mapping of that mapped data.
Is there a way to achieve this other than coding? I would like to know if it is feasible to control it at the constraint level.
I cannot use a composite PK.
I cannot use a trigger - it is not multi-user friendly due to read consistency.
About the check constraint option I am not sure.
Kindly let me know your view, guru.
Thanks for your time.
Regards
V.Hari
July 01, 2011 - 9:19 am UTC
so, none of this made sense.
I don't see how the second example represents a "mapping" at all - I don't understand how we transitioned from the initial data to the second bit of data.
I don't know how you know that 1,2,3->4, where did that logic come from.
none of this is clear at all.
Data Check - One to One Mapping
V.Hariharaputhran, July 01, 2011 - 9:51 am UTC
Mr.Oracle,
Trying to make it clear,
Step 1
Initially the data is like this,
ID(PK) EMP_ID DEPT_ID LOCATION_ID STATUS
1 100 NULL NULL VALID
2 NULL D909 NULL VALID
3 NULL NULL LID_01 VALID
Step 2:
The data entry person receives a manual paper confirmation from the customer to link EMP_ID=100, DEPT_ID=D909, LOCATION_ID=LID_01 together.
Step 3:
The business user searches for this combination and links them. The front-end Java sends CSV data with these mapped values, i.e. CSV = "100","D909","LID_01"
Step 4:
The DB should validate that they have not been mapped earlier to any other IDs, and then insert, hence it becomes
ID(PK) EMP_ID DEPT_ID LOCATION_ID STATUS
1 100 NULL NULL INVALID
2 NULL D909 NULL INVALID
3 NULL NULL LID_01 INVALID
4 100 D909 LID_01 VALID
Negative Scenario :
If by mistake the front-end team sends me to link
CSV = "100","D909","LID_100",
it has to be IGNORED, since we already mapped 100|D909|LID_01 (as above) and its status is now VALID.
Kindly let me know the options to accomplish this constraint-wise, or else let me know another way to achieve it.
Regards
V.Hari
July 01, 2011 - 11:46 am UTC
does step 3 also include sending the IDs of these 'mapped' records? If not, I don't see how you can reconcile step 2 with step 4.
You should be able to:
create unique index x on t (
case when status = 'VALID' then emp_id end,
case when status = 'VALID' then dept_id end,
case when status = 'VALID' then location_id end
);
shouldn't you? that will uniquely index ONLY the valid records - and would prevent two valid records with the identical key values from ever existing in a valid state.
DATA integrity
V.Hari, July 01, 2011 - 12:09 pm UTC
Mr. Oracle
Thanks for your time, response and interest.
At STEP 2:
Along with the CSV, yes, I would receive the IDs (1,2,3) indicating the merge - hence 1,2,3 --> 4. I missed indicating that.
Test Case :-
create table tbl_test(id number primary key , emp_id number, dept_id varchar2(10), location_id varchar2(10), status varchar2(10) )
create unique index x on tbl_test (
case when status = 'VALID' then emp_id end,
case when status = 'VALID' then dept_id end,
case when status = 'VALID' then location_id end
);
insert into tbl_test values (1,100,null,null,'VALID');
insert into tbl_test values (2,null,'D909',null,'VALID');
insert into tbl_test values (3,null,null,'LID_01','VALID');
insert into tbl_test values (4,100,'D909','LID_01','VALID');
update tbl_test set status = 'INVALID' WHERE ID IN (1,2,3);
commit;
--The below should not be allowed to be inserted--------
insert into tbl_test values (5,101,'D909','LID_01','VALID');
insert into tbl_test values (6,100,'D907','LID_01','VALID');
insert into tbl_test values (7,100,'D909','LID_05','VALID');
commit;
select * from tbl_test
D EMP_ID DEPT_ID LOCATION_ID STATUS
---------------------- ---------------------- ---------- ----------- ----------
1 100 INVALID
2 D909 INVALID
3 LID_01 INVALID
4 100 D909 LID_01 VALID
5 101 D909 LID_01 VALID
6 100 D907 LID_01 VALID
7 100 D909 LID_05 VALID
7 rows selected
I should not allow 5, 6, 7 to be inserted AFTER the combination of 4 is done. Is there a way to handle it? Yes, with code I can loop through, find the combination, and restrict the entry of 5, 6, 7. But table-design-wise/constraint-wise/index-wise, or in the worst case through a trigger, is there an option to stop it?
I appreciate your splendid service to the Oracle community.
Looks like I am wishing for more, i.e. for Oracle to perform the data integrity check at the intellectual level.
Regards
V.Hari
July 01, 2011 - 12:21 pm UTC
then use:
create unique index t_idx1 on t (case when status = 'VALID' then emp_id end );
create unique index t_idx2 on t (case when status = 'VALID' then dept_id end );
create unique index t_idx3 on t (case when status = 'VALID' then location_id end );
that would allow only one row to have any given EMP_ID value for valid records.
DATA INTEGRITY CHECK
V.Hari, July 01, 2011 - 12:32 pm UTC
Mr.Oracle,
As usual you got it right. Thanks for your time, response, and effort.
Just the order of the statements is changed: first update VALID to INVALID, then insert 4 with status = VALID.
Test Case:
drop table tbl_test;
create table tbl_test(id number primary key , emp_id number, dept_id varchar2(10), location_id varchar2(10), status varchar2(10) );
/*
create unique index x on tbl_test (
case when status = 'VALID' then emp_id end,
case when status = 'VALID' then dept_id end,
case when status = 'VALID' then location_id end
);
*/
create unique index t_idx1 on tbl_test (case when status = 'VALID' then emp_id end );
create unique index t_idx2 on tbl_test (case when status = 'VALID' then dept_id end );
create unique index t_idx3 on tbl_test (case when status = 'VALID' then location_id end );
insert into tbl_test values (1,100,null,null,'VALID');
insert into tbl_test values (2,null,'D909',null,'VALID');
insert into tbl_test values (3,null,null,'LID_01','VALID');
update tbl_test set status = 'INVALID' WHERE ID IN (1,2,3);
insert into tbl_test values (4,100,'D909','LID_01','VALID');
commit;
--The below should not be allowed --------
insert into tbl_test values (5,101,'D909','LID_01','VALID');
Exception Got while inserting 5 is as below,
SQL Error: ORA-00001: unique constraint (TIW_TEST.T_IDX2) violated
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
select * from tbl_test;
ID EMP_ID DEPT_ID LOCATION_ID STATUS
---------------------- ---------------------- ---------- ----------- ----------
1 100 INVALID
2 D909 INVALID
3 LID_01 INVALID
4 100 D909 LID_01 VALID
Thanks.
Regards
V.Hari
Unique Constraint on DW - Product docs
Rajeshwaran, Jeyabal, September 19, 2011 - 12:24 am UTC
Tom:
I was reading about the unique constraint from product documentation,
http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/constra.htm#i1006259
<quote>
However, there are trade-offs for the data warehouse administrator to consider with DISABLE VALIDATE constraints. Because this constraint is disabled, no DML statements that modify the unique column are permitted against the sales table. You can use one of two strategies for modifying this table in the presence of a constraint:
Use DDL to add data to this table (such as exchanging partitions). See the example in Chapter 16, "Maintaining the Data Warehouse".
Before modifying this table, drop the constraint. Then, make all necessary data modifications. Finally, re-create the disabled constraint. Re-creating the constraint is more efficient than re-creating an enabled constraint. However, this approach does not guarantee that data added to the sales table while the constraint has been dropped is unique.
</quote>
I am unable to use exchange partition on a table having a unique constraint in the DISABLE VALIDATE state. Is that again a documentation bug?
rajesh@ORA11GR2>
rajesh@ORA11GR2> create table t(
2 x , y )
3 partition by hash(x)
4 (
5 partition p1,
6 partition p2,
7 partition p3,
8 partition p4
9 )
10 as
11 select level,rpad('*',10,'*') as y
12 from dual
13 connect by level <=1000;
Table created.
Elapsed: 00:00:00.42
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t add
2 constraint t_pk unique(x)
3 disable validate;
Table altered.
Elapsed: 00:00:00.45
rajesh@ORA11GR2> create table t1(x number,y varchar2(10));
Table created.
Elapsed: 00:00:00.12
rajesh@ORA11GR2> insert into t1 values(10000,'*');
1 row created.
Elapsed: 00:00:00.09
rajesh@ORA11GR2> commit;
Commit complete.
Elapsed: 00:00:00.04
rajesh@ORA11GR2>
rajesh@ORA11GR2> alter table t exchange partition p1 with table t1;
alter table t exchange partition p1 with table t1
*
ERROR at line 1:
ORA-25132: UNIQUE constraint (RAJESH.T_PK) disabled and validated in ALTER TABLE EXCHANGE PARTITION
Elapsed: 00:00:00.09
rajesh@ORA11GR2>
September 19, 2011 - 5:53 pm UTC
you can use the comments section on the documentation to file such issues; the doc writers will get back to you on issues like this. Just log into OTN and file the issue right there and then...
constraints
sam, September 19, 2011 - 8:52 pm UTC
Tom:
Oracle 11g seems to introduce constraints on VIEWS.
I am kind of confused about how these would be enforced, since a view is always a virtual table and the base table is going to enforce the column constraints anyway.
Is this mostly used when views are used for DML (insert, update, delete) and not select? Why would I need a PK, FK, or UNIQUE constraint on a view? Would this not affect the SQL query running behind the view?
view constraint and optimizor
A reader, December 01, 2011 - 8:39 pm UTC
Hi tom,
per your comments, a view constraint has two main benefits:
1> for an app that has access only to the view, to give a good understanding of the underlying data model
2> for the optimizer to optimize queries
I am a little bit confused about 2>. Do you mean that with a 'view constraint' Oracle can do more optimization? And does the view constraint need a corresponding supporting constraint on the underlying/base table?
It might be easier to ask it this way:
1> I have table dept(id int, name varchar2(10))
2> I know 'name' will be unique
3> but considering big batch load performance, I did not create a unique constraint on it
4> I have index ind1 on dept(name)
5> for below view
create view v1 as select * from dept;
below query will be 'index range scan' obviously
select * from v1 where name='aa';
6> for below view
create view v2(id,name unique disable novalidate) as
select * from dept;
I intentionally add the unique view constraint, as I know its base table will be unique even without the constraint.
Is it possible for the query below to use an 'index unique scan'?
select * from v2 where name='aa';
I tested it, and it is not what I was thinking. Also I think this is not possible, so could you please elaborate more on the 2nd point?
Thanks very much.
December 06, 2011 - 10:50 am UTC
... do you mean with 'view constraint'
oracle can do more optimization? ...
yes, we use them in query rewrites
..... So does the view constraint need the
underlying/base table corresponding constraint supportive? ...
No, for example you might have a table:
create table projects( project_name, status, ..... );
and it could be that project_name is NOT unique in that table, however it is known that project_name is unique if status = 'ACTIVE'. You might even have an index:
create unique index i on projects( case when status = 'ACTIVE' then project_name end );
to ensure that - maybe not (I would hope you do if this is an OLTP SYSTEM)
given that project name is unique when status = 'ACTIVE', you could create a view that selects only active projects and add a primary key or unique constraint to it on project name....
.... below query is possible to have 'index unique scan'?
select * from v2 where name='aa'; ....
no it isn't. You started by saying "4> I have index ind1 on dept(name)" - that isn't a unique index, so you can only have an index range scan on it; only unique indexes support index unique scans as an access path.
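A sketch of the view-constraint idea from the projects example above - illustrative DDL only; note that constraints on views must be declared DISABLE NOVALIDATE:
create or replace view active_projects
as
select project_name, status from projects where status = 'ACTIVE';

alter view active_projects
  add constraint active_projects_uk unique (project_name)
  rely disable novalidate;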
A reader, December 22, 2011 - 10:19 am UTC
I have a few questions about constraints that I have been unable to find answers for:
1) Earlier in the thread, Jonathan Lewis talked about NOT NULL constraints being able to affect the optimizer. When I read his post, it sounds like defining a NOT NULL constraint one way will allow this, while doing it another way will not. So, I am not really sure I understand what he is saying. Which of the following will allow the optimizer to consider the column as NOT NULL and change the explain plan?
CREATE TABLE test (
val1 NUMBER NOT NULL,
val2 NUMBER,
val3 NUMBER,
CHECK (val2 IS NOT NULL),
CONSTRAINT TEST_VAL3_CK CHECK (val3 IS NOT NULL));
2) I created a type that closely resembles a table I wish to store the data in. The table has check constraints on the columns. Can I put those same constraints on the type? If so, what is the syntax for that? I have tried a few things and I just get compilation errors. I know that I can just write methods to set the data and put my logic in there or I could simply let the table handle the constraints when they try to write the object to the table, but I think it would be much more useful if the type itself threw an exception if you tried to violate the constraints of the value.
3) My third question is a bit more of an abstract question. This is not an issue that I am facing currently, so my example is a bit made up; I am just curious what your thoughts are in this scenario. Let’s say that you are building an order processing system that will take orders from the web and from a terminal. Both systems use a lot of the same entities: Orders, Items, Customers, Payments, etc. But, some of the business logic that can be easily enforced at the database level is different depending on which system it comes from. For example, an online transaction would require an address because the item has to be shipped somewhere while a terminal does not require an address, the customer is potentially taking the item with them from the terminal (it could even go a step further with the idea of physical items vs. virtual items causing the business rule to be in effect). How would you go about implementing that business rule? Would you push that one to the business tier? Would you create a “sub-entity” that would enforce the business rule (like an ORDER table with the common data and a WEB_ORDER table and a TERMINAL_ORDER table with the different business rules)? Would you create two entirely different sets of entities using views to let users see them as one (i.e. different business rules indicate the entities really aren’t the “same”)? Would you push the logic to a stored procedure? Or is there some other slick solution that I don’t know about yet (like maybe conditional check constraints; if column x = ‘WEB’ then column y IS NOT NULL)? Just curious because I have run into these types of situations before, but when they came up I was still in the “business logic belongs in the business tier” frame of mind, and I am quickly changing that approach based on the points you and others make here.
December 22, 2011 - 11:05 am UTC
1) the optimizer will recognize the "NOT NULL" constraint - but it won't see the explicitly verbose check (val3 is not null) as being the same
therefore, never use "check (val3 is not null)", always use "val3 datatype NOT NULL"
2) constraints are for tables, they do not apply to types. It is an implementation restriction.
3)
For example, an online transaction would require an address because the
item has to be shipped somewhere while a terminal does not require an address,
the customer is potentially taking the item with them from the terminal (it
could even go a step further with the idea of physical items vs. virtual items
causing the business rule to be in effect).
not true, there would be some attribute that signifies whether this is an online transaction or an in store. The business rule would be "address is mandatory for online, optional for in store" is all. The rule is at the aggregate level, not the field level in this case.
Would you push that one to the business tier?
probably not - it is a table level check:
check ( trans_type = 'IN STORE' or (trans_type = 'ONLINE' and address is not null ) )
Would you create a “sub-entity” that would enforce the business rule (like an
ORDER table with the common data and a WEB_ORDER table and a TERMINAL_ORDER
table with the different business rules)?
maybe, that is a possibility, it would depend on other requirements. Are they truly different enough to necessitate this - probably not, but maybe. It is an option.
In fact your entire list is possible (with "stored procedure" always being at the top of MY list :) )
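As a sketch, that aggregate-level rule in full table DDL (the ORDERS table and its columns are hypothetical):
create table orders
( order_id   number primary key,
  trans_type varchar2(10) not null
             check ( trans_type in ('IN STORE','ONLINE') ),
  address    varchar2(200),
  -- address is mandatory for online orders, optional in store
  constraint addr_required_for_online
    check ( trans_type = 'IN STORE'
         or (trans_type = 'ONLINE' and address is not null) )
);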
setting not null constraint on a large table
Michal Pravda, October 17, 2014 - 6:52 am UTC
Good morning,
I've got a backup table with about a billion (10^9) rows and a total segment size of 150GB. It is range partitioned monthly by create_date. It has been filled monthly with insert-select from the "live" table, taking records 3 months old.
I want to change the filling method from insert-select to partition exchange to speed it up. The problem is that the partition exchange can't be done now, because the backup table lacks 6 NOT NULL constraints when compared to the live table. Apart from that, they are identical. I tried to set them with alter table t modify (col1 not null, ..., col6 not null), but it didn't finish in the 8-hour window I have.
The only other option I can think of is to create a new table with the constraints and fill it with 150GB worth of records (hopefully with partition-wise parallel execution - but still a lot of work, not to mention the extra space requirements) and then swap the tables.
Is there any better solution? Why does the straightforward way take so long? If I assume 50MB/s (the speed at which it selects the table), then 150GB should be read and checked for records possibly violating the constraints in about an hour.
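(One approach that might fit the window - a hedged sketch, assuming the backup data is already known to be clean; BACKUP_T and the constraint name are placeholders. Add each constraint ENABLE NOVALIDATE, which is near-instantaneous because existing rows are not scanned, then validate in a separate step, which reads the data without blocking concurrent DML:
alter table backup_t modify
( col1 constraint backup_t_col1_nn not null enable novalidate );

alter table backup_t modify constraint backup_t_col1_nn validate;

For the partition exchange to succeed, the constraint definitions and states on both tables must still match, so compare them in DBA_CONSTRAINTS afterwards.)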
multi-table check constraint: trigger or FB index?
jess, October 02, 2015 - 11:24 am UTC
Hi Tom,
We have a table of companies, a table of users, and a table of user_types. Each 'user' row has a company_id and also a user_type_id. Each 'company' row has a 'contact_user_id' column (containing user_id of the person who is their main contact). That user must be of a specific user_type.
Since I can't use a check constraint spanning multiple tables (at least I don't think I can), I think my options are to have a trigger on the table or a function-based index doing the corresponding check. Do you have a preference between the two (or a better idea of how to do this)? What do you recommend in this case?
Many thanks, as always.
October 03, 2015 - 2:10 am UTC
Take a look at this magazine article
http://www.oracle.com/technetwork/testcontent/o58asktom-101055.html
Head to the bottom and see the example where an on-commit materialized view is used to enable validation of this sort.
Triggers for this thing *can* work, but they are often harder than first glance due to locking and concurrency concerns.
Hope this helps.
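A rough sketch of that technique against the tables from the question (all names assumed; the required user_type_id value of 1 is hypothetical):
create materialized view log on companies with rowid;
create materialized view log on users with rowid;

create materialized view company_contact_mv
  refresh fast on commit
as
select c.rowid c_rid, u.rowid u_rid, u.user_type_id
  from companies c, users u
 where c.contact_user_id = u.user_id;

alter table company_contact_mv
  add constraint contact_must_be_right_type
  check ( user_type_id = 1 );

A transaction that leaves a company pointed at a user of the wrong type then fails at COMMIT, when the refresh fires the check constraint.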
multi-Table insert between parent and child Table
Rajeshwaran, Jeyabal, July 15, 2016 - 8:36 am UTC
Team,
I was doing a multi-table insert between a parent and a child table, but it looks like the constraint is not validated at the end of the statement - it seems to be validated at each row.
Could you help us to understand this?
demo@ORA11G> create table t as select * from all_objects;
Table created.
demo@ORA11G> create table claim(claim_id int constraint claims_pk primary key,
2 claim_name varchar2(30));
Table created.
demo@ORA11G> create table claim_line_item(claim_id int, line_id int,
2 seq_id int constraint claim_line_pk primary key,
3 constraint claim_line_fk foreign key(claim_id) references claim );
Table created.
demo@ORA11G> insert all
2 when ( running_line_seq = 1) then
3 into claim(claim_id,claim_name) values(claim_id,owner)
4 when ( running_line_seq > 0) then
5 into claim_line_item(claim_id,line_id,seq_id)
6 values(claim_id,line_id,s.nextval )
7 select owner,object_name,object_id,
8 dense_rank() over( order by owner ) as claim_id,
9 dense_rank() over( partition by owner order by object_name ) as line_id,
10 row_number() over( partition by owner order by object_name,object_id ) as running_line_seq
11 from t ;
insert all
*
ERROR at line 1:
ORA-02291: integrity constraint (DEMO.CLAIM_LINE_FK) violated - parent key not found
demo@ORA11G> select count( case when running_line_seq=1 then owner end ) cnt1,
2 count( distinct case when running_line_seq >0 then owner end ) cnt2
3 from (
4 select owner,object_name,object_id,
5 dense_rank() over( order by owner ) as claim_id,
6 dense_rank() over( partition by owner order by object_name ) as line_id,
7 row_number() over( partition by owner order by object_name,object_id ) as running_line_seq
8 from t
9 order by claim_id,running_line_seq
10 ) ;
CNT1 CNT2
---------- ----------
27 27
1 row selected.
demo@ORA11G>
July 15, 2016 - 9:00 am UTC
The order in which operations are done in a multi-table insert is *indeterminate*, so you can't rely on something like:
insert all
when "parent" then
when "child" then
to do things parent-then-child. We *might* do all children first, we *might* do all parents first, we might mix and match etc etc.
A deferrable constraint should do the trick.
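Applied to the test case above, that might look like this (a sketch; the foreign key is then checked at commit time rather than per row):

alter table claim_line_item drop constraint claim_line_fk;

alter table claim_line_item add constraint claim_line_fk
foreign key (claim_id) references claim
deferrable initially deferred;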
also on complex check constraints
Jess, July 10, 2017 - 10:28 am UTC
Hi guys,
Say you have a table of convicts with the usual 'person' details, where first/last name represent the current name. Each convict can have a number of alternate names, which we store separately. At least one of those names must be the name on the current conviction. For business purposes, we only ever select all names as a 1-column list. (I've kept first/last name as 1 column for simplicity here.)
This can be implemented one of 2 ways:
1.
- table convict: id, first_last_name, etc.
- table convict_known_names: id, fk_convict_id, other_name, is_current_conviction_name, etc.
2.
- table convict: id, first_last_name, current_conviction_name, etc.
- table convict_known_names: id, fk_convict_id, other_name
Exactly 1 name for each convict must be the current conviction name.
Implementing things as (2) removes the need to check that with a constraint: it'd just be a not-null column. However, to provide the list of names, the query would need a pivot, which seems less than ideal.
Implementing things as (1) is cleaner and more streamlined, but we would need to enforce that, for each 'fk_convict_id' in 'convict_known_names', there is precisely 1 record with 'is_current_conviction_name' = 'Y'.
Is there a simple way to do this? If so, could you help?
As an aside, from a design perspective, is your preference (1) or (2), especially in terms of the performance of pivot vs. check constraint?
July 11, 2017 - 2:59 pm UTC
Personally I lean towards solution 2, where the current name is a column in convicts. This is because it's much easier to enforce (just a not null constraint). It's much harder to do this with option 1.
It's also easier to change the current conviction name. You just update the row. Whereas with option 1 you need to update the old name and insert/update the new one.
What is it that concerns you about pivot?
If it's the hassle of writing the query, create a view that does this and unions with the other names. Then use the view where needed.
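For example, with option 2 such a view might look like this (a sketch against the hypothetical column names above):

create or replace view convict_all_names as
select id                      as convict_id,
       current_conviction_name as name,
       'Y'                     as is_current
from   convict
union all
select fk_convict_id,
       other_name,
       'N'
from   convict_known_names;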
If it's performance, then there's only one answer:
Benchmark and test! ;)
If you want to go down the route of an is_current flag on the list of names, you can guarantee exactly one is_current child by adapting the solutions at:
https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:42170695313022 (there's an MV based solution way down the thread, look for "union-all master+detail, than aggregate and check").
PS - this would be better as a new question!
more info on the last comment about complex check constraints...
Jess, July 10, 2017 - 7:31 pm UTC
More info on the last comment/question...
>> but we would need to enforce that, for each 'fk_convict_id' in 'convict_known_names', there is precisely 1 record with 'is_current_conviction_name' = 'Y'.
The best I could think of was function-based index:
create unique index fx_known_names_current on convict_known_names
(case when is_current_conviction_name = 'Y' then fk_convict_id else null end);
But the table is obviously searched by fk_convict_id, which begs for a non-unique index on just that column, which seems a bit redundant... Am I going about it wrong?
July 11, 2017 - 3:02 pm UTC
I'm not sure what you mean. You need to include fk_convict_id in your function-based unique index. Otherwise you can only have one is_current row in the whole table!
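To see the difference, compare the two variants (a sketch; the index names are hypothetical):

-- as posted above: at most one 'Y' row *per convict*;
-- 'N' rows map to NULL and are not indexed at all
create unique index fx_one_current_per_convict on convict_known_names
(case when is_current_conviction_name = 'Y' then fk_convict_id end);

-- without fk_convict_id in the expression: every 'Y' row maps to
-- the same key, so only one current name is allowed in the whole table
create unique index fx_one_current_in_table on convict_known_names
(case when is_current_conviction_name = 'Y' then 'Y' end);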