Question and Answer

Tom Kyte

Thanks for the question, Jingye.

Asked: May 02, 2000 - 1:21 pm UTC

Last updated: July 16, 2013 - 3:38 pm UTC

Version:

Viewed 100K+ times!

You Asked

This may seem like a simple question: update Column a1
in Table A with all data in Column b1 in Table B. But I am
stuck on how to achieve it without using a cursor.
There is one column in each table, called id, that links them.
Using a cursor works, but it is intolerably slow
when operating on a large table. Even the idea of creating a temporary
table holding only the primary key and column b1, and then applying the
cursor to it, is slow.
I am thinking of a way to do it without a cursor; the script is below. But
the second WHERE clause simply returns the message `more than one
row is returned', since the id is unpredictable and this creates a
`many to many' relationship between the tables.
For your kind advice.
Many thanks,

(script)
REM* the where-clause of the update cannot work

UPDATE table_b SET
   column_b1 = ( SELECT MAX(column_a1)
                   FROM table_a a, table_b b
                  WHERE a.id = b.id
                  GROUP BY a.id )
 WHERE table_b.id IN ( SELECT MIN(id)
                         FROM table_a
                         GROUP BY id );





and Tom said...


Your example is somewhat confusing -- you ask to "update column a1 in table A with all data in column b1 in table B", but your update shows you updating column b1 in table B with some data from table A. Additionally -- given the way the WHERE and SET clauses are CODED in the above -- it would succeed. The query would in fact execute without any error messages, since the correlated subquery in the SET clause returns EXACTLY one row for each row in B and the WHERE clause executes without error.

Anyway. Let's say you have a table A ( id int, a1 varchar2(25) ) and a table B ( id int PRIMARY KEY, b1 varchar2(25) ). I put a primary key on B(id) because if there is NO SUCH constraint -- the update is totally ambiguous -- if there is more than 1 value of B1 in B for a given ID -- we cannot tell which row in B should be used to update the row in A.

Given that, we can:

tkyte@ORACLE> create table a ( id int, a1 varchar2(25) );
Table created.

tkyte@ORACLE> create table b ( id int primary key, b1 varchar2(25) );
Table created.

tkyte@ORACLE> insert into a values ( 1, 'Hello' );
tkyte@ORACLE> insert into a values ( 2, 'xxxxx' );
tkyte@ORACLE> insert into b values ( 2, 'World' );


tkyte@ORACLE> update ( select a1, b1 from a, b where a.id = b.id )
2 set a1 = b1
3 /

1 row updated.

tkyte@ORACLE> select * from a
2 /

        ID A1
---------- -------------------------
         1 Hello
         2 World


The following updates are equivalent as well -- they do the same thing, just differently. It is best to use the above update of a JOIN if possible (the optimizer has the best chance of getting a good plan). It does need the primary key constraint on B though; if you do not have one BUT b.id is unique, you can:

tkyte@ORACLE> update a
2 set a1 = ( select b1 from b where b.id = a.id )
3 where EXISTS ( select b1 from b where b.id = a.id )
4 /

1 row updated.

tkyte@ORACLE> update a
2 set a1 = ( select b1 from b where b.id = a.id )
3 where a.id in ( select id from b )
4 /

1 row updated.
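
On 9i and later the same one-way copy can also be expressed as a MERGE. A minimal sketch against the tables created above -- note that 9i requires both branches, so the WHEN NOT MATCHED part will also insert any B rows that have no matching A row, which may or may not be what you want:

merge into a
using b
on ( a.id = b.id )
when matched then
    update set a.a1 = b.b1
when not matched then
    insert ( id, a1 ) values ( b.id, b.b1 );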



Comments

A reader, October 23, 2001 - 10:16 am UTC

Migration Workbench doc didn't have great examples of converting update joins into Oracle syntax and this article had just that! thanks.

Updating joined tables

Robert Massey, March 20, 2002 - 12:58 pm UTC

I can think of another instance (at least with v8.1.5) where the second example (updating the table via a subquery) works better than the first example (updating the joined tables).

If the user has update permission on table A, but only has select permission on table B, they cannot update via the first example. Oracle will return ORA-01031 (insufficient privileges).

Updates are slow

HB, November 05, 2002 - 1:18 pm UTC

Hi Tom,
I have a related question about an UPDATE that takes an unexpectedly long time.
I am using table T1 to insert or update the data in table T2 (which has a unique key on columns c1 + c2) in Oracle 8.1.7.
The PL/SQL block first tries to insert; if an exception occurs with SQLCODE = -1, it knows the row needs an update instead. Performance degrades when T1 has all rows for update.
I would appreciate your help.
Thanks

declare
    exec_code    varchar2(200);
    exec_message varchar2(2000);
    cursor cur is select * from t1;
begin
    for x in cur loop
        begin
            insert into t2 ( c1, c2, c3, c4, c5 )
            values ( x.c1, x.c2, x.c3, x.c4, x.c5 );
        exception
            when others then
                if sqlcode = -1 then
                    begin
                        update t2
                           set c3 = x.c3,
                               c4 = x.c4,
                               c5 = x.c5
                         where c1 = x.c1
                           and c2 = x.c2;
                    exception
                        when others then
                            exec_code    := sqlcode;
                            exec_message := sqlerrm;
                            insert into t3_exception ( error_code, error_message )
                            values ( exec_code, exec_message );
                            commit;
                    end;
                else
                    exec_code    := sqlcode;
                    exec_message := sqlerrm;
                    insert into t3_exception ( error_code, error_message )
                    values ( exec_code, exec_message );
                    commit;
                end if;
        end;
    end loop;
    -- rollback;
end;

Tom Kyte
November 05, 2002 - 3:01 pm UTC

Danger Will Robinson! Danger!!   (anyone remember "lost in space?")

committing in for loop == terrible, horrible, really truly BAD idea.  Especially since you only commit ON ERROR!!!!  Man, do you break transactional integrity or what.

Either:

o move the error log into an autonomous transaction
o just LOSE the commit altogether (without changing the outcome of your program at all!!)

Consider what happens when.....

o you hit an error on the update
o you log it and commit
o you subsequently (later) hit an ora-1555 on the SELECT and bump out of the loop


whoops -- you processed 1/2 of the table.

Anyway -- my suggestion -- lose the procedural code.  Totally -- just update and insert, and if you have 9i, just MERGE.  <b>make sure to use the CBO or you'll have to use HINTS as the RBO isn't smart enough to do the NOT IN really well</b>.  Consider (I'll be updating most all of the rows, well, all of them actually):

ops$tkyte@ORA920.US.ORACLE.COM> create table t1 as select object_id, object_name from all_objects;

Table created.

ops$tkyte@ORA920.US.ORACLE.COM> alter table t1 add constraint t1_pk primary key(object_id);

Table altered.

ops$tkyte@ORA920.US.ORACLE.COM> create table t2 as select * from t1;

Table created.

ops$tkyte@ORA920.US.ORACLE.COM> alter table t2 add constraint t2_pk primary key(object_id);

Table altered.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> analyze table t1 compute statistics
  2  for table
  3  for all indexes
  4  for all indexed columns;

Table analyzed.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> analyze table t2 compute statistics
  2  for table
  3  for all indexes
  4  for all indexed columns;

Table analyzed.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> set timing on
ops$tkyte@ORA920.US.ORACLE.COM> declare
  2          l_icnt number := 0;
  3          l_ucnt number := 0;
  4  begin
  5      for x in ( select * from t1 )
  6      loop
  7          begin
  8              insert into t2 ( object_id, object_name ) values ( x.object_id, x.object_name );
  9                          l_icnt := l_icnt+1;
 10          exception
 11              when dup_val_on_index then
 12                  update t2 set object_name = x.object_name where object_id = x.object_id;
 13                                  l_ucnt := l_ucnt+1;
 14          end;
 15      end loop;
 16          dbms_output.put_line( 'Inserted ' || l_icnt );
 17          dbms_output.put_line( 'Updated  ' || l_ucnt );
 18  end;
 19  /
Inserted 0
Updated  29317

PL/SQL procedure successfully completed.
<b>
Elapsed: 00:01:04.07</b>
ops$tkyte@ORA920.US.ORACLE.COM> rollback;

Rollback complete.

Elapsed: 00:00:00.86
ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> begin
  2      update ( select t1.object_name t1_oname, t2.object_name t2_oname
  3                 from t1, t2
  4                where t1.object_id = t2.object_id )
  5         set t1_oname = t2_oname;
  6
  7          dbms_output.put_line( 'Updated  ' || sql%rowcount );
  8
  9         insert into t1
 10      select * from t2 where t2.object_id not in ( select object_id from t1 );
 11
 12          dbms_output.put_line( 'Inserted ' || sql%rowcount );
 13  end;
 14  /
Updated  29317
Inserted 0

PL/SQL procedure successfully completed.
<b>
Elapsed: 00:00:02.44</b>
ops$tkyte@ORA920.US.ORACLE.COM> rollback;

Rollback complete.

Elapsed: 00:00:01.07
ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> merge into t1
  2  using t2
  3  on ( t2.object_id = t1.object_id )
  4  when matched then
  5     update set t1.object_name = t2.object_name
  6  when not matched then
  7      insert (object_id, object_name) values( t2.object_id, t2.object_name);

29317 rows merged.

Elapsed: 00:00:02.87
ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> rollback;

Rollback complete.

Elapsed: 00:00:01.08




 

Oracle 9i Merge

A reader, November 06, 2002 - 4:02 pm UTC

Hi Tom,
Your help is always the best.
My last PL/SQL code was just a test, as I was interested more in the performance during the update, so I missed the commit point.

I tried the MERGE command and it is 10-fold faster; however, I have a few questions.

You said to use the Oracle9i MERGE statement when inserting or updating data from a staging table into a history table.
In my case table T1 (staging table with 300 columns) will have 200 thousand rows every day for insert or update into the
history table T2 (280 columns) that has 10 million historical rows.
Since I am inserting a massive amount of data, using MERGE --
1.) How can I commit every 1000 rows?
2.) If a row fails during insert or update, can I trap that row and put it into another table T3?
3.) How can I find out how many rows are inserted versus updated?
4.) Is MERGE the best approach in the above scenario?

Thanks


Tom Kyte
November 06, 2002 - 4:54 pm UTC

1) you don't want to -- incremental commits are a baaaaaddddd, terrible practice. Size your system for your workload. (answer = you cannot, the statement is transactional. I could develop procedural workarounds but we'd be right back at slow=true)

2) now you are back to procedural row by row processing. suggestion: put the constraints on the staging table -- SAME constraints, find them there, weed them out there (see the sketch below).

3) you don't -- you could do the two step (update existing, insert new)

4) IMO -- if you have requirements that permit it -- yes.
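
For point 2, one way to "find them there, weed them out there" is to validate the staging table in bulk with the same constraints and an EXCEPTIONS table (created by utlexcpt.sql). A sketch, assuming the history table's key is (c1, c2) and T3 is a fault table shaped like T1 -- the names are only illustrative:

-- the ALTER fails with ORA-02437 if duplicates exist, but it fills EXCEPTIONS
-- with the rowids of every offending staging row
alter table t1 add constraint t1_stage_pk primary key ( c1, c2 )
    exceptions into exceptions;

-- copy the offending rows to the fault table, purge them from staging,
-- then the set-based update/insert (or MERGE) runs clean
insert into t3 select t1.* from t1 where rowid in ( select row_id from exceptions );
delete from t1 where rowid in ( select row_id from exceptions );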

Re: Updating a table from another table

CJ, November 07, 2002 - 7:54 am UTC

I'm sure this is a stupid question, but if we must process the table we are updating/inserting from in timestamp order (i.e. table b has a timestamp column), this must be done with procedural code?

Tom Kyte
November 07, 2002 - 10:28 am UTC

give me a concrete example -- having a super hard time trying to figure out why the order of update matters to you at all.

Re: Updating a table from another table

CJ, November 07, 2002 - 11:57 am UTC

Updates/inserts/deletes come from another system into a staging instance. The staging instance tables look exactly like the main tables except that a timestamp and an update_type column are added. The records must be processed in order so that, for instance, if a record is updated, deleted, inserted, then updated again (not likely, but it *could* happen), those operations happen in the correct order.

Note - I didn't design this system, but I have to work with it.

Tom Kyte
November 07, 2002 - 12:38 pm UTC

Oh, well -- then you cannot do it in a single update anyway -- as the table being updated would NOT be key preserved (and hence the result of the update would be very ambiguous).

If the table containing the changes can have MORE than one occurrence of the "primary key" of the other table -- no chance for a single statement. If you tried:

table t1( x int primary key, y int );
table t2( x int, y int );

insert into t1 values ( 1, 0 );
insert into t2 values ( 1, 100 );
insert into t2 values ( 1, 200 );

then

update ( select t1.y t1_y, t2.y t2_y from t1, t2 where t1.x = t2.x )
set t1_y = t2_y

would be "ambiguous" -- no way we could know if y would end up with 100 or 200 -- hence we don't even permit it.

So, you are forced to do it procedurally.
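
A minimal sketch of that procedural approach, assuming the staging table T2 carries the key, the new value, the timestamp and the update_type (the column names here are illustrative only):

begin
    for r in ( select x, y, ts, update_type
                 from t2
                order by ts )
    loop
        if r.update_type = 'I' then
            insert into t1 ( x, y ) values ( r.x, r.y );
        elsif r.update_type = 'U' then
            update t1 set y = r.y where x = r.x;
        elsif r.update_type = 'D' then
            delete from t1 where x = r.x;
        end if;
    end loop;
end;
/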

A reader, November 08, 2002 - 5:24 pm UTC

Hi Tom,
As you said, in my scenario I cannot use UPSERT (MERGE) and I have to write PL/SQL to achieve --

1.) Insert/update from the temporary table to the actual table.
(Note - as I am loading data into the temporary table from a legacy system, I want this load to be as fast as possible, so I am not putting any constraints on the temp table and am handling most of the data errors inside Oracle. All fields in the temporary table are varchar2.) Both the temp and the target table have 300-plus columns.

2.) Log all rows that failed during insert/update to a fault table.
3.) Find out the number of rows inserted versus updated.


I am using PL/SQL to handle this; the inserts are fine but the updates are very slow. I think there is a better way to write this. The test PL/SQL is as follows --

declare
todate date;
current_mode char(10) :='load';
exec_code varchar2(200);
exec_message varchar2(2000);
this_total number :=0; this_succ number:=0;
this_update number:=0;
this_fail number:=0;
this_insert number:=0;
this_start char(10) ; this_end char(10);
this_table_name varchar2(10) :='test';
cursor c is select * from test_temp ;
commit_point number :=0;
test_case number := null;
fault_id varchar2(30);
begin
select to_char(sysdate, 'hh24:mm:ss') into this_start from dual;
delete from test_load_result where table_name =upper(trim(this_table_name));
commit;
for xc1 in c loop
this_total:=this_total+1;
commit_point:=commit_point+1;
if commit_point > 10000 then
commit;
commit_point:=0;
end if;
declare
begin
test_case := xc1.test_jdate;
insert /*+ append */ into test( test_id, test_code ,test_case ,test_month ,test_jdate ,
sequence_no , check_status
)
values (
trim(xc1.test_id),
trim(xc1.test_code),
to_date( to_char(to_date('12-31-1900','mm-dd-yyyy') ,'j') + abs(test_case),'j'), upper(to_char((to_date( to_char(to_date('12-31-1900','mm-dd-yyyy') ,'j') + abs(test_case),'j')),'mon')),
trim(xc1.test_jdate),
trim(xc1.sequence_no),
trim(xc1.check_status)
);
this_succ:=this_succ + 1;
exception when others then
if sqlcode = -1 then
declare
begin
update test set
check_status = trim(xc1.check_status)
where
test_id = trim(xc1.test_id) and
test_code=trim(xc1.test_code) and
test_jdate = trim(xc1.test_jdate) and
sequence_no =trim(xc1.sequence_no) ;

this_update := this_update + 1;
exception when others then
exec_code := sqlcode;
exec_message := sqlerrm;
select sysdate into todate from dual;
if (this_update >0) then
this_update := this_update - 1;
end if;
this_fail:=this_fail + 1;
fault_id:= trim(xc1.rms_index);
insert into test_fault
select (select sysdate from dual), st.* from test_temp st where
trim(rms_index)=trim(fault_id);
insert into test_exception(
exec_date, exec_mode, table_name,rms_index, exec_code,
exec_message) values
(todate,current_mode,this_table_name,fault_id,exec_code,exec_message);
end;
else
exec_code := sqlcode;
exec_message := sqlerrm;
select sysdate into todate from dual;
this_fail:=this_fail + 1;
fault_id:= trim(xc1.rms_index);
insert into test_fault
select (select sysdate from dual), st.* from test_temp st where
trim(rms_index)=trim(fault_id);
insert into test_exception(
exec_date, exec_mode, table_name,rms_index, exec_code,
exec_message) values
(todate,current_mode,this_table_name,fault_id,exec_code,exec_message);
end if;
end;
end loop;
this_succ := this_succ+this_update;
select decode(sign(this_succ -(this_update+this_fail)),1,this_succ -(this_update+this_fail),0) into this_insert from dual;
select to_char(sysdate, 'hh24:mi:ss') into this_end from dual;
test_results(todate,this_start, this_end, this_table_name, this_total, this_succ,this_insert, this_update, this_fail);
commit;
end;
/



Tom Kyte
November 08, 2002 - 5:36 pm UTC

you do realize that the /*+ append */ hint is just a waste of keystrokes there, right? It isn't doing anything beyond confusing the reader of your code....

why do you believe the update is slow -- what led you to that particular conclusion?

A reader, November 09, 2002 - 2:10 pm UTC

Can we not use /*+append*/ inside a pl/sql block?

Tom Kyte
November 09, 2002 - 3:06 pm UTC

sure you can.

You just cannot use it with VALUES. It only works with INSERT as SELECT.
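
In other words, the hint is only honored on the set-based form. A quick sketch (table names made up):

-- direct path: APPEND is honored with INSERT ... SELECT
insert /*+ append */ into target_t
select * from staging_t;

-- conventional path: with the VALUES form the hint is silently ignored
insert /*+ append */ into target_t values ( 1, 'x' );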

Similar sub-queries in same parent query

Praveen, August 07, 2003 - 2:27 pm UTC

Hi Tom,

I have a huge table similar to the following:

eno   ename   dno   sal   mgr
-----------------------------
101   A       1     100
102   B       1     200
103   C       1     300
104   D       2     100
105   E       2     200
-----------------------------

Here I want to update the 'mgr' column with 'eno' value having largest 'sal' for each dno. Thus the result would be:

eno   ename   dno   sal       mgr
----------------------------------
101   A       1     1000.00   103
102   B       1     2000.00   103
103   C       1     3000.00
104   D       2     1000.00   105
105   E       2     2000.00
----------------------------------
I wrote a query like this:

UPDATE emp e1 SET mgr = (SELECT eno FROM emp e2
WHERE sal = (SELECT MAX(sal)
FROM emp e3
WHERE e2.dno=e3.dno)
AND e1.dno = e2.dno)
WHERE e1.sal < (SELECT MAX(sal) FROM emp e3
WHERE e1.dno = e3.dno);

There are around 6 million rows in the table (which has about
350 columns). I feel that since I am using the same sub-query in 2 places -- (SELECT MAX(sal) FROM emp e3
WHERE e1.dno = e3.dno) -- Oracle has to evaluate it 2 times, which will reduce performance.
In what way can we let Oracle know that these are two similar queries retrieving the same data from the table?

Thanks and regards

Praveen KV


Tom Kyte
August 09, 2003 - 5:46 pm UTC

ops$tkyte@ORA920.US.ORACLE.COM> create table emp as select * from scott.emp;

Table created.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> update emp set mgr = null;

14 rows updated.

ops$tkyte@ORA920.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> update ( select sal, mgr,
  2                 (select max( to_char( e2.sal, 'fm000000000.00' ) || e2.empno )
  3                          from emp e2
  4                         where e2.deptno = emp.deptno ) new_mgr
  5             from emp )
  6     set mgr = substr( new_mgr, 13 )
  7   where sal < substr( new_mgr, 1, 12 )
  8  /

10 rows updated.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> break on deptno skip 1
ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> select deptno, sal, empno, mgr
  2    from emp
  3   order by deptno, sal desc
  4  /

DEPTNO        SAL      EMPNO        MGR
------ ---------- ---------- ----------
    10       5000       7839
             2450       7782       7839
             1300       7934       7839

    20       3000       7788
             3000       7902
             2975       7566       7902
             1100       7876       7902
              800       7369       7902

    30       2850       7698
             1600       7499       7698
             1500       7844       7698
             1250       7521       7698
             1250       7654       7698
              950       7900       7698


14 rows selected.
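
If you would rather avoid the string padding trick, the same update can be written with the FIRST/LAST aggregate available in 9i -- a sketch:

update ( select sal, mgr,
                (select max(e2.empno) keep (dense_rank last order by e2.sal)
                   from emp e2
                  where e2.deptno = emp.deptno) new_mgr,
                (select max(e2.sal)
                   from emp e2
                  where e2.deptno = emp.deptno) max_sal
           from emp )
   set mgr = new_mgr
 where sal < max_sal;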
 

insert/update in faster way

Shahadat, September 29, 2003 - 4:35 am UTC

I have a table named A containing, say, 100,000 records. I have another table B containing
10,000 incremental and edited records for table A. Both tables have the same
structure and 46 fields. I am using the following code to append data from B to A.
--For incremental/New data-----
insert into A
select * from B where column_name NOT IN
(select column_name from B);
--For Edited Data-------
cursore C_AB
select * from A
minus
select * from A
For R in C_AB
loop
Update A set....where ...
end loop
End;

It's working, but it takes a huge amount of time and sometimes hangs the computer.
Would you please help me make my procedure faster?

Tom Kyte
September 29, 2003 - 8:08 am UTC

not a version to be seen....  hmmmm.

to update 10,000 rows in a 100,000 row table should take seconds (it'll be a direct function of the number of indexes).  And it should take no procedural code at all.

In 8i and before, it'll take 2 sql statements.
In 9i it'll take one sql statement.

consider:

ops$tkyte@ORA920> @big_table 100000
ops$tkyte@ORA920> create table big_table
  2  as
  3  select rownum id, a.*
  4    from all_objects a
  5   where 1=0
  6  /
Table created.
 
ops$tkyte@ORA920> declare
  2      l_cnt number;
  3      l_rows number := &1;
  4  begin
  5      insert /*+ append */
  6      into big_table
  7      select rownum, a.*
  8        from all_objects a;
  9
 10      l_cnt := sql%rowcount;
 11
 12      commit;
 13
 14      while (l_cnt < l_rows)
 15      loop
 16          insert /*+ APPEND */ into big_table
 17          select rownum+l_cnt,
 18                 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 19                 OBJECT_ID, DATA_OBJECT_ID,
 20                 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 21                 TIMESTAMP, STATUS, TEMPORARY,
 22                 GENERATED, SECONDARY
 23            from big_table
 24           where rownum <= l_rows-l_cnt;
 25          l_cnt := l_cnt + sql%rowcount;
 26          commit;
 27      end loop;
 28  end;
 29  /
old   3:     l_rows number := &1;
new   3:     l_rows number := 100000;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA920> alter table big_table add constraint
  2  big_table_pk primary key(id)
  3  /
Table altered.
 
 
ops$tkyte@ORA920> select count(*) from big_table;
 
  COUNT(*)
----------
    100000
 
ops$tkyte@ORA920> create table small_table
  2  as
  3  select * from big_table
  4  where 1=0;
Table created.
 
ops$tkyte@ORA920> insert into small_table
  2  select * from big_table
  3  where mod(id,20) = 0;
5000 rows created.
 
ops$tkyte@ORA920> insert into small_table
  2  select -rownum, a.*
  3  from all_objects a
  4  where rownum <= 5000;
5000 rows created.
 
ops$tkyte@ORA920> analyze table big_table compute statistics for table;
Table analyzed.
 
ops$tkyte@ORA920> analyze table small_table compute statistics for table;
Table analyzed.
 
ops$tkyte@ORA920> set timing on

<b>here is the 9i way..</b>

ops$tkyte@ORA920> merge into big_table B
  2  using small_table S
  3  on ( b.id = s.id )
  4  when matched then update set
  5   b.owner = s.owner, b.object_name = s.object_name,
  6   b.subobject_name = s.subobject_name, b.object_id = s.object_id,
  7   b.data_object_id = s.data_object_id, b.object_type = s.object_type,
  8   b.created = s.created, b.last_ddl_time = s.last_ddl_time,
  9   b.timestamp = s.timestamp, b.status = s.status, b.temporary = s.temporary,
 10   b.generated = s.generated, b.secondary = s.secondary
 11  when not matched then insert
 12   ( ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
 13     OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
 14     GENERATED, SECONDARY )
 15  values
 16   ( s.ID, s.OWNER, s.OBJECT_NAME, s.SUBOBJECT_NAME, s.OBJECT_ID, s.DATA_OBJECT_ID,
 17     s.OBJECT_TYPE, s.CREATED, s.LAST_DDL_TIME, s.TIMESTAMP, s.STATUS, s.TEMPORARY,
 18     s.GENERATED, s.SECONDARY );
 
10000 rows merged.
 
Elapsed: 00:00:01.22

<b>that won't "hang the computer" for very long..</b>

ops$tkyte@ORA920> rollback;
Rollback complete.
 
Elapsed: 00:00:02.19

ops$tkyte@ORA920> alter table small_table add constraint small_table_pk primary key(id)
  2  /
Table altered.
 
Elapsed: 00:00:00.34

<b>this is needed to do this in 8i -- small table needs a primary key</b>

ops$tkyte@ORA920> update ( select b.ID B_ID, b.OWNER B_OWNER, b.OBJECT_NAME B_OBJECT_NAME,
  2  b.SUBOBJECT_NAME B_SUBOBJECT_NAME, b.OBJECT_ID B_OBJECT_ID,
  3  b.DATA_OBJECT_ID B_DATA_OBJECT_ID, b.OBJECT_TYPE B_OBJECT_TYPE,
  4  b.CREATED B_CREATED, b.LAST_DDL_TIME B_LAST_DDL_TIME,
  5  b.TIMESTAMP B_TIMESTAMP, b.STATUS B_STATUS,
  6  b.TEMPORARY B_TEMPORARY, b.GENERATED B_GENERATED,
  7  b.SECONDARY B_SECONDARY,
  8  s.ID S_ID, s.OWNER S_OWNER, s.OBJECT_NAME S_OBJECT_NAME,
  9  s.SUBOBJECT_NAME S_SUBOBJECT_NAME, s.OBJECT_ID S_OBJECT_ID,
 10  s.DATA_OBJECT_ID S_DATA_OBJECT_ID, s.OBJECT_TYPE S_OBJECT_TYPE,
 11  s.CREATED S_CREATED, s.LAST_DDL_TIME S_LAST_DDL_TIME,
 12  s.TIMESTAMP S_TIMESTAMP, s.STATUS S_STATUS,
 13  s.TEMPORARY S_TEMPORARY, s.GENERATED S_GENERATED,
 14  s.SECONDARY S_SECONDARY
 15  from big_table b, small_table s
 16  where b.id = s.id )
 17  set
 18  b_ID = s_ID,
 19  b_OWNER = s_OWNER,
 20  b_OBJECT_NAME = s_OBJECT_NAME,
 21  b_SUBOBJECT_NAME = s_SUBOBJECT_NAME,
 22  b_OBJECT_ID = s_OBJECT_ID,
 23  b_DATA_OBJECT_ID = s_DATA_OBJECT_ID,
 24  b_OBJECT_TYPE = s_OBJECT_TYPE,
 25  b_CREATED = s_CREATED,
 26  b_LAST_DDL_TIME = s_LAST_DDL_TIME,
 27  b_TIMESTAMP = s_TIMESTAMP,
 28  b_STATUS = s_STATUS,
 29  b_TEMPORARY = s_TEMPORARY,
 30  b_GENERATED = s_GENERATED,
 31  b_SECONDARY = s_SECONDARY
 32  /
 
5000 rows updated.
 
Elapsed: 00:00:02.82

ops$tkyte@ORA920> insert into big_table
  2  select * from small_table
  3  where id not in (select id from big_table)
  4  /
 
5000 rows created.
 
Elapsed: 00:00:00.25
ops$tkyte@ORA920>

<b>so, a couple of seconds (add time for each index you have of course)</b>

NO PROCEDURAL CODE!!! 

insert/update in a faster way

Shahadat, October 01, 2003 - 7:00 am UTC

Thanks for the kind reply.
Here I am using Oracle 8i (8.1.6.0.0), Developer 6i and Windows 2000 Server.
I have a table named A containing, say, 100,000 records. I have another table B containing
10,000 incremental and edited records for table A. Both tables have the same
structure and 46 fields. I am using the following code to append data from B to A.
--For incremental/New data-----
insert into A
select * from B where column_name NOT IN
(select column_name from B);
--For Edited Data-------
cursore C_AB
select * from B
minus
select * from A
For R in C_AB
loop
Update A set....where ...
end loop
End;

It's working, but it takes a huge amount of time and sometimes hangs the computer.
Would you please help me make my procedure faster? You have given the information on a 9i basis -- will it be the same for Oracle 8i and Developer 6i? Please help me.


Tom Kyte
October 01, 2003 - 8:50 am UTC

hmmm, how to say this

"page up"


this was asked and answered already.

Insert problem (taking time)

Shahadat, October 02, 2003 - 4:29 am UTC

Thank you very much for your kind reply. Your update information is really nice and
working very well, but the insert has the same problem. I don't understand what the
problem is, so I am going to give you a full overview. I have a
centralized place (head office). The software is used in different
parts of the country for data entry, report generation etc. Every month
each client office gives its data (NEW & EDITED), selected by date range,
to the head office on CD. The head office then merges the data into its
system. That is how the whole system works.

To migrate the data, first of all I create another
temporary user named VISTEMP and then run this kind of code:

insert into VISTEMP.REGISTRATION(BIN,NAME,NAME_ALIAS,COR_GROUP,AUTHOBY,AUTHODATE,CG_NAME,ADD1,ADD2,ADD3,TEL1,FAX1,ADD5,ADD6,ADD7,TEL2,FAX2,ADD9,ADD10,ADD11,TEL3,FAX3,TP_TYPE,TRD_LINC,TRD_FY,TRD_AUTH,IMP_REG,EXP_REG,REG_TYPE,TIN,STATUS,PRE_BIN,DATREG,STAT_CHNG,ACT_CODE,
ACT_MULT,ITEM_TYPE,OLD_ACT,APP_CAT,LCODE,
ISSUE_DATE,VREG,M_POSI,MFUNC,SFUNC,LAST_USER,LAST_ACCS,TREG,PAY_FREQ,CREG,EREG,OREG,OP_BAL,OP_BAL_DT)
select BIN,NAME----
from VIS.REGISTRATION where BIN NOT IN
(select BIN from VIS.REGISTRATION);

When I run it, the computer becomes very busy (hangs).

When I am sending the data I am using a date range -- am I doing it the right way? Please help me.

Tom Kyte
October 02, 2003 - 7:50 am UTC

you are probably using the brain dead RBO

you are using NOT IN

it is slow -- deadly slow.

use the CBO and make sure BIN in VIS.REGISTRATION is "not null". Read:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7565644027229

for the "why"
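
Concretely, the two changes look something like this (a sketch, using the names from your post):

alter table vis.registration modify ( bin not null );

-- gather statistics so the CBO is used rather than the RBO
analyze table vis.registration compute statistics;
analyze table vistemp.registration compute statistics;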

insert in a faster way

Shahadat, October 04, 2003 - 5:06 am UTC

Thank you very much for your kind and very helpful reply. It's working very nicely; I will be grateful to you forever -- I had been suffering with this problem for a long time. Now I am using the following:
/
insert into VISTEMP.REGISTRATION(BIN,NAME,NAME_ALIAS,COR_GROUP,AUTHOBY,AUTHODATE,CG_NAME,ADD1,ADD2,ADD3,TEL1,FAX1,ADD5,ADD6,ADD7,TEL2,FAX2,ADD9,ADD10,ADD11,TEL3,FAX3,TP_TYPE,TRD_LINC,TRD_FY,TRD_AUTH,IMP_REG,EXP_REG,REG_TYPE,TIN,STATUS,PRE_BIN,DATREG,STAT_CHNG,ACT_CODE,ACT_MULT,ITEM_TYPE,OLD_ACT,APP_CAT,LCODE,ISSUE_DATE,VREG,M_POSI,MFUNC,SFUNC,LAST_USER,LAST_ACCS,TREG,PAY_FREQ,CREG,EREG,OREG,OP_BAL,OP_BAL_DT)select BIN,NAME,NAME_ALIAS,---- from VIS.REGISTRATION where not exists
(select null from VIStemp.REGISTRATION where vistemp.registration.bin=vis.registration.bin)
/
Am I right?
Again I have some queries:
a) What do CBO/RBO mean?
b) In the previous question I told you that I am exporting incremental and edited data using the system date. Whenever any user inserts/updates anything, the system date is stored with it, and I export the data using that system date. Am I going about it the right way?
c) If I have a composite key, then
"---where a.key1=b.key1 and a.key2=b.key2 and----"
-- am I right to do that for both insert and update, given your advice?
d) I want to call a Windows dialogue box from Forms.
Thanks.



Tom Kyte
October 04, 2003 - 9:56 am UTC



a) uh oh. you don't know what CBO and RBO mean? Please see:
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96533/optimops.htm#721

b) i don't know? are you? it is your system.... i could envision problems with only a second of granularity on a clock -- you could miss a record or two due to that.

c) er?

d) I don't program forms -- you can try otn.oracle.com -> discussion forums.

Update problem

Shahadat, October 06, 2003 - 2:04 am UTC

Thank you very much for your nice reply.
In the update portion I am facing another problem with your recommended code. My B (big table) has 79,186 records and my S (small table) has 12,871 records.

UPDATE
(SELECT B.BIN B_BIN,S.BIN S_BIN
FROM VISTEMP.REGISTRATION B,VIS.REGISTRATION S
WHERE B.BIN=S.BIN) SET
B_BIN=S_BIN;

12871 rows updated.

Elapsed: 00:00:28.91

UPDATE
(SELECT B.BIN B_BIN,S.BIN S_BIN
FROM VIS.TESTREG B,VIS.REGISTRATION S
WHERE B.BIN=S.BIN) SET
B_BIN=S_BIN;

2 rows updated.

Elapsed: 00:00:02.93

Do you see any difference? When I use a different user, it updates ALL the records and takes too much time,
but when I use the same user with a different table it updates only the originally edited records and takes little time. Why is this happening??

Tom Kyte
October 06, 2003 - 8:04 am UTC

obviously the two tables have DIFFERENT DATA????

VISTEMP.REGISTRATION apparently has 12,871 records that match the S.BIN value. It is not updating EVERY record in VISTEMP.REGISTRATION, you said "that table has 79,186 records" -- only 12,871 (which happens to match the number of records in S) are updated.


It is apparent that when you join vis.registration to vistemp.registration -- 12,871 rows are "matched".

when you join to testreg -- 2 rows are.

You are comparing apples to toaster ovens.

The differences in run time is easily explained -- it takes longer to update 6,000 times the data.


there is nothing wrong here -- the computer is just doing exactly what you told it to do.

again insert problem

Shahadat, October 06, 2003 - 4:18 am UTC

Hi Tom
Again the same problem. It is taking 34 minutes for the insert. What is happening? What should I do? I will lose my job.

SQL>         INSERT INTO VISTEMP.HSCODELIST
  2          (BIN,ACT_SL,ACT_CODE,ACT_VAL,ENTRY_DATE)
  3          SELECT BIN,ACT_SL,ACT_CODE,ACT_VAL,ENTRY_DATE 
  4          FROM VIS.HSCODELIST
  5          WHERE not exists
  6          (SELECT NULL FROM VIStemp.HSCODELIST where vistemp.HSCODELIST.bin=vis.HSCODELIST.bin and 
  7          vistemp.HSCODELIST.act_sl=vis.HSCODELIST.act_sl)
  8  /

12881 rows created.

Elapsed: 00:34:2098.07 

Tom Kyte
October 06, 2003 - 8:07 am UTC



sorry about that? not much i can do sitting here, with this amount of information.

do you know how to use sql_trace?
tkprof?


read about them in the performance guide, you need them in order to see what this is doing and then tune it.

I have a related question

brao, October 06, 2003 - 8:44 am UTC

Hi,
I have a question. Say, for example,
we have an update statement which fetches values from different tables for the update.
We have a table upd (which has to be updated): upd(p,a,b,c)
Table a(p,a)
Table b(p,b)
Table c(p,c)

I have an update like this:
update upd set a = (select (........) from a where upd.a=a.a),
b = (select (........) from b where upd.b=b.b),
c = (select (........) from c where upd.c=c.c)

but tables a, b and c each have around 3,000,000 records,
so the query takes a loooong time to finish.
What could be the best approach to update this table?
The tables a, b and c are remote tables.

Hope I made my question clear.
Thanks
brao



Tom Kyte
October 06, 2003 - 10:29 am UTC

are there values of "p" in A that are not in B or C and vice versa -- eg -- if you join a to b to c without using full outer joins -- would you lose any rows?

versions are always necessary.

I have a related question

Brao, October 06, 2003 - 10:42 am UTC

Sorry about that.
I have Oracle 9i and I'm using this update statement in a stored procedure.
Yes, there are some rows in the upd table which are not in a, b or c.
And moreover I'm using aggregate functions like min(), max() ... group by p in each subquery,
like
update upd set a = (select min(a) from a where upd.a=a.a group by a.p),
b = (select (....) from b where upd.b=b.b),
c = (select (........) from c where upd.c=c.c)

Thanks for answering my question; we (our team) always admire your way of solving problems.
Brao

Tom Kyte
October 06, 2003 - 10:58 am UTC

no, i mean

does A have a value of P that is not in B

that is, would we have to full outer join A to B to C in order to get all of the rows.

i have a question

BRao, October 06, 2003 - 11:35 am UTC

Yes, there are some rows in a which are not in b and c.
Further elaborating the query:

UPDATE upd au
Set purchase_no = (SELECT Count(DISTINCT WebOrderID)
FROM A ot
WHERE total > 0 and
(ot.ostatus <> -1 and ot.ostatus <> -1) and
ot.UREFITEM = au.UREFITEM),
Amount = (SELECT ISNULL(Sum(Total), 0)
FROM b ot
WHERE (ot.ostatus <> -1 and ot.ostatus <> -1) and
ot.UREFITEM = au.UREFITEM),
date1 = (SELECT min(sodate)
FROM c so left join E ot on ot.sorefitem = so.sorefitem
WHERE total > 0 and
(so.ostatus <> -1 and so.ostatus <> -1) and
so.UREFITEM = au.UREFITEM),
date2 = (SELECT max(sodate)
FROM d so left join E ot on ot.sorefitem = so.sorefitem
WHERE total > 0 and
(so.ostatus <> -1 and so.ostatus <> -1) and
so.UREFITEM = au.UREFITEM);

Tom Kyte
October 06, 2003 - 1:31 pm UTC

what I suggest then is not to do it in a single sql statement -- just proving that "there are exceptions to every rule".  Normally, I would try to use a single sql statement -- but not here, due to the data being spread all over the place and being distributed and all.

this shows how I would approach getting the first two columns -- just add the other 2 and use merge to keep filling temp -- and then update the join:

ops$tkyte@ORA920> create global temporary table temp
  2  ( urefitem number(3) primary key,
  3    purchase_no int,
  4    amount  int
  5  )
  6  /
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table upd
  2  ( urefitem number(3),
  3    purchase_no int,
  4    amount int
  5  )
  6  /
 
Table created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into upd
  2  select rownum, null, null from all_objects where rownum < 1000;
 
999 rows created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into temp( urefitem, purchase_no)
  2  select urefitem, count(distinct weborderid)
  3    from a@remote
  4   where total > 0
  5     and ostatus <> -1
  6   group by urefitem
  7  /
 
388 rows created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> merge into temp
  2  using (select urefitem, sum(total) sum_total
  3           from b@remote
  4                  where ostatus <> -1
  5              group by urefitem ) b
  6  on (temp.urefitem = b.urefitem)
  7  when matched then update set amount = b.sum_total
  8  when not matched then insert (urefitem,amount) values ( b.urefitem, b.sum_total)
  9  /
 
398 rows merged.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> update ( select upd.purchase_no u_pno, upd.amount u_a,
  2                  temp.purchase_no t_pno, temp.amount t_a
  3                     from upd, temp
  4            where upd.urefitem = temp.urefitem )
  5    set u_pno = t_pno, u_a = t_a
  6  /
 
635 rows updated.
 
 

i have a quesion

brao, October 06, 2003 - 3:31 pm UTC

hi,
thanks for the idea , it works :)
But can I follow the approach of updating each column using a separate cursor (to catch the errors) when updating on a bulk basis?
Thanks again
BRAO

Tom Kyte
October 06, 2003 - 3:47 pm UTC



catch what errors? looks pretty straight forward -- numbers.


using a cursor means you are back to "slow=very_true"


you already WERE updating on a bulk basis??? i didn't change that at all.

i have a question

brao, October 06, 2003 - 3:57 pm UTC

To catch dml errors.
but that is fine , i got your short cut working great.
and the query is working.
Thanks
Brao



UPDATE PROBLEM

Shahadat, October 08, 2003 - 4:36 am UTC

Hi Tom
SQL> DESC VIS.HSCODELIST;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 BIN                                                VARCHAR2(10)
 ACT_SL                                             VARCHAR2(3)
 ACT_CODE                                           VARCHAR2(11)
 ACT_VAL                                            NUMBER(14,2)
 ENTRY_DATE                                         DATE

SQL>  DESC VIS.DBHSCODELIST;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 BIN                                                VARCHAR2(10)
 ACT_SL                                             VARCHAR2(3)
 ACT_CODE                                           VARCHAR2(11)
 ACT_VAL                                            NUMBER(14,2)
 ENTRY_DATE                                         DATE

SQL>  UPDATE (SELECT DBHSCODELIST.BIN B_BIN,DBHSCODELIST.ENTRY_DATE B_ENTRY_DATE,
  2  HSCODELIST.BIN S_BIN,HSCODELIST.ENTRY_DATE S_ENTRY_DATE
  3  FROM DBHSCODELIST,HSCODELIST 
  4  where DBHSCODELIST.BIN=HSCODELIST.BIN)
  5  SET B_BIN=S_BIN,B_ENTRY_DATE =S_ENTRY_DATE;
SET B_BIN=S_BIN,B_ENTRY_DATE =S_ENTRY_DATE
    *
ERROR at line 5:
ORA-01779: cannot modify a column which maps to a non key-preserved table
what's wrong??? 

Tom Kyte
October 08, 2003 - 6:58 am UTC

hscodelist is perhaps missing the mandatory primary/unique constraint on BIN?

the database needs to know that each row in dbhscodelist will map to AT MOST one row in hscodelist - this mandates a primary or unique key constraint on the join columns

this is discussed in the original answer above.
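
So, assuming BIN really is unique in HSCODELIST, something like this should let the update of the join go through (a sketch -- and note that setting B_BIN = S_BIN is redundant, since the rows are joined on BIN):

alter table hscodelist
    add constraint hscodelist_bin_uk unique ( bin );

update ( select dbhscodelist.entry_date b_entry_date,
                hscodelist.entry_date   s_entry_date
           from dbhscodelist, hscodelist
          where dbhscodelist.bin = hscodelist.bin )
   set b_entry_date = s_entry_date;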

update too slow

anandhi, December 23, 2003 - 1:50 pm UTC

We have a 2-CPU machine where, at normal times, the topmost entry in the top command shows only 0.2 or 0.3 percent CPU use. But when I run the following query, it takes up 50% of the CPU and takes around 8 to 12 minutes to run. The tables
concerned have around 12,000 rows each. This is on a test database where nothing else is going on concurrently.

tabA has these columns: id, cycle, pop
tabB has these columns: id, cycle, site_id,rel_cd,groupid


update tabA a
set pop=
(select count(*)
from tabB b
where b.id=a.id
and a.cycle >= b.cycle
and b.site_id=44
and b.rel_cd in('code1','code2','code3')
and b.groupid='123')
where pop is null and id in(select id from tabB);


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      2    496.35     499.54    7530955    9902630      76532      11444
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3    496.35     499.54    7530955    9902630      76532      11444

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 305

Rows Row Source Operation
------- ---------------------------------------------------
1 UPDATE tabA
11445 MERGE JOIN
5942 VIEW VW_NSO_1
5942 SORT UNIQUE
31227 TABLE ACCESS FULL tabB
17385 SORT JOIN
12601 TABLE ACCESS FULL tabA

Now my questions are:

1. Is this time, 10 minutes for 12000 rows tables an acceptable (expected) time?
2. Where do I start with tuning this query?

We have several such updates that creates the same problems on the server from
time to time and I would appreciate some guidance to resolve this.
Thanks
Anandhi

Tom Kyte
December 23, 2003 - 5:16 pm UTC

well, you've really got to ask:

is this time, 10 minutes, acceptable for running this query:

select count(*)
from tabB b
where b.id=:x1
and :x2 >= b.cycle
and b.site_id=44
and b.rel_cd in('code1','code2','code3')
and b.groupid='123'

12,000 times?


That means -- just using math here -- that we have 600 seconds and 12,000 queries to run; 12,000/600 = 20, so we are doing 20 per second -- or each query is taking 0.05 cpu seconds to run.

0.05 cpu seconds is awesome for a query.

do anything 12,000 times and you might have a problem tho!


Ok, so what are some methods to correct this? this might be one of the rare times that a temp table can be useful. What about:



create global temporary table gtt
( id  int primary key,
  cnt int
)
on commit delete rows
/

you'll add that ONCE, it'll become part of your schema forever....

Now, you "two step" it:



insert into gtt
select b.id, count(*) cnt
  from tabb b, taba a
 where a.id = b.id
   and a.cycle >= b.cycle
   and b.site_id = 44
   and b.rel_cd in ( 'code1', 'code2', 'code3' )
   and b.groupid = '123'
   and a.pop is null
 group by b.id
/

that gets all of the id/cnts for only the rows of interest. Now we can update the join:

update ( select a.pop, b.cnt
           from taba a, gtt b
          where a.id = b.id )
   set pop = cnt
/


and thats it. give that a whirl -- let us know how it does!

performance issue

A Reader, December 31, 2003 - 5:53 am UTC

Hi Tom,
I'm selecting approximately 1 million records from some tables and populating another set of tables. Here the source tables have data with leading spaces and the target data should be without spaces.
I plan to use the RTRIM function in the WHERE clause to get the specific set of records and then, after fetching, I'll insert with an RTRIM on each column.
INSERT into abc values (RTRIM(col1), RTRIM(col2), ...)
There is another option where I can use SELECT RTRIM(col1), RTRIM(col2) from xxx where RTRIM(col1) = 'yyy' and then directly insert into the target table without RTRIM.
Can you please tell me which option performs better if the number of records is in the millions.


Tom Kyte
December 31, 2003 - 9:48 am UTC

insert as select

will be better than

for x in ( select )
insert


in virtually all cases.

Update

ARC, January 02, 2004 - 3:22 am UTC

Hi Tom,
I need your help in update.
I have a table t1 which has all the orders information.
Another table, tt1, a summary of orders, has current year summation and corresponding previous year summation columns.
SQL> desc tt1
Name                            Null?    Type
------------------------------- -------- ------------
ORDER_NUMBER                             NUMBER(10)
ORDER_DATE                               DATE
CY_ORD_AMT                               NUMBER
PY_ORD_AMT                               NUMBER

The order date is the current year date. I am getting the current year data into the above table first; after that I am trying to update the previous year amount. In this update I want to find the equivalent date/day in the previous year corresponding to the current year date and update the amount. For this I am using another table to find previous year dates for current year dates. But I am not able to identify the correct record to update.
Please help.
Thanks
  

Tom Kyte
January 02, 2004 - 9:21 am UTC

and i, not knowing how your data all fits together, cannot say either.

one would need an example (complete, yet concise, with sample data and an explanation of how the data all fits together)

automatic update

reader, March 21, 2004 - 5:08 am UTC

hi tom ,

desc child_table
-------------------
child_id number ;
child_birth_date date;
child_20_flag number(1) ;

where the child_20_flag should indicate whether the child has reached 20 years of age or not.
Now, what is the best way to handle it automatically?



Tom Kyte
March 21, 2004 - 9:59 am UTC

don't store it. it is called "derived data"

create or replace view v
as
select child_id, child_birth_date,
       case when months_between( sysdate, child_birth_date ) >= 12*20 then 'Y'
            else 'N'
       end child_20_flag
from child_table;

that is the right way to do it.
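
Then anything that needs the flag just selects it -- it is always current, never maintained:

select child_id, child_birth_date, child_20_flag
  from v
 where child_20_flag = 'Y';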

Reader., March 21, 2004 - 1:45 pm UTC

Dear sir,
Actually my problem is much bigger than that simple example; I just tried to give you a sample of my problem.
I just need to know the best way to update some column value that depends on other columns' values in the same table (and/or) in other tables,
especially when one of them is a date.
I have made a script that runs every day at 5 am, but I need to know the best way to do this.

Tom Kyte
March 21, 2004 - 7:01 pm UTC

they are derived columns and should not be stored -- especially something like "20 years old flag" as that can change at any second.


that is, and will remain, my answer unless you give me a real world scenario that would demand otherwise (the 20 year old flag should NOT be stored, period)

delete child table

A reader, March 22, 2004 - 5:44 am UTC

Hi

I have three tables, A, B and C

A is parent of B and C, A has a composite PK deptno and dname

I would like to know if following delete SQLs are equivalent

1.------------------------------------------
delete b
where
(deptno, dept_name) in (select a.deptno, dname
from a, c
where a.deptno = c.deptno
and a.dname = c.dept_name);

2.-----------------------------------------
delete b
where
deptno = (select a.deptno
from a, c
where a.deptno = c.deptno
and dname = dpt_name)
and
dept_name = (select a.dname
from a, c
where a.deptno = c.deptno
and a.dname = c.dpt_name);

Are they :-?

Tom Kyte
March 22, 2004 - 7:16 am UTC

hows about this -- you write out in english what each one does -- write the "specs" if you will that describe what each query does.

then, we'll all know.


(but #2 looks "wrong" if a's primary key is deptno,dname -- that means a given deptno can return many dnames and a dname many deptnos and hence the "deptno = ( select .... )" would return more than one row in general leading to an error)....




big table update

Lee, May 10, 2004 - 4:22 pm UTC

Tom,

I'm using Oracle 9i second version.

I have a table with about 5,000,000 records; the table has about 70 columns. I need to update 1 column (a number) with a column from another table (around 350,000 records) based on a join value in the 2 tables. In the first table (the 5,000,000 record table) all the records will be updated.

UPDATE /* NOLOGGING */ BIG_TABLE A SET
A.MID_TABLE_ID = (SELECT A.MID_TABLE_ID
FROM MID_TABLE B
WHERE A.JOIN_COL = B.JOIN_COL);

Now, if I do a regular simple update (like above) it takes forever (I never actually had a chance to complete it). I tried all kinds of variations, loading the data into collections and using index-by tables, with no real success in improving performance.

Can you think of a good way to do this in as little time as possible? (I'll be happy with anything under 20 minutes)

Thanks in advance.



Tom Kyte
May 10, 2004 - 8:54 pm UTC

nologging is not a hint.
nologging will *never* be useful for an update.

you do realize your update sets a.mid_table_id to either:

a) itself (you selected a.mid_table_id again)
b) null if nothing exists in b

probably *not* what you meant?

I'll assume you *meant* b.mid_table_id


as long as that column is not indexed, this won't take very long at all.

update ( select a.mid_table_id a_mtid, b.mid_table_id b_mtid
from big_table a, mid_table b
where a.join_col = b.join_col )
set a_mtid = b_mtid;


that assumes (naturally) that a primary/unique constraint exists on mid_table(join_col) -- if not, add one -- it *must be true* or the update would be ambiguous (if many rows can exist in B, which one to use?)

make SURE to use CBO!


can we move not null values into one side and null values into one side?

dmv, May 11, 2004 - 7:09 am UTC

Tom
we have a table with the following structure :

inv_id  addr_1  addr_2  addr_3  addr_4

1       xxx     null    null    yyy
2       null    xxx     null    yyy
3       null    null    xxx     yyy
4       null    null    null    xxx
5       xxx     null    yyy     null
6       xxx     null    yyy     zzz

output should be :

inv_id  addr_1  addr_2  addr_3  addr_4

1       xxx     yyy     null    null
2       xxx     yyy     null    null
3       xxx     yyy     null    null
4       xxx     null    null    null
5       xxx     yyy     null    null
6       xxx     yyy     zzz     null

all Null values should move into one side, and all Not Null values should move into one side.

Is it possible in a single update command?

Thanks
dmv

Tom Kyte
May 11, 2004 - 9:34 am UTC

be nice to have a create table and insert into statements.

Like the review like shows:

....
<quote>
Also if your followup includes an example you want me to look at, I'll need it to have a create table, insert into statements and such that I can easily cut and paste into sqlplus myself (like I give you) in order to play with. I spent too many hours turning something like:

I have a table like:

scott@ORA9IR2> desc dept
 Name                                 Null?    Type
 ------------------------------------ -------- -------------------------
 DEPTNO                                        NUMBER(2)
 DNAME                                         VARCHAR2(14)
 LOC                                           VARCHAR2(13)

with the data:
 
scott@ORA9IR2> select * from dept;
 
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

I *need* to have

I have a table:

create table dept( deptno number(2) primary key, dname varchar2(14), loc varchar2(13) );

with this data:

insert into dept values ( 10, 'accounting', 'new york' );
....

and please -- NO tablespaces/storage clauses, etc. simple, concise examples!
</quote>


ops$tkyte@ORA9IR2> create table t ( id int primary key, a int, b int, c int, d int );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( 1, 1, null, null, 2 );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 2, null, 1, null, 2 );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 3, null, null, 1, 2 );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 4, null, null, null, 2 );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 5, 1, null, 2, null );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 6, 1, null, 2, 3 );
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from t;
 
        ID          A          B          C          D
---------- ---------- ---------- ---------- ----------
         1          1                                2
         2                     1                     2
         3                                1          2
         4                                           2
         5          1                     2
         6          1                     2          3
 
6 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> update
  2  (
  3  select id, olda, oldb, oldc, oldd,
  4         a,
  5             b,
  6         decode(shift,0,c,d) c,
  7             decode(shift,0,d,null) d
  8    from (
  9  select t.*, decode(c,null,1,0) shift
 10    from (
 11  select id, olda, oldb, oldc, oldd,
 12         a,
 13         decode(shift,0,b,1,c,2,d) b,
 14         decode(shift,0,c,1,d) c,
 15         decode(shift,0,d,null) d
 16    from (
 17  select t.*, decode(b,null,decode(c,null,2,1),0) shift
 18    from (
 19  select id, olda, oldb, oldc, oldd,
 20         decode(shift,0,a,1,b,2,c,3,d) a,
 21         decode(shift,0,b,1,c,2,d) b,
 22         decode(shift,0,c,1,d) c,
 23         decode(shift,0,d,1,null) d
 24    from (
 25  select a olda, b oldb, c oldc, d oldd, t.*, decode(a,null,decode(b,null,decode(c,null,3,2),1),0) shift
 26    from t
 27         )
 28             ) t
 29             )
 30             ) t
 31             )
 32  )
 33  set olda = a, oldb = b, oldc = c, oldd = d
 34  /
 
6 rows updated.
 
ops$tkyte@ORA9IR2> select * from t;
 
        ID          A          B          C          D
---------- ---------- ---------- ---------- ----------
         1          1          2
         2          1          2
         3          1          2
         4          2
         5          1          2
         6          1          2          3
 
6 rows selected.


There are probably an infinite number of ways to do it, that was just the first that popped into mind.   make sure all 4 columns are the SAME TYPE of course.

If you don't see what the query is doing -- please run it 
bit by bit from the inside out.  starting with:

25  select a olda, b oldb, c oldc, d oldd, t.*, decode(a,null,decode(b,null,decode(c,null,3,2),1),0) shift
 26    from t


and then

 19  select id, olda, oldb, oldc, oldd,
 20         decode(shift,0,a,1,b,2,c,3,d) a,
 21         decode(shift,0,b,1,c,2,d) b,
 22         decode(shift,0,c,1,d) c,
 23         decode(shift,0,d,1,null) d
 24    from (
 25  select a olda, b oldb, c oldc, d oldd, t.*, decode(a,null,decode(b,null,decode(c,null,3,2),1),0) shift
 26    from t
 27         )


and so on -- you'll see how I built it. 

big table update

A reader, May 11, 2004 - 9:18 am UTC

Tom,

Sorry about the typo (A. instead of B.).

First, how come /*+ NOLOGGING */ will do nothing for updates? I thought that /*+ NOLOGGING */ skips redo/undo (to simplify).

Ok, this is the real query:
UPDATE STG_CLAIM_TRY A SET
A.ELIGIBILITY_KEY = (SELECT B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE AND
A.DATE_SERVICE <= B.DATE_TERMINATION);

If there is no match/join then it's ok to update STG_CLAIM_TRY to null.
I tried your suggestion, but this is a many-to-one join (one on STG_F_ELIGIBILITY_TRY, many on STG_CLAIM_TRY) and I get this error:
ORA-01779: cannot modify a column which maps to a non key-preserved table.

The pk on STG_F_ELIGIBILITY_TRY is ELIGIBILITY_KEY.

This is the explain plan:
Operation Object Name Rows Bytes Cost

UPDATE STATEMENT Optimizer Mode=CHOOSE 5 M 13464
UPDATE STG_CLAIM_TRY
TABLE ACCESS FULL STG_CLAIM_TRY 5 M 119 M 13464
TABLE ACCESS BY INDEX ROWID STG_F_ELIGIBILITY_TRY 3 72 6
INDEX RANGE SCAN IDX_TEMP 3 3



Tom Kyte
May 11, 2004 - 10:09 am UTC

nologging is *not a hint*

nologging is an attribute of a segment.
only bulk operations can use it.

insert /*+ append */ can skip logging of the TABLE data since append writes above the high water mark (does not touch ANY existing data).

insert /*+ append */ cannot skip logging of the INDEX data on that table, regardless of the nologging attribute of an index -- since you are mucking about with EXISTING DATA (and a failure in the middle would destroy your DATA!)

A failure in the middle of an append into a table -- harmless, the temporary extents we were writing to just get cleaned up.

nologging on an index only affects things like:

o create (no existing data)
o rebuild (no existing data is touched)

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:485221567528
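
A short sketch of where NOLOGGING actually comes into play (table names made up):

-- NOLOGGING is a segment attribute...
alter table target_t nologging;

-- ...and only a direct-path (bulk) operation can take advantage of it;
-- index maintenance on target_t is still fully logged
insert /*+ append */ into target_t
select * from source_t;

commit;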



tell us, how fast is the query:

SELECT B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B,
stg_claim_try a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE AND
A.DATE_SERVICE <= B.DATE_TERMINATION

or if by "then it's ok to update STG_CLAIM_TRY to null" you meant
"then it's MANDATORY to update STG_CLAIM_TRY to null", what is the runtime of:

SELECT B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B,
stg_claim_try a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID(+) AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE(+) AND
A.DATE_SERVICE <= B.DATE_TERMINATION(+)


big table update

Lee, May 11, 2004 - 4:15 pm UTC

Tom,
Thanks for the Clarification of NOLOGGING.

If there is no match/join then STG_CLAIM_TRY should be null (in theory, that could never happen).
I changed the index a little so the explain plan is a little different than before; the index on STG_F_ELIGIBILITY_TRY has CLIENT_MEMBER_ID, DATE_EFFECTIVE, DATE_TERMINATION.

**************************************************
This one takes a fraction of a second to show the first results
SELECT B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B,
stg_claim_try a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE AND
A.DATE_SERVICE <= B.DATE_TERMINATION

Operation Object Name

SELECT STATEMENT Optimizer Mode=CHOOSE
TABLE ACCESS BY INDEX ROWID STG_F_ELIGIBILITY_TRY
NESTED LOOPS
TABLE ACCESS FULL STG_CLAIM_TRY
INDEX RANGE SCAN ASDFASDF

**************************************************
This one takes a fraction of a second to show first results
This ran in 3 minutes and 4 seconds
SELECT B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B,
stg_claim_try a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID(+) AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE(+) AND
A.DATE_SERVICE <= B.DATE_TERMINATION(+);

Operation Object Name

SELECT STATEMENT Optimizer Mode=CHOOSE
NESTED LOOPS OUTER
TABLE ACCESS FULL STG_CLAIM_TRY
TABLE ACCESS BY INDEX ROWID STG_F_ELIGIBILITY_TRY
INDEX RANGE SCAN ASDFASDF


If I do a count(*) on either of the 2 queries above, it takes around 25-30 minutes to return.


Thanks.

Tom Kyte
May 11, 2004 - 8:55 pm UTC

need ALL rows -- count(*) doesn't do it.

your update speed will necessarily be gated by the performance of those queries...

based on the really bad plans, I'll guess you are using the RBO? Analyze, use the CBO and look for nice big juicy HASH JOINS
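
For example, gathering statistics with dbms_stats on the two tables in this thread might look like this (a sketch -- the options are deliberately minimal):

begin
    dbms_stats.gather_table_stats( user, 'STG_CLAIM_TRY', cascade => true );
    dbms_stats.gather_table_stats( user, 'STG_F_ELIGIBILITY_TRY', cascade => true );
end;
/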

thanks for the query - Moving not null values into one side

dmv, May 12, 2004 - 2:32 am UTC

Hi Tom
Thank you very much for your query. Next time I will follow your instructions regarding create and insert statements, which helps you answer quickly.

Sorry for the inconvenience.


Regards
dmv

big table update

Lee, May 12, 2004 - 9:16 am UTC

Tom,
I am using the cost based optimizer, I followed your suggestion and analyzed the 2 tables, this is what I get now.

******************************************************

00:01:34 to finish.

SELECT /*+ ALL_ROWS */
B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B,
STG_CLAIM_TRY a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE AND
A.DATE_SERVICE <= B.DATE_TERMINATION;

Operation Object Name Rows Bytes Cost

SELECT STATEMENT Optimizer Mode=HINT: ALL_ROWS 24 K 17499
HASH JOIN 24 K 1016 K 17499
TABLE ACCESS FULL STG_F_ELIGIBILITY_TRY 351 K 8 M 202
TABLE ACCESS FULL STG_CLAIM_TRY 5 M 81 M 13422
****************************************************

00:03:07 to finish.

SELECT /*+ ALL_ROWS */
B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B,
STG_CLAIM_TRY a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID(+) AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) >= B.DATE_EFFECTIVE(+) AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) <= B.DATE_TERMINATION(+);

Operation Object Name Rows Bytes Cost

SELECT STATEMENT Optimizer Mode=HINT: ALL_ROWS 5 M 18490
HASH JOIN OUTER 5 M 200 M 18490
TABLE ACCESS FULL STG_CLAIM_TRY 5 M 81 M 13422
TABLE ACCESS FULL STG_F_ELIGIBILITY_TRY 351 K 8 M 202
********************************************************

UPDATE STG_CLAIM_TRY A SET
A.ELIGIBILITY_KEY = (SELECT B.ELIGIBILITY_KEY
FROM STG_F_ELIGIBILITY_TRY B
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) >= B.DATE_EFFECTIVE AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) <= B.DATE_TERMINATION);

Operation Object Name Rows Bytes Cost

UPDATE STATEMENT Optimizer Mode=CHOOSE 5 M 13422
UPDATE STG_CLAIM_TRY
TABLE ACCESS FULL STG_CLAIM_TRY 5 M 143 M 13422
TABLE ACCESS BY INDEX ROWID STG_F_ELIGIBILITY_TRY 1 25 4
INDEX RANGE SCAN ASDFASDF 1 3
**********************************************************



Tom Kyte
May 12, 2004 - 6:47 pm UTC

my concept now that the join is "fast": use a global temporary table with a primary key - insert the results of the select join into it and update the join (which we can do since the gtt will have a proper primary key on it)

big table update

Lee, May 13, 2004 - 11:04 am UTC

Tom,
I tried that but I think I'm doing something wrong.

You are suggesting this right?

UPDATE ( SELECT /*+ ALL_ROWS */
A.ELIGIBILITY_KEY A_ELIG_KEY,
B.ELIGIBILITY_KEY B_ELIG_KEY
FROM STG_F_ELIGIBILITY_TRY B,
TMP_STG_CLAIM_TRY a
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE >= B.DATE_EFFECTIVE AND
A.DATE_SERVICE <= B.DATE_TERMINATION) SET
B_ELIG_KEY = A_ELIG_KEY;

I created a PK on TMP_STG_CLAIM_TRY (Not on ELIGIBILITY_KEY) and I still get this error:
ORA-01779: cannot modify a column which maps to a non key-preserved table.

I do have a PK on STG_F_ELIGIBILITY_TRY on ELIGIBILITY_KEY.

What am I doing wrong?

Thanks.






Tom Kyte
May 13, 2004 - 12:10 pm UTC

no, I mean:

create global temporary table gtt
( b_elig_key ..., client_member_id ... PRIMARY KEY )
on commit delete rows;


once in your database, then to update:

insert into that gtt the join of A and B as above (add client_member_id to the select list) and then update the join of the gtt to the A table.

big table update

Lee, May 14, 2004 - 2:32 pm UTC

Tom,

This is what I did:

DROP TABLE PROC_CLAIM_ELIG_JOIN_TMP;

CREATE GLOBAL TEMPORARY TABLE PROC_CLAIM_ELIG_JOIN_TMP
(ELIGIBILITY_KEY NUMBER,
CLIENT_MEMBER_ID NUMBER,
DATE_SERVICE DATE)
ON COMMIT PRESERVE ROWS;

ALTER TABLE PROC_CLAIM_ELIG_JOIN_TMP ADD CONSTRAINT PK_PROC_CLAIM_ELIG_JOIN_TMP
PRIMARY KEY(CLIENT_MEMBER_ID, DATE_SERVICE) USING INDEX;

INSERT /*+ APPEND */ INTO PROC_CLAIM_ELIG_JOIN_TMP(ELIGIBILITY_KEY, CLIENT_MEMBER_ID, DATE_SERVICE)
SELECT /* ALL_ROWS */
DISTINCT
B.ELIGIBILITY_KEY,
A.CLIENT_MEMBER_ID,
A.DATE_SERVICE
FROM STG_CLAIM_TRY A,
STG_F_ELIGIBILITY_TRY B
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) >= B.DATE_EFFECTIVE AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) <= B.DATE_TERMINATION

UPDATE (SELECT A.ELIGIBILITY_KEY A_ELIG_KEY,
B.ELIGIBILITY_KEY B_ELIG_KEY
FROM PROC_CLAIM_ELIG_JOIN_TMP A,
STG_CLAIM_TRY B
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE = B.DATE_SERVICE) SET
B_ELIG_KEY = A_ELIG_KEY;



The insert took about 8-10 minutes.
I had to kill the update after more then an hour.

Any suggestions?

Thanks.

Tom Kyte
May 15, 2004 - 12:15 pm UTC

how many rows to be updated

is eligibility key indexed (are you mass updating an indexed key)

is your update bumping into other row level updates.

updating millions of rows is a couple minute process if

o column is not indexed
o you are not contending for the data

updating millions of rows is a couple (hour|day|week|month|year) process potentially otherwise.


(one thing I forgot to mention I think -- use dbms_stats.set_table_stats to set the numrows in the gtt using sql%rowcount after the insert so the optimizer has a clue)
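
A sketch of that last point, wrapping the insert in PL/SQL so SQL%ROWCOUNT is available right after it (the table and column names are the ones used earlier in this thread):

declare
    l_rows pls_integer;
begin
    insert into proc_claim_elig_join_tmp( eligibility_key, client_member_id, date_service )
    select distinct b.eligibility_key, a.client_member_id, a.date_service
      from stg_claim_try a, stg_f_eligibility_try b
     where a.client_member_id = b.client_member_id
       and to_number(to_char(a.date_service,'yyyymmdd')) >= b.date_effective
       and to_number(to_char(a.date_service,'yyyymmdd')) <= b.date_termination;

    l_rows := sql%rowcount;

    -- give the optimizer a row count for the temporary table before the update runs
    dbms_stats.set_table_stats( user, 'PROC_CLAIM_ELIG_JOIN_TMP', numrows => l_rows );
end;
/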

big table update

Lee, May 17, 2004 - 11:09 am UTC

Tom,

In the updated table (STG_CLAIM_TRY) all the records will be updated (value or null), around 5,000,000.

There are no indexes or constraints on STG_CLAIM_TRY.

The updating session is the only session in the db.


I'm with you on the fact that this update should take no more than a few minutes ... but it's not :-). Is there any other information I can provide to help shed some light on this pain-in-the-neck update?

I changed the global temporary table to an index-organized table; the insert takes minutes and the update never finishes (it's still running now, for about 30 minutes already).

CREATE TABLE PROC_CLAIM_ELIG_JOIN_TMP
(ELIGIBILITY_KEY NUMBER,
CLIENT_MEMBER_ID NUMBER,
DATE_SERVICE DATE,
CONSTRAINT PK_PROC_CLAIM_ELIG_JOIN_TMP
PRIMARY KEY(CLIENT_MEMBER_ID, DATE_SERVICE))
ORGANIZATION INDEX
INCLUDING ELIGIBILITY_KEY
OVERFLOW
NOLOGGING
PARALLEL

ALTER TABLE PROC_CLAIM_ELIG_JOIN_TMP ADD CONSTRAINT PK_PROC_CLAIM_ELIG_JOIN_TMP
PRIMARY KEY(CLIENT_MEMBER_ID, DATE_SERVICE) USING INDEX;

INSERT /*+ APPEND */ INTO PROC_CLAIM_ELIG_JOIN_TMP(ELIGIBILITY_KEY, CLIENT_MEMBER_ID, DATE_SERVICE)
SELECT /* ALL_ROWS */
DISTINCT
B.ELIGIBILITY_KEY,
A.CLIENT_MEMBER_ID,
A.DATE_SERVICE
FROM STG_CLAIM_TRY A,
STG_F_ELIGIBILITY_TRY B
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) >= B.DATE_EFFECTIVE AND
TO_NUMBER(TO_CHAR(A.DATE_SERVICE, 'YYYYMMDD')) <= B.DATE_TERMINATION;



ANALYZE TABLE PROC_CLAIM_ELIG_JOIN_TMP COMPUTE STATISTICS FOR TABLE FOR ALL INDEXES FOR ALL INDEXED COLUMNS;

ANALYZE TABLE STG_CLAIM_TRY COMPUTE STATISTICS FOR TABLE FOR ALL INDEXES FOR ALL INDEXED COLUMNS;

UPDATE (SELECT /* ALL_ROWS */ A.ELIGIBILITY_KEY A_ELIG_KEY,
B.ELIGIBILITY_KEY B_ELIG_KEY
FROM PROC_CLAIM_ELIG_JOIN_TMP A,
STG_CLAIM_TRY B
WHERE A.CLIENT_MEMBER_ID = B.CLIENT_MEMBER_ID AND
A.DATE_SERVICE = B.DATE_SERVICE) SET
B_ELIG_KEY = A_ELIG_KEY;

Operation Object Name Rows Bytes Cost Object Node In/Out

UPDATE STATEMENT Optimizer Mode=CHOOSE 5 M 2396
UPDATE STG_CLAIM_TRY
HASH JOIN 5 M 271 M 2396 :Q54396002 P->S QC (RANDOM)
INDEX FAST FULL SCAN PK_PROC_CLAIM_ELIG_JOIN_TMP 1 M 40 M 30 :Q54396000 P->P HASH
TABLE ACCESS FULL STG_CLAIM_TRY 5 M 143 M 1678 :Q54396001 P->P HASH



Tom Kyte
May 17, 2004 - 4:01 pm UTC

you are using parallel query (not necessary)...

but do you see any messages in your alert log like "cannot allocate new log"?

why Bad idea

A reader, May 24, 2004 - 4:58 pm UTC

Could you please give me more explanation of
"committing in for loop == terrible, horrible, really truly BAD idea." since I found such code in our applications.

Thanks!!


Tom Kyte
May 24, 2004 - 7:07 pm UTC

search this site for

ora-01555

also ask yourself, so, what happens when we crash in the middle of the loop. can I restart that process or did the programmer not even begin to think about that eventuality?

A reader, June 14, 2004 - 11:00 pm UTC

1.
Well, we see that you repeatedly advocate the below


tkyte@ORACLE> update ( select a1, b1 from a, b where a.id = b.id )
2 set a1 = b1
3 /

And then, as you state, for doing this kind of update the join column must be a primary key in B, to ensure that only one row in B is used for each row being updated.

Is my understanding right?

2.
Further, as per your demo a merge is faster than a regular update, when a merge does just an update. Why is it so? Is it because of the way MERGE works?



Tom Kyte
June 15, 2004 - 3:21 pm UTC

1) b must have a primary key, yes.

2) a merge is an UPDATE AND INSERT. where did i show merge being faster than a single update? it could be -- if the update could not join.

A reader, June 14, 2004 - 11:07 pm UTC

Is the insert clause of the MERGE statement optional, i.e. can I use the merge just for an update? Does it make sense to use a merge statement for just an update without any inserts?

Tom Kyte
June 15, 2004 - 3:22 pm UTC

in 10g yes,
in 9i no.
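
In 10g an update-only merge could therefore look like this (a sketch -- tables a(id, a1) and b(id primary key, b1) are hypothetical):

merge into a
using b
   on ( a.id = b.id )
 when matched then update set a.a1 = b.b1;

In 9i the WHEN NOT MATCHED clause still has to be coded, even if it can never fire.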

Update without a cursor

siva, January 17, 2005 - 1:21 am UTC

Tom,
I have a similar question on update. I'm providing the sample script below.

create table testupdate
(sno number(4),
itemcode varchar2(8),
value number(4),
appliedvalue number(4))


insert into testupdate (sno,itemcode,value) values(1,'item1',200);
insert into testupdate (sno,itemcode,value) values(2,'item2',100);
insert into testupdate (sno,itemcode,value) values(3,'item3',300);
insert into testupdate (sno,itemcode,value) values(4,'item4',200);
insert into testupdate (sno,itemcode,value) values(5,'item5',50);
insert into testupdate (sno,itemcode,value) values(6,'item6',200);
insert into testupdate (sno,itemcode,value) values(7,'item7',400);


SNO ITEMCODE VALUE APPLIEDVALUE
-------- -------- ---------- ------------
1 item1 200
2 item2 100
3 item3 300
4 item4 200
5 item5 50
6 item6 200
7 item7 400
Now:
I'm writing a stored procedure in which I have to update the appliedvalue column of the above table.
For this I have something called the actualValue which is stored in the variable vnum_actualValue.
This value in the variable vnum_actualValue should be distributed among the appliedvalue column as follows.


vnum_actualValue = 1000;

After the distribution of the value 1000, the updated table should look like this:


SNO ITEMCODE VALUE APPLIEDVALUE
-------- -------- ---------- ------------
1 item1 200 200
2 item2 100 100
3 item3 300 300
4 item4 200 200
5 item5 50 50
6 item6 200 150
7 item7 400 0

( TOTAL:1000 )

Now, I can very well do this with a cursor. Is there any way to do this with a single update statement? I'm using 9i R2.

Thanks very much in advance.




Tom Kyte
January 17, 2005 - 8:09 am UTC

ops$tkyte@ORA9IR2> select * from t;
 
       SNO ITEMCODE      VALUE APPLIEDVALUE
---------- -------- ---------- ------------
         1 item1           200
         2 item2           100
         3 item3           300
         4 item4           200
         5 item5            50
         6 item6           200
         7 item7           400
 
7 rows selected.
 
ops$tkyte@ORA9IR2> merge into t
  2  using (
  3  select sno, itemcode, value,
  4         greatest( least( value, :x -(sum(value) over (order by sno)-value)),0) new_appliedvalue
  5    from t
  6  ) X
  7  on (t.sno = x.sno )
  8  when matched then update set appliedvalue = new_appliedvalue
  9  when not matched then insert (sno) values (null) -- NEVER can happen
 10  /
 
7 rows merged.
 
ops$tkyte@ORA9IR2> select * from t;
 
       SNO ITEMCODE      VALUE APPLIEDVALUE
---------- -------- ---------- ------------
         1 item1           200          200
         2 item2           100          100
         3 item3           300          300
         4 item4           200          200
         5 item5            50           50
         6 item6           200          150
         7 item7           400            0
 
7 rows selected.
 

siva, January 17, 2005 - 9:23 pm UTC

Excellent, as always!!

can I do this?

sue, January 18, 2005 - 1:06 pm UTC

what I want to do is update one column based on the values of 4 other columns like such:

t1:
recordno, begindate

t2:
recordno, date1, date2, date3, date4,

I've tried to do the following

update t1 set t1.begindate =
(select greatest(greatest(st.date1, st.date2),
greatest(st.date3, st.date4)) as greatest
from t2 where t2.recordno = t1.recordno);

it just freezes up on me....

any ideas?




Tom Kyte
January 18, 2005 - 1:09 pm UTC

it is not "freezing up"

it is doing precisely what you asked it to do.


funny -- did you know greatest takes many inputs...

select greatest(a,b,c,d,e) from t.....


since you asked to update every row -- probably best to:


update ( select t1.begindate, greatest(t2.date1, t2.date2, t2.date3, t2.date4) newdt
           from t1, t2
          where t1.recordno = t2.recordno )
   set begindate = newdt;


(eg: like the original examples demonstrate.................)

Raghav, January 29, 2005 - 3:40 am UTC

Hi Tom,

I have a query about porting data from one table to another. Suppose the data in table t1 is ported into table t2.

The number of rows in both tables is the same after porting. But there is a possibility that one row was inserted twice and another row was not inserted at all, even though the record counts in both tables match.

Now, how can I certify that the data in the second table is the same as in the first table?

Thanks and Regards in Advance
Raghav

Tom Kyte
January 29, 2005 - 8:30 am UTC

if you did

insert into t2 select * from t1;

you are "certified" that AS OF the point in time that the insert started, the contents of t2 and t1 are the same (but if t1 was modified at all after the insert started -- they will obviously not be the same)

there is no chance that a row would be inserted twice or skipped, unless you wrote procedural code and had a bug in it, in which case the solution is

a) erase code
b) do insert
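
If you still want to double-check the copy afterwards, one way is a symmetric-difference query (a sketch -- it assumes identical column lists, no LONG/LOB columns, and it ignores duplicate row counts):

select 'in t1, not t2' src, x.* from ( select * from t1 minus select * from t2 ) x
union all
select 'in t2, not t1' src, y.* from ( select * from t2 minus select * from t1 ) y;

No rows returned means the two tables hold the same set of rows.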



update very slow

A reader, April 14, 2005 - 2:15 pm UTC

Hi Tom,

I have an update statement as follows:

update a
set a.col1 = (select b.col1 from b
where b.id = a.id)
where a.col2 between 500000 and 1000000;

This update statement has been running for a very long time. Is there something wrong with the update statement or is there any easier way to do this?

Thanks.


Tom Kyte
April 14, 2005 - 2:20 pm UTC

depends, slow rbo or smarter cbo?

I might have coded that as:

update ( select a.col1 a_col1, b.col1 b_col1
from a, b
where a.id = b.id
and a.col2 between 50000 and 100000 )
set a_col1 = b_col1

assuming that for every a.id where col2 is between those values, there is a b.id to join to (if there isn't, then your update will set a.col1 to null; mine would skip it)


it is all about the plan, what did the plan look like.
is a.col1 indexed?
will that update 1 row, 1000000000000000 rows ?

A reader, April 14, 2005 - 2:27 pm UTC

Hi Tom,

Thanks for the feedback. a.col1 is not indexed and this statement will update 500000 rows. I will try your suggested update and let you know.

Thanks a lot Tom.


A Newbie

A reader, May 22, 2005 - 3:54 pm UTC

Tom,

I need to update 3 columns on table A (200 million rows) with 3 columns from table B (150 million rows). I don't have the space to do a create table as. Table A has 1,000 partitions and many indexes, including an index on the 3 columns to be updated. Table B is not partitioned or indexed. Please advise me on what the most efficient way to accomplish this is. Thanks in advance for your help.

Tom Kyte
May 22, 2005 - 4:47 pm UTC

wow, more background here please.

how many of these rows will be updated. I know there are 200 and 150 million but does that mean 150 million, 200 million, or 5 rows will be actually modified.

why would a "newbie" be placed in charge of doing something so large to what is obviously a "big system"?

what constraints must we live with (eg: that index will have to probably go for a little while, or at least you'll want to)

is this information modified by other stuff at the same time (do we have locking issues to deal with)


A Newbie

A reader, May 22, 2005 - 6:42 pm UTC

150 of the 200 million rows *should* be updated.

Limited resources....also, thought I could use the experience.

Indexes can be dropped if need be.

Nothing else will be hitting this table until the update is complete.

Tom Kyte
May 23, 2005 - 8:07 am UTC

well, I would feel most comfortable if this newbie had a mentor who would help them and verify that what they are about to do is sensible....

you could end up migrating 150 million rows (bad).

you could end up running for N hours and then failing due to some sizing issue (bad).

do you have someone on site to work out the details with, this is a "big deal" basically.

I could give you many ways to accomplish this from

o write procedural code (ugh)

o parallel dml

o don't update (join instead, perhaps this data doesn't belong in that table yet)

o correlated subqueries, update a join (assuming the right keys are in place)

o how to size the undo you'll need.

o how to make this restartable and checkpointable (so you do updates for N minutes, commit, continue on but have the ability to restart where you left off when it fails)

o how to do "do it yourself" parallelism.


I'm afraid of the row migration however, that could indicate that a rebuild is necessary and since you have 1,000 partitions, each partition is only 200,000 records give or take (assuming even distribution -- but now you see assumptions start to creep into the mix) rebuilding each partition with the joined data would be easy (and can bypass redo/undo generation if we needed).

But, do you have someone there to work with?

NY newbie

A reader, May 23, 2005 - 2:41 pm UTC

Hi Tom,

Thanks for your responses. I have found an on site "mentor" to assist me with this, and I believe we are going to change direction...waiting to hear what the chiefs decide. Many many thanks.

question on different form of update,

sns, May 23, 2005 - 5:13 pm UTC

I have seen this kind of update statement in Teradata,
but I get an error in Oracle. Why?

Details:
SQL> desc abc      
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 X                                                  NUMBER

SQL> desc xyz
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 Y                                                  NUMBER

SQL> update abc 
  2  set x=(select y from xyz where abc.x = xyz.y)
  3  where x in (select y from xyz);

10 rows updated.


SQL> UPDATE a
  2  FROM abc a, xyz c
  3  SET a.x = c.y
  4  WHERE EXISTS (SELECT 'X'
  5                FROM xyz c
  6                WHERE a.x = b.y
  7                )
  8  AND a.x = b.y;
FROM abc a, xyz c
*
ERROR at line 2:
ORA-00971: missing SET keyword

The last update statement works in teradata database.

thanks

 

Tom Kyte
May 23, 2005 - 7:20 pm UTC

because we do it the ansi way

update (join)
set ....

you update a "subquery", an inline view.

quick example here:
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:273215737113 <code>

update statement

A reader, May 27, 2005 - 12:26 pm UTC

Hi Tom,

I have 2 tables t1 and t2

create table t1
(
id number,
name varchar2(40),
dnumber varchar2(20)
);

create table t2
(
id number,
dnumber varchar2(20)
);

I want to update t1.dnumber based on t2.dnumber

update t1
set t1.dnumber = (select t2.dnumber from t2
where t2.id = t1.id)
where t1.id in (select t2.id from t2);

This is taking a very long time. The dnumber column is not a primary key. Please help.

Thanks.


Tom Kyte
May 27, 2005 - 12:48 pm UTC

update (select t1.dnumber t1d, t2.dnumber t2d
from t1, t2
where t1.id = t2.id )
set t1d = t2d;


now, if t2(id) is not a primary key -- make it be so, it has to be unique for this update to work anyway.

make sure you are using the cbo

and if dnumber is indexed AND you are updating most of the rows, consider disabling it

AND make sure you are not just getting blocked by some other user (meaning the update isn't slow, you are just frozen)
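
On the "consider disabling it" point above, a sketch of one way to do that (the index name is hypothetical, and the index has to be rebuilt afterwards):

alter session set skip_unusable_indexes = true;   -- default behaviour in 10g
alter index t1_dnumber_idx unusable;              -- no index maintenance during the mass update

update (select t1.dnumber t1d, t2.dnumber t2d
          from t1, t2
         where t1.id = t2.id )
   set t1d = t2d;

alter index t1_dnumber_idx rebuild;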

update problem

Ramis, June 26, 2005 - 6:18 pm UTC

Hello

I am facing a huge problem; I would appreciate your help.

I have a table 'Matches' with columns team1_id, team2_id and series_id


SQL> create table matches (team1_id number, team2_id number, series_id number)
 /

insert into matches (team1_id, team2_id , series_id ) values (1, 2, 1);
insert into matches (team1_id, team2_id , series_id ) values (4, 3, 1);
insert into matches (team1_id, team2_id , series_id ) values (3, 5, 2);
insert into matches (team1_id, team2_id , series_id ) values (2, 6, 2);
insert into matches (team1_id, team2_id , series_id ) values (6, 1, 3);
insert into matches (team1_id, team2_id , series_id ) values (7, 2, 3);
insert into matches (team1_id, team2_id , series_id ) values (9, 4, 3);

SQL> select * from matches
/

Team1_id  Team2_id   Series_id
1               2               1
4               3               1
3               5               2
2               6               2
6               1               3
7               2               3
9               4               3


NOW USING THE FOLLOWING QUERY

SQL> SELECT TEAM1_ID FROM Matches WHERE SERIES_ID = 1
  2  UNION
  3  SELECT TEAM2_ID FROM MATCHES WHERE SERIES_ID = 1
  4  
SQL> /

  TEAM1_ID
----------
         1
         2
         3
         4

IT SHOWS THE DISTINCT "TEAM_IDs" from series 1 (it shows the IDs of all teams who played series 1).


I have another table Series with columns SERIES_ID, SRNO, teams.

SQL> create table series (series_id number, teams_id number)
 /

This table is meant to record which teams played in each series.

The problem is that I don't want to update this table manually; I want it to get updated automatically based on the data entered in the "Matches" table. As soon as the team1_id, team2_id and series_id information is added to the "Matches" table, the "Series" table should get updated according to the format given below.

I want a database procedure which will select the distinct team_ids for each series when data is saved in the Matches table, and then automatically update the Series table with the team_ids and the series_id.

So what I require is that the procedure should automatically update the "Series" table in the following format, based on the data entered in the "Matches" table.

SQL> select * from series
Series_id        Teams_id
1                    1
1                    2
1                    3
1                    4
2                    2
2                    3
2                    5
2                    6
3                    1
3                    2
3                    4
3                    6
3                    7
3                    9

Note: Series_id and teams_id are jointly the primary key, which means the combination cannot be repeated.

best regards,
Ramis.
 

Tom Kyte
June 26, 2005 - 6:57 pm UTC

may I ask "why" -- it seems you are doing things "by series" and with a proper indexes, getting the list of team ids for a series would be a fast operation. What purpose would this table serve?

update problem

Ramis., June 26, 2005 - 8:20 pm UTC

The series table actually has many more columns storing other series information. For clarity I mentioned only the columns related to my problem. I would still like to see this done by the procedure I asked for in my original question.

regards,
Ramis.


Tom Kyte
June 26, 2005 - 8:51 pm UTC

my original question stands: with a proper pair of indexes, this would not be a "long query".

so, why the desire to create this redundant data?

updating a table from external table source

A reader, August 08, 2005 - 6:47 pm UTC

Tom,

I need to update a column of the destination table (table1) from an external table source, table2 (a flat file on the OS). I thought of using a "JOIN UPDATE"

" For ex: update (QUERY)set col1=col2"

but table2 needs a PK on it in order to do that, and a PK cannot be created on an external table.

So a JOIN UPDATE cannot be used DIRECTLY.

I have the following choices to consider :

Method 1:

(a) Create global temporary table from <external table source> on commit delete rows and add PK on it.
(b) Do the JOIN update

Method 2:

(a) Use the Oracle9i MERGE feature. But this will do a row-by-row update, not a bulk update.

To me, Method 1 is the way to GO for faster performance.

I am talking about 50,000 rows of update in destination table.


Do you have any better option to do that ?

Tom Kyte
August 08, 2005 - 8:41 pm UTC

benchmark it.

Benchmark results -- Join update using GTT wins

A reader, August 09, 2005 - 5:57 pm UTC

Tom,

I benchmarked it using your great runstats tool and it shows JOIN UPDATE (using Global temp table) wins over MERGE (using External table) for update.

dbadmin@TESTRECO> @update_benchmark_using_runstats_tool

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04

5202 rows updated.

Elapsed: 00:00:00.38

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

5202 rows merged.

Elapsed: 00:00:00.64
Run1 ran in 39 hsecs
Run2 ran in 64 hsecs
run 1 ran in 60.94% of the time

Name Run1 Run2 Diff
LATCH.FOB s.o list latch 0 1 1
LATCH.enqueues 11 10 -1
LATCH.mostly latch-free SCN 10 11 1
LATCH.session idle bit 13 14 1
STAT...cursor authentications 3 2 -1
STAT...recursive cpu usage 3 4 1
STAT...parse count (hard) 2 1 -1
LATCH.hash table modification 1 0 -1
LATCH.channel operations paren 2 0 -2
LATCH.simulator lru latch 34 36 2
STAT...change write time 7 9 2
STAT...sorts (memory) 5 7 2
LATCH.session allocation 2 4 2
LATCH.enqueue hash chains 11 9 -2
LATCH.library cache load lock 2 4 2
LATCH.redo allocation 5,228 5,231 3
STAT...opened cursors current 1 4 3
STAT...redo entries 5,208 5,211 3
LATCH.Consistent RBA 8 12 4
STAT...index scans kdiixs1 3 8 5
STAT...parse time cpu 1 6 5
STAT...shared hash latch upgra 3 8 5
STAT...parse time elapsed 1 6 5
STAT...db block changes 10,497 10,503 6
LATCH.undo global data 1 9 8
STAT...parse count (total) 7 15 8
STAT...execute count 8 17 9
LATCH.messages 77 87 10
STAT...opened cursors cumulati 7 18 11
LATCH.child cursor hash table 20 7 -13
STAT...no work - consistent re 691 676 -15
STAT...table scan blocks gotte 691 673 -18
STAT...free buffer requested 587 607 20
STAT...CPU used by this sessio 42 65 23
STAT...CPU used when call star 42 65 23
STAT...Elapsed Time 42 67 25
LATCH.list of block allocation 29 0 -29
LATCH.checkpoint queue latch 615 583 -32
STAT...recursive calls 46 99 53
LATCH.shared pool 222 311 89
LATCH.library cache pin alloca 74 168 94
STAT...index fetch by key 0 95 95
STAT...rows fetched via callba 0 96 96
STAT...table fetch by rowid 0 97 97
STAT...calls to get snapshot s 18 119 101
LATCH.library cache pin 110 240 130
STAT...bytes received via SQL* 1,523 1,381 -142
STAT...buffer is not pinned co 691 869 178
STAT...consistent gets 714 990 276
STAT...consistent gets - exami 3 288 285
LATCH.library cache 292 782 490
LATCH.simulator hash latch 290 804 514
STAT...deferred (CURRENT) bloc 583 0 -583
LATCH.row cache enqueue latch 108 966 858
LATCH.row cache objects 110 979 869
STAT...table scan rows gotten 35,611 30,411 -5,200
STAT...db block gets 5,349 19,877 14,528
STAT...session logical reads 6,063 20,867 14,804
STAT...redo size 1,297,932 1,275,112 -22,820
LATCH.cache buffers chains 27,829 66,846 39,017
STAT...session pga memory max 1,376,256 1,114,112 -262,144
STAT...session uga memory 0 524,952 524,952
STAT...session uga memory max 0 524,952 524,952
STAT...session pga memory 0 786,432 786,432

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
35,725 77,740 42,015 45.95%

PL/SQL procedure successfully completed.


Update using SELECT FROM SELECT

MITA, August 18, 2005 - 11:30 am UTC

I am trying to

UPDATE t1
set a1 = (select x
from (select rownum rnum, b
from table2@xyz.world t2
where t2.y1 = t1.y1
order by t2.z1 desc)
where rnum = 1 )
where x1 = 12345 ;

It is giving me INVALID COLUMN NAME on

WHERE t2.y1 = t1.y1 line.

Is there any limitation on using SELECT FROM SELECT in UPDATE statement ??

Tom Kyte
August 18, 2005 - 4:33 pm UTC

correlation variables can only go one level down -- you are trying to push them to the second level.


might be best to:

create global temporary table gtt ( y1 PRIMARY KEY, x );


and then:

insert into gtt
select y1, x
from ( select y1, x, row_number() over (partition by y1 order by z1 desc ) rn
from table2@xyz.world
<can have more here to get just the y1's you need if desired>
)
where rn = 1;

update (select t1.x t1_x, t2.x t2_x
from t1, gtt t2
where t1.y1 = t2.y2
and t1.x1 = 12345 )
set a1 = x


Another pointer

Bob B, August 18, 2005 - 5:33 pm UTC

I'd also like to point out that:
select x
from (select rownum rnum, b
from table2@xyz.world t2
where t2.y1 = t1.y1
order by t2.z1 desc)
where rnum = 1

Is equivalent to
select b
from table2@xyz.world t2
where t2.y1 = t1.y1
and rownum = 1

ORDER BY is done after the SELECT elements are evaluated. So ROWNUM will be evaluated and then the elements in the select list are sorted.

EX:

SELECT ROWNUM RN, ROWNUM * 5 VAL
FROM DUAL
CONNECT BY LEVEL <= 3
ORDER BY VAL DESC
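
For what it is worth, that example returns something like:

        RN        VAL
---------- ----------
         3         15
         2         10
         1          5

i.e. ROWNUM was assigned before ORDER BY reordered the rows, which is exactly why filtering "rnum = 1" on top of an ordered inline view does not reliably pick the top of the ordering.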

Tom Kyte
August 18, 2005 - 7:03 pm UTC

correct, good eye, wasn't even looking for that!



Temporary tablespace

friend, September 13, 2005 - 1:42 pm UTC

Hi Tom,

I have upgraded one database from 8.1.7.4 to 9.2.0.6 and created temporary tablespace TEMP with CREATE TEMPORARY TABLESPACE, which was causing trouble due to some file system issue:
ORA-01114: IO error writing block to file 407 (block # 68105)
ORA-27063: skgfospo: number of bytes read/written is incorrect
ORA-01114: IO error writing block to file 407 (block # 68105)
ORA-27063: skgfospo: number of bytes read/written is incorrect
ORA-01114: IO error writing block to file 407 (block # 68105)
ORA-27063: skgfospo: number of bytes read/written is incorrect
ORA-01114: IO error writing block to file 407 (block # 68105)
ORA-27063: skgfospo: number of bytes read/written is incorrect


One DBA created a tablespace with CREATE TABLESPACE .... TEMPORARY and now everybody is happy and blaming me for creating it with CREATE TEMPORARY TABLESPACE :(

Temporary tablespace

friend, September 15, 2005 - 6:27 pm UTC

Tom,
Please help.

OWNER SEGMENT_NA PARTITION_ SEGMENT_TY TABLESPACE_NAME 
HEADER_FILE HEADER_BLOCK BYTES BLOCKS EXTENTS INITIAL_EXTENT 
NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE FREELISTS FREELIST_GROUPS 
RELATIVE_FNO BUFFER_
---------- ---------- ---------- ---------- ------------------------------ 
----------- ------------ ---------- ---------- ---------- -------------- 
----------- ----------- ----------- ------------ ---------- --------------- 
------------ -------
SYS 220.2 TEMPORARY TEMP01 
220 2 1.4133E+10 1725210 345042 40960 40960 
1 2147483645 0 1 1 220 
DEFAULT
SQL> l
1* SELECT * FROM DBA_SEGMENTS WHERE SEGMENT_TYPE='TEMPORARY'

and I am dropping the temporary tablespace, which is taking a lot of time and not getting dropped.
Please suggest.
I have 20 hours of downtime from tomorrow onwards to drop the TEMP tablespace.
SQL> select count(*) from dba_extents where tablespace_name='TEMP01';
COUNT(*)
----------
211808

This is a dictionary-managed tablespace with a 40K next extent :(
 

Tom Kyte
September 15, 2005 - 6:53 pm UTC

it will take very long time to drop.

perhaps you just create a new and offline this, worry about it later? if at all?

Temporary tablespace

friend, September 15, 2005 - 7:06 pm UTC

Tom,
Right now the new tablespace is already there. Now, how can I drop TEMP01 expeditiously?

Please suggest

temporary tablespace

friend, September 15, 2005 - 7:45 pm UTC

1. Find any temporary segments that need to be cleaned up

select s.segment_name, s.tablespace_name, ts.ts# from dba_segments s, v$tablespace ts

where ts.name = s.tablespace_name and s.segment_type = 'TEMPORARY';

Take note of the tablespace number.

2. Request the segments be cleaned up

alter session set events 'immediate trace name DROP_SEGMENTS level <ts#+1>';

<ts#> refers to the tablespace number returned by the query in step 1 (column TS#).


3. Verify the cleanup

select s.segment_name, s.tablespace_name, ts.ts# from dba_segments s, v$tablespace ts
where ts.name = s.tablespace_name and s.segment_type = 'TEMPORARY' and ts.ts# = <ts# from step 1>;

4. Repeat if step 1 shows that more than one TS# needs clean up

will above help?


Tom Kyte
September 15, 2005 - 8:42 pm UTC

nope, the high expense is the dropping of that many dictionary managed extents.

So, why drop them?

Temporary tablespace

friend, September 15, 2005 - 8:49 pm UTC

Tom,

Please suggest me on above . I will appreciate that

I have production database for 20 hours only to drop tablespace.


Tom Kyte
September 15, 2005 - 9:06 pm UTC

again, i ask, why drop?

it could take a really long time to clean up 211808 dictionary managed extents.

just create new, offline old - worry about it later.
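
A sketch of the "create new, offline old" route (the file name, size, and which users still point at TEMP01 are assumptions you would need to verify first):

create temporary tablespace temp2
    tempfile '/u01/oradata/PROD/temp2_01.dbf' size 4096m
    extent management local uniform size 1m;

alter database default temporary tablespace temp2;   -- and/or ALTER USER ... TEMPORARY TABLESPACE temp2

alter tablespace temp01 offline;                      -- deal with the actual drop later, if at all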

temporary

friend, September 15, 2005 - 10:13 pm UTC

Will 20 hours be enough?

Is there any parameter I can set to make it faster --
any parameter for SMON to make it go faster?
pga_aggregate_target is 1 GB.

Please suggest

Temporary

friend, September 16, 2005 - 2:12 pm UTC

Tom,
It's management's decision to drop it and I am the executor, so please suggest anything which can make this process fast.

I am planning to increase pga_aggregate_target to 2GB from 1GB



Tom Kyte
September 16, 2005 - 2:15 pm UTC

I don't get the pga_aggregate_target tie-in here, but...

did you take "our" idea to management? The one where they have no downtime?

temporary

friend, September 16, 2005 - 2:19 pm UTC

They are saying the tablespace has to be dropped :)
I don't know why.

Anyways, today is the start date and the downtime is 24 hours to drop TEMP01.

I would appreciate it if you could suggest something to make this faster:
drop tablespace temp01 including contents and datafiles;


temporary

friend, September 16, 2005 - 2:22 pm UTC

Remove the following lines from the init file:
*.event="10269 trace name context forever, level 10"
*.event="10061 trace name context forever, level 10"

then startup restrict and
drop tablespace temp01 including contents and datafiles;

Thanks!

Getting Errors Executing Merge

A reader, September 19, 2005 - 2:39 pm UTC

I'm trying to execute the example that you used above, but got the following error message when trying to execute the merge statement. I couldn't really figure out why it didn't work. The syntax seems to be correct.


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Prod
PL/SQL Release 10.1.0.4.0 - Production
CORE    10.1.0.4.0    Production
TNS for Linux: Version 10.1.0.4.0 - Production
NLSRTL Version 10.1.0.4.0 - Production                 

SQL> CREATE TABLE t1 AS
  2  SELECT object_id, object_name
  3  FROM   all_objects;

Table created.

SQL> 
SQL> ALTER TABLE t1 ADD CONSTRAINT t1_pk
  2  PRIMARY KEY(object_id);

Table altered.

SQL> 
SQL> CREATE TABLE t2 AS SELECT * FROM t1;

Table created.

SQL> 
SQL> ALTER TABLE t2 ADD CONSTRAINT t2_pk
  2  PRIMARY KEY(object_id);

Table altered.

SQL> 
SQL> ANALYZE TABLE t1 COMPUTE STATISTICS
  2  FOR TABLE
  3  FOR ALL INDEXES
  4  FOR ALL INDEXED COLUMNS;

Table analyzed.

SQL> 
SQL> ANALYZE TABLE t2 COMPUTE STATISTICS
  2  FOR TABLE
  3  FOR ALL INDEXES
  4  FOR ALL INDEXED COLUMNS;

Table analyzed.

SQL> MERGE INTO t1
SP2-0042: unknown command beginning "MERGE INTO..." - rest of line ignored.
SQL> USING t2
SP2-0042: unknown command "USING t2" - rest of line ignored.
SQL> ON (t2.object_id = t1.object_id)
SP2-0042: unknown command beginning "ON (t2.obj..." - rest of line ignored.
SQL> WHEN MATCHED THEN
SP2-0042: unknown command beginning "WHEN MATCH..." - rest of line ignored.
SP2-0044: For a list of known commands enter HELP
and to leave enter EXIT.
SQL>    UPDATE SET t1.object_name = t2.object_name
  2  WHEN NOT MATCHED THEN
  3     INSERT (object_id, object_name)
  4     VALUES (t2.object_id, t2.object_name);
   UPDATE SET t1.object_name = t2.object_name
          *
ERROR at line 1:
ORA-00903: invalid table name 


SQL> 
 

Tom Kyte
September 19, 2005 - 4:49 pm UTC

you are using an 8i or before Oracle client sqlplus....

sqlplus is raising the error....

put it in begin merge .... end; and you'll "see"

get a current sqlplus in order to make use of all of the syntax.

temporary

friend, October 06, 2005 - 10:34 pm UTC

Tom,

If I take the temp tablespace with 200K extents offline, will it go through deallocation of the extents before changing the status to offline?

If not, then I think that will be a fast way to deal with the tablespace, right?

Tom Kyte
October 07, 2005 - 8:15 am UTC

you still have to deallocate extents *sometime*.

offline would offline.

Update takes too long!

Bob, July 01, 2006 - 10:59 am UTC

Dear All,

Following on from this: I have the following query which I am running under Oracle 9.2:

EXPLAIN PLAN
SET STATEMENT_ID = 'TEST'
FOR
UPDATE LARGE t1
SET t1.CL_DIST_NO = t1.DIST_NO
WHERE EXISTS
(SELECT 1
FROM SMALLISH c
WHERE t1.dlr_no = c.cust_no
and t1.dist_no = c.dist_no
and t1.recv_branch_no = c.branch_no
and c.current_flag = 'Y')

LARGE has 1.2 millions rows
SMALLISH has 17986 rows

query_plan
--------------------------------------------------------------------------------
1,20129 UPDATE STATEMENT TEST Cost = 20129 20129
2,1 UPDATE LARGE Cost = 1
3,1 NESTED LOOPS Cost = 1 20129
4,1 SORT UNIQUE Cost = 1
5,1 TABLE ACCESS FULL SMALLISH Cost = 1 88
4,2 BITMAP CONVERSION TO ROWIDS Cost = 2
5,1 BITMAP INDEX SINGLE VALUE LARGE Cost = 1


I have created a bitmap concatenated index on
LARGE (t1.dlr_no, t1.dist_no, t1.recv_branch_no) - which appears to be used in the explain plan.

select count(distinct dlr_no||dist_no||recv_branch_no) from LARGE

gives 18085 records

But it does a full table scan on SMALLISH - which makes my update really slow. It takes 30-45 mins. Does anyone have any ideas how I can speed this up?


Slow update now run on Oracle 9.2.0.1.0

Bob, July 01, 2006 - 6:50 pm UTC

Interestingly...in my previous comment that was run on
Oracle 9.2.0.1.0.

If I run this same SQL on Oracle 9.2.0.6.0:

I get the following explain plan:

query_plan
--------------------------------------------------------------------------------
1,3008 UPDATE STATEMENT TEST Cost = 3008 3008
2,1 UPDATE TR00 Cost = 1
3,1 HASH JOIN Cost = 1 3008
4,1 SORT UNIQUE Cost = 1
5,1 TABLE ACCESS FULL SMALLISH Cost = 1 29
4,2 TABLE ACCESS FULL LARGE Cost = 2 2806

6 rows selected.

Notice the cost is much less and the plan is different. What makes two full table scans and a hash join more efficient?

In both cases I have run dbms_stats.gather_table_stats and
dbms_stats.gather_index_stats!!




Tom Kyte
July 01, 2006 - 8:56 pm UTC

I would expect them to be different - the row counts are radically different.

Just out of curiosity - use an IN instead of where exists and report back.

Slow update on Oracle 9.2.0.1.0

Bob, July 02, 2006 - 11:11 am UTC

OK - I ran this with the IN clause, which is what I should have done in the first place, because the subquery returns a comparatively small number of records:

My plan is:

query_plan
--------------------------------------------------------------------------------
1,17643 UPDATE STATEMENT TEST Cost = 17643 17643
2,1 UPDATE LARGE Cost = 1
3,1 MERGE JOIN SEMI Cost = 1 17643
4,1 SORT JOIN Cost = 1 17490
5,1 TABLE ACCESS FULL LARGE Cost = 1 8829
4,2 SORT UNIQUE Cost = 2 153
5,1 TABLE ACCESS FULL SMALLISH Cost = 1 88

7 rows selected.

If I put a USE_HASH optimizer hint in - I get the following plan:

query_plan
--------------------------------------------------------------------------------
1,9120 UPDATE STATEMENT TEST Cost = 9120 9120
2,1 UPDATE LARGE Cost = 1
3,1 HASH JOIN Cost = 1 9120
4,1 SORT UNIQUE Cost = 1
5,1 TABLE ACCESS FULL SMALLISH Cost = 1 88
4,2 TABLE ACCESS FULL LARGE Cost = 2 8829

6 rows selected.


So I presume that when I'm using the USE_HASH hint, it is hashing the smaller table (SMALLISH) in this particular case, doing a sort, and then hash joining to LARGE.

Would it be better to make a hash table of LARGE?

I presume I would need to have optimal values for HASH_AREA_SIZE and PGA_AGGREGATE_TARGET for this to work.

Here's the TKPROF output:

UPDATE /*+ USE_HASH (c t1) */ LARGE t1
SET t1.CL_DIST_NO = t1.DIST_NO
WHERE (t1.dlr_no, t1.dist_no, t1.recv_branch_no) in
(SELECT c.cust_no, c.dist_no, c.branch_no
FROM SMALLISH c
WHERE c.current_flag = 'Y')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 40.98 503.01 58825 124017 1156364 1102954
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 40.98 503.03 58825 124017 1156364 1102954

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 165

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE
1102954 HASH JOIN
17421 SORT UNIQUE
17421 TABLE ACCESS FULL SMALLISH
1203517 TABLE ACCESS FULL LARGE

Tom, is there a way for me to reduce the Physical disk reads any further?

Thanks

Slow update on Oracle 9.2.0.1.0

Bob, July 02, 2006 - 11:16 am UTC

Should have included this too:

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 58257 0.05 2.09
db file scattered read 71 0.00 0.00
log buffer space 281 0.24 13.90
free buffer waits 862 0.99 444.86
log file switch completion 5 0.18 0.62
log file switch (checkpoint incomplete) 3 0.99 1.66
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

My update completes in 8.9 minutes if I use the USE_HASH optimizer hint. Is it possible for me to make this any faster?

Thanks
Bob

Wait Time

Jonathan Lewis, July 03, 2006 - 1:44 pm UTC

Most of your wait time is spent in "free buffer waits". This means DBWR can't keep up with clearing dirty buffers to disc because (a) you are generating so many dirty blocks so quickly, and (b) you managed to do several log file switches so rapidly.

Increasing the size of your log files may help, as it could reduce the pressure on DBWR. Spreading your large table over more discs MAY be appropriate, ditto your undo tablespace. Possibly you also have an extremely aggressive MTTR fast start that is putting extra pressure on DBWR.

Your execution plans (almost all of them) are perfectly reasonable, as you are updating virtually every row in the large table. The only dubious plan is the one where you created a bitmap index on the small table, when a B-tree would probably have been more appropriate.



Slow Update

Bob, July 03, 2006 - 5:37 pm UTC

Hi Jonathan,

Thanks for your feedback. I actually created a bitmap index on the LARGE table (please look at the history). However, when I am using the USE_HASH optimizer hint it's not being used anyway - so I dropped it.

Can you define what you mean by a "dirty block"? What is MTTR, and how do I know if mine is
"aggressive"?

How do I spread my table across multiple tablespaces - please explain.

Cheers,

Bob

Hope this helps!

Srinivas Narashimalu, July 05, 2006 - 9:10 am UTC

Bob,

MTTR means Mean Time To Recover. This is the time (in seconds) that you set (in the parameter file) for the instance to recover in case of failure. You set this parameter as fast_start_mttr_target="some value". In short, this is the time allowed for instance recovery.

What Jonathan means is that you might have set this "time" to a very small value, which in turn is making DBWR write the dirty buffers to disk more often.

Dirty buffers are blocks in the buffer cache that differ from their on-disk copies, i.e. they are changed/modified blocks.

Spread your tablespace - create datafiles belonging to the tablespace on different disks. This is done to balance the I/O.
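
To see what the MTTR target is currently set to (a sketch):

select name, value, isdefault
  from v$parameter
 where name = 'fast_start_mttr_target';

-- and, if it really is too aggressive, it can be relaxed, for example:
-- alter system set fast_start_mttr_target = 300;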

Hope it helps!

Thanks,
Srinivas

Slow update

Bob, July 06, 2006 - 4:16 am UTC

Perfect answer. Thanks Srinivas.

Bob

a reader from china!!!

nmgzw, July 06, 2006 - 11:04 pm UTC

I have been learning from your excellent example, thank you very much!
I am simulating your example above, but I could not get it to work.


CREATE TABLE "EMP" (
   "EMPLOYEE_ID" NUMBER(6, 0), 
   "SALARY" NUMBER(12, 2), 
   "MANAGER_ID" NUMBER(12, 2), 
   "DEPARTMENT_ID" NUMBER(4, 0)) ;

CREATE UNIQUE INDEX "EMP_PK" ON "EMP" ("EMPLOYEE_ID" ) ;

insert into emp values ( 100 , 24000.00 , 100.00 , 90 );
insert into emp values (101 , 17000.00 , 100.00 , 90 );
insert into emp values (  102 , 17000.00 , 100.00 , 90 );
 insert into emp values ( 103 , 9000.00 , 103.00 , 60 );
 insert into emp values ( 104 , 6000.00 , 103.00 , 60 );
 insert into emp values ( 105 , 4800.00 , 103.00 , 60 );
 insert into emp values ( 106 , 4800.00 , 103.00 , 60 );
 insert into emp values ( 107 , 4200.00 , 103.00 , 60 );
 insert into emp values ( 108 , 12000.00 , 108.00 , 100 );
 insert into emp values ( 109 , 9000.00 , 108.00 , 100 );
 insert into emp values ( 110 , 8200.00 , 108.00 , 100 );
 insert into emp values ( 111 , 7700.00 , 108.00 , 100 );
 insert into emp values ( 112 , 7800.00 , 108.00 , 100 );
 insert into emp values ( 113 , 6900.00 , 108.00 , 100 );

When I execute the statement below, it always returns error ORA-01732.

SQL> update (select employee_id,salary,
       manager_id,
       (select max(to_char(e2.salary, 'fm000000000.00') || e2.employee_id)
          from emp e2
         where e2.department_id = emp.department_id) new_mgr
  from emp)
  set manager_id=substr(new_mgr,13)
  where salary<to_number(substr(new_mgr,1,12)) ;

ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

but if i remove "where" condition at last,then it success.

SQL> update (select employee_id,salary,
       manager_id,
       (select max(to_char(e2.salary, 'fm000000000.00') || e2.employee_id)
          from emp e2
         where e2.department_id = emp.department_id) new_mgr
  from emp)
  set manager_id=substr(new_mgr,13);

14 rows updated.



 

Tom Kyte
July 08, 2006 - 10:41 am UTC

ops$tkyte@ORA10GR2> update ( select manager_id,
  2                  substr(
  3                  (select max(to_char(e2.salary, 'fm000000000.00') || e2.employee_id)
  4                     from emp e2
  5                    where e2.department_id = emp.department_id),
  6                  13 ) new_mgr_id
  7            from emp
  8           where salary <  to_number(substr(
  9                   (select max(to_char(e2.salary, 'fm000000000.00') || e2.employee_id)
 10                      from emp e2
 11                     where e2.department_id = emp.department_id), 1, 12 ) )
 12          )
 13    set manager_id=new_mgr_id
 14  /

11 rows updated.
 

to: nmgzw from china!!!!

Duke Ganote, July 07, 2006 - 1:01 pm UTC

I'll be interested in Tom's answer because the concept of key-preserved seems a bit nebulous for this query.  However, if you have 8i or greater (I used 10gR2 and 9iR2), you can use analytic functions to give a cleaner, successful implementation:

update (
select employee_id
     , salary
     , manager_id
     , (select first_value(employee_id) over
                ( order by salary desc ) new_mgr
          from emp e2
         where e2.department_id = emp.department_id
           and rownum = 1) new_mgr
     , (select first_value(salary) over
                ( order by salary desc ) new_mgr
          from emp e2
         where e2.department_id = emp.department_id
           and rownum = 1) new_mgr_salary  from emp
) set manager_id = new_mgr
 where salary < new_mgr_salary
SQL> /

11 rows updated.

I agree the distinction between that successful UPDATE and the following unsuccessful UPDATE is not obvious to me either:

update (
select employee_id
     , salary
     , manager_id
     , (select max(to_char(e2.salary,'fm000000000.00')||e2.employee_id )
          from emp e2
         where e2.department_id = emp.department_id
       ) AS new_mgr
  from emp
) set manager_id = substr(new_mgr,13)
  where salary < to_number(substr(new_mgr,1,12))
173:GR2\db2> /
update (
       *
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view 

Thank you!

nmgzw, July 08, 2006 - 5:06 am UTC

You are great!! Thank you very much!!

Combination of WITH DATA and Join Update

A reader, October 27, 2006 - 7:42 pm UTC

Hi Tom,
In response to follow up above:-
"Oh, well -- then you cannot do it in a single update anyway -- as the table
being updated would NOT be key preserved (and hence the result of the update
would be very ambiguous).

If the table containing the changes can have MORE than one occurrence of the
"primary key" of the other table -- no chance for a single statement. If you
tried:

table t1( x int primary key, y int );
table t2( x int, y int );

insert into t1 values ( 1, 0 );
insert into t2 values ( 1, 100 );
insert into t2 values ( 1, 200 );

then

update ( select t1.y t1_y, t2.y t2_y from t1, t2 where t1.x = t2.x )
set t1_y = t2_y

would be "ambigous" -- no way we could know if y would end up with 100 or 200 --
hence we don't even permit it.

So, you are forced to do it procedurally.
"
Is it possible to do the sum of t2.y first and then update? i.e., update t1 with the value 300, maybe by using a "with data" clause or an in-line view, etc.?
Regards,
Tarun

Tom Kyte
October 27, 2006 - 8:28 pm UTC

sure, if you make the "key" a "primary key" by grouping by it - we can use MERGE

Key Creation

A reader, October 28, 2006 - 3:21 am UTC

Hi Tom,
Thanks for your quick response.
Do we need to create the key physically? Or will Oracle decide that the key is preserved through the grouping function?
Regards,


Tom Kyte
October 28, 2006 - 10:38 am UTC

you would need to use merge:

ops$tkyte%ORA10GR2> create table t1 ( x int, y int );

Table created.

ops$tkyte%ORA10GR2> insert into t1 values ( 1, 1 );

1 row created.

ops$tkyte%ORA10GR2> insert into t1 values ( 1, 2 );

1 row created.

ops$tkyte%ORA10GR2> insert into t1 values ( 1, 3 );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2 ( a int, b int );

Table created.

ops$tkyte%ORA10GR2> insert into t2 values ( 1, null );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> update ( select t2.b, t1.sum_y
  2             from t2, (select x, sum(y) sum_y from t1 group by x) t1
  3            where t2.a = t1.x )
  4     set b = sum_y
  5  /
   set b = sum_y
       *
ERROR at line 4:
ORA-01779: cannot modify a column which maps to a non key-preserved table


ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> merge into t2
  2  using (select x, sum(y) sum_y from t1 group by x) t1
  3  on (t2.a = t1.x)
  4  when matched then update set b = sum_y;

1 row merged.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from t2;

         A          B
---------- ----------
         1          6
 

Thanks a lot!

A reader, October 28, 2006 - 7:07 pm UTC


NOT IN vs IN

Ashutosh, October 29, 2006 - 9:32 am UTC

Tom,
Tab has 2 million records. Evtab has 10 records.

1. select * from tab where event not in ('ACCR','READ');
2. select * from tab where event in (select event from evtab where event not in ('ACCR','READ'));

Which of the above two is faster? Is there a more efficient way to fetch the data?

Tom Kyte
October 29, 2006 - 9:45 am UTC

benchmark it? you seem to have the two queries and the data with which to test.


I would say that unless event is horribly skewed and ALMOST ALL RECORDS are in fact ACCR, READ - this had better full scan and both of the above would be about the same.

UPDATE help

V, November 08, 2006 - 5:12 pm UTC

I have an LDAP lookup in a function that returns a NAME based on an EMP_NUM passed in.
select ldap_email_lookup_empid('B123','NAME') from dual;

This returns a valid name.

I also have a table EMP:
EMP_NUM
NAME

with only EMP_NUM populated.
My question is, how can I UPDATE the NAME in EMP with the result from the LDAP Call.



Tom Kyte
November 08, 2006 - 8:14 pm UTC

did you try "update"?



?

V, November 09, 2006 - 10:05 am UTC

Yes, I have it working in an IMPLIED loop. But can this be done in a single UPDATE statement?

Something like:
UPDATE EMP SET NAME = (select ldap_email_lookup_empid(EMP.emp_num,'NAME') from dual) where emp.emp_num = ???;


Tom Kyte
November 09, 2006 - 2:21 pm UTC

update emp
set ename = f(x);


f = your ldap stuff
x = your inputs


pretend your ldap function was called "substr" - how would you have used substr in an update?

just like you'll use your plsql function.
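
Applied to the table and function named in the question (assuming the function accepts the EMP_NUM column value directly), that is simply:

update emp
   set name = ldap_email_lookup_empid( emp_num, 'NAME' );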

insert with update

dmv, November 24, 2006 - 7:34 am UTC

hi tom

I have a table t1 with the following values

facid conid stat

pr c1 y
pr c2 y


I want to insert all the records belonging to 'pr' into the same table with facid 'se'.

For example, after insertion my table should look like this:

facid conid stat

pr c1 y
pr c2 y
se c1 y
se c2 y

Can I achieve this with a single statement? Please advise.

This is for table and record creation :

create table t1(facid varchar2(2), conid varchar2(2), stat varchar2(1));

insert into t1 values ('pr', 'c1', 'y');
insert into t1 values ('pr', 'c2', 'y');



Tom Kyte
November 24, 2006 - 6:42 pm UTC

insert into t
select 'se', conid, stat from t where facid = 'pr'

insert with update

dmv, November 26, 2006 - 11:17 pm UTC

Hi Tom

Just for testing purposes I have given only three columns, but my table actually has more than 50 columns. Out of those, only one column (i.e., facid) should change from 'pr' to 'se' while inserting; all the other values should stay the same.

is there any other way to get it done without using temp tables?

Regards
DMV

Tom Kyte
November 27, 2006 - 7:48 am UTC

huh?

so what if there are 500 columns - why would that change anything - and where do "temporary tables" even enter the discussion?

update

A reader, December 18, 2006 - 7:17 am UTC

Dear tom,

Here is the set-up

-----------------------------------------------------
CREATE TABLE T1
(
ORG_ID CHAR(4 BYTE),
BR_ID VARCHAR2(12 BYTE),
ACCT_ID VARCHAR2(36 BYTE),
ASOF_TMS DATE,
CURR_CODE CHAR(3 BYTE),
AVAL_BAL_SUMM VARCHAR2(78 BYTE),
LED_BAL_SUMM VARCHAR2(78 BYTE),
DATA_GRP_CTL_NUM NUMBER
) TABLESPACE USERS;


ALTER TABLE T1 ADD (
CONSTRAINT PK_T1 PRIMARY KEY (DATA_GRP_CTL_NUM)
USING INDEX
TABLESPACE USERS
);


------------------------------------------------------------------------------------------------------------------------


CREATE TABLE T2
(
ORG_ID CHAR(4 BYTE),
BR_ID VARCHAR2(12 BYTE),
ACCT_ID VARCHAR2(36 BYTE),
ASOF_TMS DATE,
CURR_CODE CHAR(3 BYTE),
AVAL_BAL VARCHAR2(78 BYTE),
OPEN_BAL VARCHAR2(78 BYTE),
LED_BAL VARCHAR2(78 BYTE),
STMT_NO VARCHAR2(78 BYTE),
PRIN_INC_IND CHAR(1 BYTE),
HOLD_STAT_TYP CHAR(5 BYTE),
NLS_CDE CHAR(7 BYTE),
SRC_SYS_ID CHAR(3 BYTE),
TRANS_REF_NO VARCHAR2(48 BYTE),
FILE_ID VARCHAR2(70 BYTE),
DATA_GRP_CTL_NUM NUMBER
)
TABLESPACE USERS;



----------------------------------------------------------------------------------------------------------------

INSERT INTO T2 ( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL, OPEN_BAL, LED_BAL,
STMT_NO, PRIN_INC_IND, HOLD_STAT_TYP, NLS_CDE, SRC_SYS_ID, TRANS_REF_NO, FILE_ID,
DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '1111', TO_DATE( '11/10/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 'GBP', '100', '100', '100', NULL, 'P', 'POSTE', NULL, NULL, NULL, '2006010101', 1111);

INSERT INTO T2 ( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL, OPEN_BAL, LED_BAL,
STMT_NO, PRIN_INC_IND, HOLD_STAT_TYP, NLS_CDE, SRC_SYS_ID, TRANS_REF_NO, FILE_ID,
DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '1111', TO_DATE( '11/10/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 'GBP', '50', '50', '50', NULL, 'I', 'POSTE', NULL, NULL, NULL, '2006010101', 1111);

INSERT INTO T2 ( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL, OPEN_BAL, LED_BAL,
STMT_NO, PRIN_INC_IND, HOLD_STAT_TYP, NLS_CDE, SRC_SYS_ID, TRANS_REF_NO, FILE_ID,
DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '1111', TO_DATE( '11/11/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 'GBP', '100', '100', '100', NULL, 'P', 'POSTE', NULL, NULL, NULL, '2006010102', 1112);

INSERT INTO T2 ( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL, OPEN_BAL, LED_BAL,
STMT_NO, PRIN_INC_IND, HOLD_STAT_TYP, NLS_CDE, SRC_SYS_ID, TRANS_REF_NO, FILE_ID,
DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '1111', TO_DATE( '11/11/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 'GBP', '50', '50', '50', NULL, 'I', 'POSTE', NULL, NULL, NULL, '2006010102', 1112);

INSERT INTO T2 ( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL, OPEN_BAL, LED_BAL,
STMT_NO, PRIN_INC_IND, HOLD_STAT_TYP, NLS_CDE, SRC_SYS_ID, TRANS_REF_NO, FILE_ID,
DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '9878', TO_DATE( '11/11/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 'GBP', '50', '50', '50', NULL, 'I', 'POSTE', NULL, NULL, NULL, '2006010102', 1113);



INSERT INTO T1( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL_SUMM,
LED_BAL_SUMM, DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '1111', TO_DATE( '11/10/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, NULL, '200', '180', 1111);

INSERT INTO T1( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL_SUMM,
LED_BAL_SUMM, DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '1111', TO_DATE( '11/10/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, NULL, '160', '170', 1112);

INSERT INTO T1( ORG_ID, BR_ID, ACCT_ID, ASOF_TMS, CURR_CODE, AVAL_BAL_SUMM,
LED_BAL_SUMM, DATA_GRP_CTL_NUM ) VALUES (
'CITI', '600', '9878', TO_DATE( '11/10/2006 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, NULL, '50', '50', 1113);


COMMIT;

-----------------------------------------------------------

The summaries of the balance values in the t1 & t2 tables are different & I want to correct them now.
The input to my process would be file_id. So for that file_id, I need to summarise all the balances from the t2 table grouped by data_grp_ctl_num & update the t1 table correctly. This is what I'm trying to do, but it is failing with the obvious results...
Can I achieve this in a single query??

The query I'm trying is

-------------------------------------------------------
UPDATE t1 b
SET (b.data_grp_ctl_num, AVAL_BAL_SUMM , LED_BAL_SUMM) = ( SELECT data_grp_ctl_num, SUM(aval_bal), SUM(led_bal)
FROM t2
WHERE file_id = '2006010102'
GROUP BY data_grp_ctl_num
)
WHERE EXISTS ( SELECT a.data_grp_ctl_num
FROM t2 a
WHERE a.data_grp_ctl_num = b.data_grp_ctl_num
AND a.file_id = '2006010102'
)
-----------------------------------------------------------

Tom Kyte
December 18, 2006 - 8:50 am UTC

man oh man. 

Can you guys spell "number"?  Well, you can, you used it once.  Why would you do that to account balances???  

stop it.  fix it.  you are totally broken, use a number to store a number - PERIOD!!!!!!!!!!!!!!!!!!!!


ops$tkyte%ORA10GR2> select DATA_GRP_CTL_NUM, aval_bal_summ, led_bal_summ from t1;

DATA_GRP_CTL_NUM AVAL_BAL LED_BAL_
---------------- -------- --------
            1111 200      180
            1112 160      170
            1113 50       50

ops$tkyte%ORA10GR2> merge into t1
  2  using
  3  (select DATA_GRP_CTL_NUM,
  4          sum(aval_bal) aval_bal,
  5                  sum(led_bal) led_bal
  6     from t2
  7    where file_id = '2006010102'
  8    group by data_grp_ctl_num
  9  ) t2
 10  on ( t1.DATA_GRP_CTL_NUM = t2.DATA_GRP_CTL_NUM )
 11  when matched then update set aval_bal_summ = t2.aval_bal, led_bal_summ = t2.led_bal
 12  /

2 rows merged.

ops$tkyte%ORA10GR2> select DATA_GRP_CTL_NUM, aval_bal_summ, led_bal_summ from t1;

DATA_GRP_CTL_NUM AVAL_BAL LED_BAL_
---------------- -------- --------
            1111 200      180
            1112 150      150
            1113 50       50

sigh, never a version in sight, so I'll assume current software. 

update columns of a table based on rows of another table

A reader, December 18, 2006 - 10:31 am UTC

Hi Tom,

Not sure whether I should be asking this here or should be asking this as a new question altogether

I need to update the various columns of a table - tab2 , based on the values of another table - tab1. Some of the values of table tab1 are actual columns in table tab2

create table tab1 (cname varchar2(10),fvalue number,tvalue number);

insert into tab1 values ('COL1',5,10);
insert into tab1 values ('COL2',1,10);
insert into tab1 values ('COL3',2,10);
insert into tab1 values ('COL4',5,1);
insert into tab1 values ('COL4',5,20);

create table tab2 (col1 number,col2 number,col3 number,col20 number,col25 number);

My requirement is to update tab2 columns - which are present as data in column cname of tab1 - from fvalue to tvalue

Something like this :

update tab2 set col1=10 where col1=5
update tab2 set col2=10 where col2=1
update tab2 set col3=10 where col3=2
update tab2 set col4=1 where col4=5
update tab2 set col4=20 where col4=5

I know how to do this in a cursor loop. My question is -
Can we do this in 1 or 2 SQL statements so that performance-wise it is better than doing it in a cursor loop? The above is only sample data. The actual tab1, tab2 tables contain much more data.

thanks
Anto

Tom Kyte
December 18, 2006 - 10:38 am UTC

all you need to do is read the original answer - you have the same question they did.

A reader, December 18, 2006 - 10:36 am UTC

Version is 8174

thanks
Anto

A reader, December 18, 2006 - 11:07 am UTC

No - I think that is not what I was looking for

I will be updating multiple columns of tab2, based on the values present in the cname column of tab1 - we won't know until run time how many columns need to be updated.

thanks
Anto

Tom Kyte
December 18, 2006 - 11:14 am UTC

provide full example. there is nothing to update so far.

update

A reader, December 18, 2006 - 11:17 am UTC

Sorry for not providing the version.
I'm on 8.1.7.

And we are storing those balances in NUMBER columns only;
this was just a test table created mistakenly with VARCHAR datatypes.
I appreciate your concern.

Can we do it in one query on the 8i version?

Tom Kyte
December 18, 2006 - 2:35 pm UTC

1) create a global temporary table with a primary key on data_grp_ctl_num
2) run the t2 inline view from my merge into it
3)

update (select aval_bal, led_bal, aval_bal_summ, led_bal_summ
          from t1, gtt
         where t1.data_grp_ctl_num = gtt.data_grp_ctl_num )
   set aval_bal_summ = aval_bal, led_bal_summ = led_bal;
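
Spelled out, steps 1 and 2 might look like this (a sketch only: the GTT column types are assumed to be NUMBER, and since the GTT is defined ON COMMIT DELETE ROWS the insert and the update must run in the same transaction):

-- step 1: global temporary table keyed on data_grp_ctl_num
create global temporary table gtt
( data_grp_ctl_num number primary key,
  aval_bal         number,
  led_bal          number )
on commit delete rows;

-- step 2: load it with the aggregated balances for the file being processed
insert into gtt (data_grp_ctl_num, aval_bal, led_bal)
select data_grp_ctl_num, sum(aval_bal), sum(led_bal)
  from t2
 where file_id = '2006010102'
 group by data_grp_ctl_num;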



A reader, December 18, 2006 - 11:33 am UTC

Sorry - here it is

create table tab1 (cname varchar2(10),fvalue number,tvalue number);

insert into tab1 values ('COL1',5,10);
insert into tab1 values ('COL2',1,10);
insert into tab1 values ('COL3',2,10);
insert into tab1 values ('COL4',5,1);
insert into tab1 values ('COL4',10,100);
insert into tab1 values ('COL4',100,500);

create table tab2 (col1 number,col2 number,col3 number,col20 number,col25 number);



insert into tab2 (col1) values(5);
insert into tab2 (col2) values(1);
insert into tab2 (col3) values(2);
insert into tab2 (col4) values(5);
insert into tab2 (col4) values(10);
insert into tab2 (col4) values(50);
insert into tab2 (col4) values(500);
insert into tab2 (col1) values(500);
insert into tab2 (col1) values(200);
insert into tab2 (col1) values(300);


Updates I am looking for :

update tab2 set col1=10 where col1=5;
update tab2 set col2=10 where col2=1;
update tab2 set col3=10 where col3=2;
update tab2 set col4=1 where col4=5;
update tab2 set col4=100 where col4=10;

thanks
Anto

Tom Kyte
December 18, 2006 - 2:51 pm UTC

nope, not going to happen.

rows do not have any order, you are updating col4 many times. There is no way to know what order:

insert into tab1 values ('COL4',5,1);
insert into tab1 values ('COL4',10,100);
insert into tab1 values ('COL4',100,500);

those rows are processed in. This is a horrible idea.

slight mistake

A reader, December 18, 2006 - 11:40 am UTC

create table tab2 (col1 number,col2 number,col3 number,col4 number,col25 number);

forgot to include col4 since the example uses that

Anto

A reader, December 18, 2006 - 3:13 pm UTC

ok fine - so processing in a cursor loop, as we are doing currently, might be the only way out here

thanks
Anto

Tom Kyte
December 18, 2006 - 3:46 pm UTC

dude, in a loop, in a sloop, in a jam you are.

Unless and until your schema changes so that you can apply an order by - you have a "non-deterministic outcome here"

meaning, same precise data - different answers after your code runs...

all because rows in tables have no order, result sets might, but you need an order by.

and you have nothing to order by.


You are dead in the water, you have a big old bug, if you are processing data like this already - better stop - it is broken.

A reader, December 18, 2006 - 4:14 pm UTC

Maybe you have not understood it fully. The question was pertaining to just updating the relevant columns of tab2 - it is working fine using a cursor loop - even though it is a horrible idea - it is an existing system

Tab1 and tab2 are inserted using some other logic, before this update of tab2. The cursor loop is used only for updating tab2 columns based on rows(data) already present in tab1.

Anto

Tom Kyte
December 18, 2006 - 7:09 pm UTC

you are missing my point,

insert into tab1 values ('COL4',5,1);
insert into tab1 values ('COL4',10,100);
insert into tab1 values ('COL4',100,500);

what order do those rows come out of that table in?


and if they come out differently tomorrow, when applied to the same data set - well, you have a very big problem don't you.


You are MISSING something here.

A reader, December 18, 2006 - 4:29 pm UTC

create table tab1 (cname varchar2(10),fvalue number,tvalue number);

combination of (cname,fvalue) is unique - if that is what you were pointing before

Anto

Tom Kyte
December 18, 2006 - 7:11 pm UTC

insufficient, you need something to ORDER BY and I don't see it.

got your point

A reader, December 19, 2006 - 9:10 am UTC

Ah - I see what you are pointing at,

insert into tab1 values ('COL4',5,1);
insert into tab1 values ('COL4',10,100);
insert into tab1 values ('COL4',200,500); <---

I should have mentioned that fvalue and tvalue will never be the same for any given cname - my mistake. I entered some random values for fvalue and tvalue without realizing the basic error. The actual values for fvalue and tvalue are in fact different, and they are in fact VARCHAR2. Sorry about that. I just wanted to know whether this was possible without a cursor loop.

Anto

Tom Kyte
December 19, 2006 - 10:15 am UTC

nope, you are missing my point.

YOU ASSUME THAT ROWS COME OUT IN THE ORDER OF INSERTION.


this is false.

insert into tab1 values ('COL4',5,1);
insert into tab1 values ('COL4',10,100);
insert into tab1 values ('COL4',200,500);


select * from tab1

col4 200 500
col4 10 100
col4 5 1

what do you do when those rows come out in that order - because they CAN and they WILL - and the end result in your database will be DIFFERENT than if they come out in some other order.

1 last try before I give up

A reader, December 19, 2006 - 10:46 am UTC

col4 200 500
col4 10 100
col4 5 1
...
...

When rows come up in that order or whatever order - it should do the following for tab2 in a separate pl/sql batch proc

update tab2 set col4=500 where col4=200
update tab2 set col4=100 where col4=10
update tab2 set col4=1 where col4=5
...
...

I guess I will have to go back to the basics again

Anto




Tom Kyte
December 19, 2006 - 10:51 am UTC

do you understand this:


given the same exact set of INPUTS

you will result in different OUTPUTS

depending on the sort of random order the rows will be retrieved in.


How can this even begin to "make sense to do". Why bother with the updates at all in this case? You have garbage in and utter garbage out.


same inputs, different outputs. Not sensible.

Raj, December 26, 2006 - 8:52 pm UTC

Thanks Tom for your all help to oracle community.

I have the following code.
BEGIN
  DECLARE
    CURSOR C1 IS
      SELECT /*+ PARALLEL(EVA, 4) PARALLEL(POP, 4) */
             SUBSTR(POP.MARKET_SEGMENT_NM, 1, 10) MARKET_SEGMENT_CD,
             SUBSTR(POP.MARKET_SEGMENT_NM, 1, 3)  AFFINITY_CD,
             EVA.APPLICATION_ID
        FROM DTSTG.POPULATION@ODSP POP,
             ADM.EVALUATION EVA
       WHERE POP.DIRECTOR_ID = EVA.UNDERWRITING_CELL_CD
         AND (EVA.DW_CREATE_DTTM >= d_last_run_date - 1 OR
              EVA.DW_UPDATE_DTTM >= d_last_run_date - 1);
  BEGIN
    n_changed_rows := 0;
    nctr := 0;
    -- Begin Loop
    FOR CUR_POP IN C1
    LOOP
      EXIT WHEN C1%NOTFOUND;
      UPDATE ADM.EVALUATION
         SET MARKET_SEGMENT_CD = CUR_POP.MARKET_SEGMENT_CD,
             AFFINITY_CD = CUR_POP.AFFINITY_CD,
             DW_POPULATION_UPDATE_DTTM = SYSDATE
       WHERE APPLICATION_ID = CUR_POP.APPLICATION_ID;
      -- Increment the counter for changed rows from Source.
      --------------------------------------------------------------------------
      n_changed_rows := n_changed_rows + 1;
      nctr := nctr + 1;
      IF nctr = 10000 THEN
        nctr := 0;
        COMMIT;
      END IF;
    END LOOP;
    COMMIT;
  END;
END;


Here the update runs in a loop. The cursor returns almost 425 million rows, and this update takes almost 2 hrs to complete since it's in a loop.

How can I optimize it? Merge won't be useful since I want to use only an update. If I use FORALL, the update will still execute each time.

Appreciate your suggestion. My database is 9.2.0.7.

Regds...Raj N
San Francisco, CA
Tom Kyte
December 26, 2006 - 9:33 pm UTC

just use a single update statement, no code

that will be the most optimal. it will generate the least amount of undo, redo, use the least amount of processing resources and since there is no code... will result in less bugs.

Updating millions of rows..

A reader, December 27, 2006 - 12:10 am UTC

Yes, Tom. That is what I'm thinking. But is it possible to write it in this case?

I'd really appreciate it if you could show how in this case.

Regds...Rajesh
Tom Kyte
December 28, 2006 - 8:33 am UTC

merge into evaluation e
using (your_select) x
on ( e.application_id = x.application_id )
when matched then
update set market_segment_cd = x.market_segment_cd .....

A reader, December 28, 2006 - 3:03 pm UTC

I tried that before, but explain plan gives the error below.

ADMP> explain plan for
  2  MERGE INTO EVALUATION E
  3  USING (SELECT /*+ PARALLEL(EVA, 4) PARALLEL(POP, 4) */
  4         SUBSTR(POP.MARKET_SEGMENT_NM, 1, 10) MARKET_SEGMENT_CD,
  5         SUBSTR(POP.MARKET_SEGMENT_NM, 1, 3) AFFINITY_CD,
  6         EVA.APPLICATION_ID
  7         FROM
  8         DTSTG.POPULATION@ODSP POP,
  9         ADM.EVALUATION EVA
 10         WHERE
 11         POP.DIRECTOR_ID = EVA.UNDERWRITING_CELL_CD AND
 12         (EVA.DW_CREATE_DTTM >= d_last_run_date - 1 OR
 13         EVA.DW_UPDATE_DTTM >= d_last_run_date - 1)) C
 14  ON (E.APPLICATION_ID=C.APPLICATION_ID)
 15  WHEN MATCHED THEN
 16  UPDATE SET MARKET_SEGMENT_CD = C.MARKET_SEGMENT_CD,
 17  AFFINITY_CD = C.AFFINITY_CD,
 18  DW_POPULATION_UPDATE_DTTM = SYSDATE;
DW_POPULATION_UPDATE_DTTM = SYSDATE
*
ERROR at line 18:
ORA-00905: missing keyword


Probably it's looking for the WHEN NOT MATCHED condition.

Please advise how to fix this.

Regds...Rajesh
Tom Kyte
December 29, 2006 - 9:23 am UTC

well, without your tables - I cannot even begin to reproduce (we need, well, something we can run)

Also, your references to plsql variables won't fly, you need to use :bind_name
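
For what it's worth, the guess above is most likely right: on 9i both MERGE branches are required. Combining that with the dummy-insert workaround that appears later in this thread, and using a bind variable in place of the PL/SQL variable, the statement might look like this (a sketch, untested without the tables):

MERGE INTO EVALUATION E
USING (SELECT /*+ PARALLEL(EVA, 4) PARALLEL(POP, 4) */
              SUBSTR(POP.MARKET_SEGMENT_NM, 1, 10) MARKET_SEGMENT_CD,
              SUBSTR(POP.MARKET_SEGMENT_NM, 1, 3) AFFINITY_CD,
              EVA.APPLICATION_ID
         FROM DTSTG.POPULATION@ODSP POP,
              ADM.EVALUATION EVA
        WHERE POP.DIRECTOR_ID = EVA.UNDERWRITING_CELL_CD
          AND (EVA.DW_CREATE_DTTM >= :d_last_run_date - 1 OR
               EVA.DW_UPDATE_DTTM >= :d_last_run_date - 1)) C
ON (E.APPLICATION_ID = C.APPLICATION_ID)
WHEN MATCHED THEN
  UPDATE SET MARKET_SEGMENT_CD = C.MARKET_SEGMENT_CD,
             AFFINITY_CD = C.AFFINITY_CD,
             DW_POPULATION_UPDATE_DTTM = SYSDATE
WHEN NOT MATCHED THEN
  /* this branch can never fire: the USING query only returns EVALUATION rows */
  INSERT (APPLICATION_ID) VALUES (NULL);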

Update Hanging for ever

vidyanath, August 02, 2007 - 5:12 pm UTC

My update is hanging forever

update table1 t1 set t1.link_1 = (Select t2.link_2 from
table2 t2 where t1.id=t2.id_2);

There is a PK index on t1.id but it's doing a full table scan. It is using the index on the partitioned table t2. Statistics are up to date with the CBO. T1 has 3 million records whereas t2 has 30 million. The job has been running for 10 hours. Tried to use hints but it's still not using the PK index on t1.
Any clues? We are on 10.2.0.2.
Tom Kyte
August 05, 2007 - 12:51 pm UTC

arg - this is a semantic annoyance.

No, your update is not "hanging forever".

Your update is NOT HANGING

Your update is going slower than you anticipated, but it is not hanging.


It might be blocked.
It might be taking a long time.
But it is not hanging.


You give nothing to go on - I would hate for this to use an index. And of what possible use is an index on T1??? you are UPDATING EVERY SINGLE ROW IN T1 - it would be an utter and complete waste to read the index (which you are making us update anyway, that "is not going to be fast" - likely you mean to disable the index, update t1, rebuild the index)


update( select t1.link_1, t2.link_2 from table1 t1, table2 t2 where t1.id = t2.id_2 )
set link_1 = link_2;


I would pray for two full scans and a hash join, no way you want to do 3 million index range scans!!!!!!!


Use an outer join if necessary from T1 to T2 to set NULL values for missing values in T2.
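
If rows in T1 that have no match in T2 should simply be left alone rather than set to NULL, a MERGE is another option (a sketch, assuming T2.ID_2 is unique so each T1 row matches at most one T2 row):

merge into table1 t1
using table2 t2
on (t1.id = t2.id_2)
when matched then
  update set t1.link_1 = t2.link_2;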

Achieve in One Sql thru Update Statement

Raj, August 15, 2007 - 8:57 am UTC

Hi Tom,

I am trying to achieve the following output through one update statement. I am not sure whether it is feasible.
The environment is oracle 9.2.0.8 on Linux Operating System.

Moreover, the actual table has 14 million rows. If the update statement is very high cost, what better option do I have to achieve this result?

What is really needed is to take max(state_key) grouped by bundle_key, term_key, substr(offer_id,1,8), substr(offer_id,10,7), so that only the third record in each group gets updated, which is the expected output result.

Reply is appreciated..

SQL> desc atest
 Name                                      Null?    Type
 ----------------------------------------- -------- -----
 BUNDLE_KEY                                         NUMBER
 TERM_KEY                                           NUMBER
 STATE_KEY                                          NUMBER
 OFFER_ID                                      VARCHAR2(50)
 
 current output
 
 BUNDLE_KEY TERM_KEY STATE_KEY OFFER_ID
---------- -------- --------- -----------------------------
       101        1        41 12345678:1234567:200601KMA01
       101        1        51 12345678:1234567:200601KMA01
       101        1        66 12345678:1234567:200601KMA01
       101        2        51 12345678:1234567:200601KMA01
       101        2        41 12345678:1234567:200601KMA01
       101        2        66 12345678:1234567:200601KMA01
       
expected output as       

 BUNDLE_KEY TERM_KEY STATE_KEY OFFER_ID
---------- -------- --------- -----------------------------
       101        1        41 12345678:1234567:200601KMA01
       101        1        51 12345678:1234567:200601KMA01
       101        1        66 123456781234567
       101        2        51 12345678:1234567:200601KMA01
       101        2        41 12345678:1234567:200601KMA01
       101        2        66 123456781234567

Tom Kyte
August 20, 2007 - 1:56 pm UTC

no create, no inserts.....

no look.

update

sam, August 20, 2007 - 4:10 pm UTC

excellent example.

Raj, August 22, 2007 - 1:29 pm UTC

Thanks for your reply. Here is what I did in two steps.

But I am wondering whether this can be achieved in one step by doing directly through an update statement.

insert /*+append*/ into offer_temp 
(
state_key,
term_key,
bundle_key,
offer_id_conc
)
select state_key, term_key, bundle_key, offer_id_conc from 
(select max(State_key) state_key, term_key, bundle_key, substr(offer_id,1,8)||substr(offer_id,10,7) offer_id_conc
                                    from offer_xref b 
                                    group by term_key, bundle_key,substr(offer_id,1,8), substr(offer_id,10,7)) 


update offer_xref a set offer_id_upd = (
select offer_id_conc from offer_temp b where a.BUNDLE_KEY = b.BUNDLE_KEY
and a.STATE_KEY = b.state_key
and a.TERM_KEY  = b.term_key)

raj, August 22, 2007 - 1:37 pm UTC

Ignore the Previous code because of the table name..

Here is the right table name matching with the example what I posted already.

I have done in two steps. I am wondering whether this can be done in one update statement.. Let me know your comments.

insert /*+append*/ into offer_temp 
(
state_key,
term_key,
bundle_key,
offer_id_conc
)
select state_key, term_key, bundle_key, offer_id_conc from 
(select max(State_key) state_key, term_key, bundle_key, substr(offer_id,1,8)||substr(offer_id,10,7) 
 offer_id_conc
 from atest b    
 group by term_key, bundle_key,substr(offer_id,1,8), 
 substr(offer_id,10,7)
)    

update atest a set offer_id_upd = (
select offer_id_conc from offer_temp b where a.BUNDLE_KEY = b.BUNDLE_KEY
and a.STATE_KEY = b.state_key
and a.TERM_KEY  = b.term_key)

raj, August 22, 2007 - 1:50 pm UTC

Sorry, again I missed the create and insert script..

Here is the scripts..

CREATE TABLE ATEST
(
  BUNDLE_KEY  NUMBER,
  TERM_KEY    NUMBER,
  STATE_KEY   NUMBER,
  OFFER_ID    VARCHAR2(50 BYTE)
)

INSERT INTO ATEST ( BUNDLE_KEY, TERM_KEY, STATE_KEY, OFFER_ID ) VALUES ( 
101, 1, 41, '12345678:1234567:200601KMA01'); 
INSERT INTO ATEST ( BUNDLE_KEY, TERM_KEY, STATE_KEY, OFFER_ID ) VALUES ( 
101, 1, 51, '12345678:1234567:200601KMA01'); 
INSERT INTO ATEST ( BUNDLE_KEY, TERM_KEY, STATE_KEY, OFFER_ID ) VALUES ( 
101, 1, 66, '12345678:1234567:200601KMA01'); 
INSERT INTO ATEST ( BUNDLE_KEY, TERM_KEY, STATE_KEY, OFFER_ID ) VALUES ( 
101, 2, 41, '12345678:1234567:200601KMA01'); 
INSERT INTO ATEST ( BUNDLE_KEY, TERM_KEY, STATE_KEY, OFFER_ID ) VALUES ( 
101, 2, 51, '12345678:1234567:200601KMA01'); 
INSERT INTO ATEST ( BUNDLE_KEY, TERM_KEY, STATE_KEY, OFFER_ID ) VALUES ( 
101, 2, 66, '12345678:1234567:200601KMA01'); 
COMMIT;

CREATE TABLE OFFER_TEMP
(
  BUNDLE_KEY        INTEGER,
  TERM_KEY          INTEGER,
  STATE_KEY         NUMBER(10),
  OFFER_ID_CONC     VARCHAR2(50 BYTE)
 )

Tom Kyte
August 22, 2007 - 2:43 pm UTC

ops$tkyte%ORA10GR2> merge into atest a
  2  using
  3  (
  4  select state_key, term_key, bundle_key, offer_id_conc from
  5  (select max(State_key) state_key, term_key, bundle_key, substr(offer_id,1,8)||substr(offer_id,10,7)
  6   offer_id_conc
  7   from atest b
  8   group by term_key, bundle_key,substr(offer_id,1,8),
  9   substr(offer_id,10,7)
 10  )
 11  ) b
 12  on (
 13  a.BUNDLE_KEY = b.BUNDLE_KEY
 14  and a.STATE_KEY = b.state_key
 15  and a.TERM_KEY = b.term_key)
 16  when matched then update set offer_id = offer_id_conc
 17  /

2 rows merged.

ops$tkyte%ORA10GR2> select * from atest;

BUNDLE_KEY   TERM_KEY  STATE_KEY OFFER_ID
---------- ---------- ---------- --------------------------------------------------
       101          1         41 12345678:1234567:200601KMA01
       101          1         51 12345678:1234567:200601KMA01
       101          1         66 123456781234567
       101          2         41 12345678:1234567:200601KMA01
       101          2         51 12345678:1234567:200601KMA01
       101          2         66 123456781234567

6 rows selected.

insert select

A reader, August 23, 2007 - 10:39 am UTC

Dear Tom,

I have the following situation

CREATE TABLE T
(
ID NUMBER,
TYP VARCHAR2(3),
VAL VARCHAR2(5)
)

CREATE TABLE T_DEN
(
ID NUMBER,
VAL_TYP_01 VARCHAR2(5),
VAL_TYP_02 VARCHAR2(5),
VAL_TYP_03 VARCHAR2(5),
VAL_TYP_04 VARCHAR2(5)
)

insert into t values ( 15, '01', 'XX1');

insert into t values ( 15, '02', 'XX2');

insert into t values ( 15, '03', 'XX3');

insert into t values ( 15, '04', 'XX4');


insert into t values ( 30, '01', 'YY1');

insert into t values ( 30, '02', 'YY2');

insert into t values ( 30, '03', 'YY3');

insert into t values ( 30, '04', 'YY4');

commit;

select * from t;

ID TYP VAL
--------- --- -----
15 01 XX1
15 02 XX2
15 03 XX3
15 04 XX4
30 01 YY1
30 02 YY2
30 03 YY3
30 04 YY4

I would like to have the t_den table filled up with the t table in order to obtain the following results

select * from t_den;

ID VAL_T VAL_T VAL_T VAL_T
-- ----- ----- ----- -----
15 XX1 XX2 XX3 XX4
30 YY1 YY2 YY3 YY4

is it possible to obtain this result by using a select/insert

Thanks a lot

PS : in real life I have 1 million rows
Tom Kyte
August 23, 2007 - 1:13 pm UTC

ops$tkyte%ORA10GR2> select id,
  2         max( case when typ='01' then val end ) v1,
  3         max( case when typ='02' then val end ) v2,
  4         max( case when typ='03' then val end ) v3,
  5         max( case when typ='04' then val end ) v4
  6    from t
  7   group by id;

        ID V1    V2    V3    V4
---------- ----- ----- ----- -----
        30 YY1   YY2   YY3   YY4
        15 XX1   XX2   XX3   XX4
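
To actually load T_DEN, that pivot query can simply be wrapped in an INSERT ... SELECT (same tables and columns as the example above):

insert into t_den (id, val_typ_01, val_typ_02, val_typ_03, val_typ_04)
select id,
       max( case when typ='01' then val end ),
       max( case when typ='02' then val end ),
       max( case when typ='03' then val end ),
       max( case when typ='04' then val end )
  from t
 group by id;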

How about 11g?

S, August 23, 2007 - 3:44 pm UTC

Is it too early for an 11g example :)?
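
For what it's worth, on 11g the same pivot can also be written with the PIVOT clause (a sketch, assuming 11gR1 or later):

select *
  from (select id, typ, val from t)
 pivot ( max(val)
         for typ in ('01' as val_typ_01, '02' as val_typ_02,
                     '03' as val_typ_03, '04' as val_typ_04) );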

Raj, August 24, 2007 - 9:03 am UTC

Hi,

As I already mentioned, the environment is 9.2.0.8.

Your merge statement only works on 10gR2, because it has only an UPDATE clause.

Thanks
Tom Kyte
August 27, 2007 - 3:18 pm UTC

well, since you are merging into the table, USING the table itself, you know for a fact nothing will be inserted, right?

so, use a dummy insert:
merge ....
/* this line never happens */
when not matched then insert (any_column_you_want_to_reference) values (null);



and you must realize that I do not reconstruct everything you've ever said review upon review ago...

Updating one table from another

Dheeraj, November 22, 2007 - 7:38 am UTC

Hi Tom,

I have a requirement for which steps are as mentioned below:

CREATE TABLE test1
(id NUMBER, description VARCHAR2(100));

INSERT INTO test1
VALUES(1,'a');

INSERT INTO test1
VALUES(1,'b');

INSERT INTO test1
VALUES(1,'c');

INSERT INTO test1
VALUES(2,'b');

INSERT INTO test1
VALUES(2,'c');

CREATE TABLE test2
(id NUMBER,description VARCHAR2(100));

INSERT INTO test2
VALUES(1,'xxx');

INSERT INTO test2
VALUES(1,'xxx');

INSERT INTO test2
VALUES(2,'xxx');

COMMIT;

Now, my requirement is to update table test2 with expected output as:

TEST2

id description
1 a
1 a
2 b

i.e. the hierarchy/priority of description column in test1 is a > b > c,
i.e. If description value = 'a' exists for a particular id in test1,
then all the records of that ID in test2 should be updated with the same description.

Above logic holds good for the remaining description values, namely, b & c.

I want to see above output in a single update statement.

Many thanks,

Dheeraj
Tom Kyte
November 26, 2007 - 10:57 am UTC

well, if a>b>c and a,b,c are really strings that sort like that, then:

ops$tkyte%ORA10GR2> select id, min(description)  d
  2    from test1
  3   group by id;

        ID D
---------- -----
         1 a
         2 b


if a,b,c were "concepts" here - and the details are much more complex, you still need a way to get the data - so you can use row_number and a custom sort:


ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select id, description d
  2    from (select id, description, row_number() over (partition by id order by decode( description, 'a', 1, 'b', 2, 'c', 3 )) rn
  3            from test1
  4         )
  5   where rn = 1
  6  /

        ID D
---------- -----
         1 a
         2 b


either way, once you write the query against Test1 to get the right data, you use merge:




ops$tkyte%ORA10GR2> select * from test2;

        ID DESCR
---------- -----
         1 xxx
         1 xxx
         2 xxx

ops$tkyte%ORA10GR2> merge into test2
  2  using (
  3  select id, min(description)  d
  4    from test1
  5   group by id ) test1
  6  on (test2.id = test1.id)
  7  when matched then update set description = test1.d;

3 rows merged.

ops$tkyte%ORA10GR2> select * from test2;

        ID DESCR
---------- -----
         1 a
         1 a
         2 b


and if you are on 9i - you need a when not matched, so we just change the using query to ensure ONLY rows that exist in test2 are represented in test1's output:

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> merge into test2
  2  using (
  3  select id, min(description)  d
  4    from test1
  5   where id IN ( select id from test2 )
  6   group by id ) test1
  7  on (test2.id = test1.id)
  8  when matched then update set description = test1.d
  9  when not matched /* never happens! */ then insert(id) values ( null) ;

3 rows merged.


Facing Same Problem

Taral Desai, October 31, 2008 - 5:53 pm UTC

Hello Sir,

We are trying to use this update, but it's taking a long time - around 26 hrs. Any thoughts on this?

UPDATE /*+ parallel(joinview) */ (SELECT /*+  PARALLEL(fs) PARALLEL(fa)  */
       FS.NOT_FINAL_ID AS UPDATE_THIS,
       FA.NOT_FINAL_ID AS UPDATE_WITH
        FROM PART_D.FINAL_ACTION      FA,
             PART_D.PART_D_FACT_STAGE FS
       WHERE FS.PART_D_ID = FA.PART_D_ID) JOINVIEW
   SET UPDATE_THIS = UPDATE_WITH;
 
Plan hash value: 2253934120
 
--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                     |                    |    18M|   399M|  1234   (7)| 00:00:18 |       |       |        |      |            |
|   1 |  UPDATE                              | PART_D_FACT_STAGE  |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                     |                    |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)               | :TQ10000           |     1 |    11 |     0   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| PART_D_FACT_STAGE  |     1 |    11 |     0   (0)| 00:00:01 |       |       |  Q1,00 | PCWC |            |
|   5 |      NESTED LOOPS                    |                    |    18M|   399M|  1234   (7)| 00:00:18 |       |       |  Q1,00 | PCWP |            |
|   6 |       PX BLOCK ITERATOR              |                    |       |       |            |          |       |       |  Q1,00 | PCWC |            |
|   7 |        TABLE ACCESS FULL             | FINAL_ACTION       |    18M|   208M|  1211   (5)| 00:00:17 |       |       |  Q1,00 | PCWP |            |
|   8 |       PARTITION RANGE ALL            |                    |     1 |       |     0   (0)| 00:00:01 |     1 |    37 |  Q1,00 | PCWP |            |
|   9 |        PARTITION LIST ALL            |                    |     1 |       |     0   (0)| 00:00:01 |     1 |  LAST |  Q1,00 | PCWP |            |
|* 10 |         INDEX RANGE SCAN             | IDX_PDFS_PART_D_ID |     1 |       |     0   (0)| 00:00:01 |     1 |   180 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
   1 - SEL$DA9F4B51
   4 - SEL$DA9F4B51 / FS@SEL$1
   7 - SEL$DA9F4B51 / FA@SEL$1
  10 - SEL$DA9F4B51 / FS@SEL$1
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
  10 - access("FS"."PART_D_ID"="FA"."PART_D_ID")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
   2 - (upd=5; cmp=4,5; cpy=5) "FA"."NOT_FINAL_ID"[NUMBER,22], "FA"."PART_D_ID"[NUMBER,22], "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22], 
       "FS"."NOT_FINAL_ID"[NUMBER,22]
   3 - (#keys=0) "FA"."NOT_FINAL_ID"[NUMBER,22], "FA"."PART_D_ID"[NUMBER,22], "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22], 
       "FS"."NOT_FINAL_ID"[NUMBER,22]
   4 - "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22], "FS"."NOT_FINAL_ID"[NUMBER,22]
   5 - (#keys=0) "FA"."NOT_FINAL_ID"[NUMBER,22], "FA"."PART_D_ID"[NUMBER,22], "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22]
   6 - "FA"."NOT_FINAL_ID"[NUMBER,22], "FA"."PART_D_ID"[NUMBER,22]
   7 - "FA"."NOT_FINAL_ID"[NUMBER,22], "FA"."PART_D_ID"[NUMBER,22]
   8 - "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22]
   9 - "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22]
  10 - "FS".ROWID[ROWID,10], "FS"."PART_D_ID"[NUMBER,22]



Updating a table from another table

Maverick, March 24, 2009 - 9:13 am UTC

Tom, When I try to update table B joining table a like
Update (select a.id,b.id,...
from table a,table b
where a.id=b.id)
set a.name=b.name

I am getting "insufficient privileges" on table B, even though I am updating only table A.
I have all the privileges on table A but only select privileges on table B.
I am using Oracle 10g Release 2 and am not sure why it needs update privilege on my reference table [??].

Can you throw some light on this?
Thanks,
Tom Kyte
March 29, 2009 - 10:34 am UTC

that is the way it works with update, you can use MERGE which makes it entirely unambiguous

ops$tkyte%ORA11GR1> create table t1 ( x int, y int );

Table created.

ops$tkyte%ORA11GR1> create table t2 ( x int primary key, z int );

Table created.

ops$tkyte%ORA11GR1> insert into t1 values(1,null);

1 row created.

ops$tkyte%ORA11GR1> insert into t2 values(1,42);

1 row created.

ops$tkyte%ORA11GR1> grant select, update on t1 to scott;

Grant succeeded.

ops$tkyte%ORA11GR1> grant select on t2 to scott;

Grant succeeded.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> connect scott/tiger;
Connected.
scott%ORA11GR1>
scott%ORA11GR1> select * from ops$tkyte.t1;

         X          Y
---------- ----------
         1

scott%ORA11GR1>
scott%ORA11GR1> update (select t1.y, t2.z from ops$tkyte.t1 t1, ops$tkyte.t2 t2 where t1.x = t2.x )
  2  set y = z;
update (select t1.y, t2.z from ops$tkyte.t1 t1, ops$tkyte.t2 t2 where t1.x = t2.x )
                                                          *
ERROR at line 1:
ORA-01031: insufficient privileges


scott%ORA11GR1>
scott%ORA11GR1> select * from ops$tkyte.t1;

         X          Y
---------- ----------
         1

scott%ORA11GR1>
scott%ORA11GR1> merge into ops$tkyte.t1 t1
  2  using ops$tkyte.t2 t2
  3  on (t1.x = t2.x)
  4  when matched then update set y = z;

1 row merged.

scott%ORA11GR1>
scott%ORA11GR1> select * from ops$tkyte.t1;

         X          Y
---------- ----------
         1         42


Update or Merge statement

A reader, July 18, 2012 - 2:33 pm UTC

I believe this is MS Access code and I have been asked to make it work on an Oracle database.
I have taken over an Oracle database with an Access front end. I have reviewed suggestions on
your website but nothing I have tried (UPDATE or MERGE statements) has worked. We are trying
to update t_host_resources with a value from vw_ext_host_load. Other updates to t_host_resources
are hard-coded values. Hope you can help, and thank you!


update t_host_resources
inner join vw_ext_host_load
   on (vw_ext_host_load.host_id = t_host_resources.hst_id)
  and (t_host_resources.hstres_id = vw_ext_host_load.hstres_id)
set t_host_resources.resource_size = [vw_ext_host_load].[ext_ram],
    t_host_resources.resource_size_unit = 1008,
    t_host_resources.changedate = sysdate,
    t_host_resources.changeby = 'ADDM',
    t_host_resources.hstres_attributes = 'RAM - '
where (((t_host_resources.resource_type)=1028)
  AND ((t_host_resources.ISACTIVE)=1)
  AND ((t_host_resources.ISDELETED)=0));


vw_ext_host_load data                 t_host_resources data
host_id  hstres_id  EXT_RAM           hst_id   hstres_id  resource_size  hstres_attributes
1000421  1003560    896               1000421  1003560    896            RAM -
1000420  1003561    512               1000420  1003561    512            RAM -
1000422  1003562    2048              1000422  1003562    2048           RAM -
964      1003563    512               964      1003563    512            RAM -
1000540  1000482    8192              1000540  1000482    8              RAM -
1000541  1000487    8192              1000541  1000487    8              RAM -
1000543  1000494    8192              1000543  1000494    8              RAM -
1000269  1003564    65536             1000269  1003564    65536          RAM -
1000418  1003565    32768             1000418  1003565    32768          RAM -
1000271  1003566    4096              1000271  1003566    4096           RAM -
1        1003559    16384             1        1003559    16384          RAM -
Tom Kyte
July 18, 2012 - 3:30 pm UTC

no creates
no inserts
no look

i cannot write a sql statement for you without them.
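
For reference, the general shape of that Access join-update in Oracle would be the MERGE pattern used throughout this thread; with the table and column names from the post it might look like the sketch below (untested, since no create/insert scripts were given, and the WHERE clause on the update branch needs 10g or later):

merge into t_host_resources thr
using vw_ext_host_load v
on (    v.host_id   = thr.hst_id
    and v.hstres_id = thr.hstres_id )
when matched then update
   set thr.resource_size      = v.ext_ram,
       thr.resource_size_unit = 1008,
       thr.changedate         = sysdate,
       thr.changeby           = 'ADDM',
       thr.hstres_attributes  = 'RAM - '
 where thr.resource_type = 1028
   and thr.isactive      = 1
   and thr.isdeleted     = 0;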

update based on 2 tables, suppress ora-01779?

Marco, March 07, 2013 - 7:06 am UTC

Hello Tom,

I was happy to read in this thread how to update a table based on another one. Unfortunately I run into the ORA-01779 what you also explained. But I have an uncomfortable situation: I am not able to create a unique constraint on the referencing table, because it is a materialized view. So we know the column is unique, because it is on the master, but we cannot use that knowledge to do the update. Is there a way to bypass this, maybe tell Oracle to ignore the ORA-01779? Just because we know better.

Best regards,
Marco
Tom Kyte
March 07, 2013 - 9:01 am UTC

use merge.


merge into table
using (whatever query you want)
on (join condition)
when matched then update set....

update based on multiple tables, referencing column

A reader, March 08, 2013 - 3:31 am UTC

Hello Tom,

Thanks for your reply! It was interesting for me to try the merge into statement. My ORA-01779 is solved for now, but we exchanged it for an ORA-38104 (Columns referenced in the ON Clause cannot be updated) now :-)

Time to give some additional info. My initial update statement.

update (select huis.vnr_decl as huis_vnr, sxi.slt_vecozo as sxi_slt
          from ft_huisartsen huis, sxivsa sxi
         where huis.vnr_decl = sxi.vnr_decl
           and huis.id_vecozo is null
           and huis.srt_ei = 'XI')
   set huis_vnr = sxi_slt;

and the merge into statement I created.

merge into ft_huisartsen huis
using (select vnr_decl,slt_vecozo from sxivsa) sxi
on (huis.vnr_decl=sxi.vnr_decl and huis.id_vecozo is null and huis.srt_ei='XI')
when matched then update set huis.vnr_decl=sxi.vnr_decl;

So I (my developer) want to update a column that is referenced, based on a materialized view of which the referenced column is unique but is not allowed to have unique constraints.

Do you have any suggestions here?

Tom Kyte
March 11, 2013 - 8:10 am UTC

no creates
no inserts
no look
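
One workaround often used for ORA-38104 is to do the join inside the USING subquery and match back on ROWID, so the updated column no longer appears in the ON clause. A sketch only (untested without the scripts, and assuming the intent of the original update, i.e. setting vnr_decl from slt_vecozo):

merge into ft_huisartsen huis
using (select h.rowid rid, s.slt_vecozo
         from ft_huisartsen h, sxivsa s
        where h.vnr_decl = s.vnr_decl
          and h.id_vecozo is null
          and h.srt_ei   = 'XI') x
on (huis.rowid = x.rid)
when matched then update set huis.vnr_decl = x.slt_vecozo;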

Mr. SQL

MrSQL, March 26, 2013 - 8:05 pm UTC

Hi Tom,
In your very first response (please refer to the SQL below) what happens if "where a.id in ( select id from b )" clause is taken out (I mean commented or deleted)? Will the result of update be correct?

tkyte@ORACLE> update a
2 set a1 = ( select b1 from b where b.id = a.id )
3 where a.id in ( select id from b )
4 /
Tom Kyte
March 27, 2013 - 5:06 pm UTC

it would be wrong in general.

the assumption is that there is NOT a row in B for every row in A.


If there were an A.ID value that is not in B.ID, then your update without the where clause would set that row's value to NULL, whereas the one with the where clause would prevent it from being modified.


ops$tkyte%ORA11GR2> create table a ( id int, a1 varchar2(30) );

Table created.

ops$tkyte%ORA11GR2> create table b ( id int, b1 varchar2(30) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into a values ( 1 , 'hello' );

1 row created.

ops$tkyte%ORA11GR2> insert into a values ( 2 , 'world' );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into b values ( 1, 'goodbye' );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> update a
  2     set a1 = ( select b1 from b where b.id = a.id )
  3   where a.id in ( select id from b );

1 row updated.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from a;

        ID A1
---------- ------------------------------
         1 goodbye
         2 world

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> rollback;

Rollback complete.

ops$tkyte%ORA11GR2> update a
  2     set a1 = ( select b1 from b where b.id = a.id );

2 rows updated.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from a;

        ID A1
---------- ------------------------------
         1 goodbye
         2

ops$tkyte%ORA11GR2> 


see the different outcomes?

MrSQL

A reader, April 29, 2013 - 9:36 am UTC

Thanks a lot for the detailed response Tom!

what if updating from another query?

thenewbee, July 10, 2013 - 2:07 pm UTC

Hi,
I'm from a SQL Server background. Consider a scenario where I need to update a table based on a query that joins several other tables; do you mean that we should follow something like

update tabA A
   set name = ( select value
                  from ( select xx
                           from tabB
                           join tabC
                           -- join continues..
                       ) val
              )
 where A.ID = val.ID
Is this the approach?


Tom Kyte
July 16, 2013 - 3:38 pm UTC

you can use that approach but it is typically very inefficient.

better to update a join


update (select ....
from A join B join C .... )
set a_col = xx;


or to merge if necessary.

Update Column

Sanjay, November 17, 2013 - 2:02 pm UTC

Hi Tom,

I have two tables, CONTROL and CONTROL_HIST. CONTROL has columns Name, Type and Owner, and CONTROL_HIST has the same columns plus Set_time and Set_userId.

Now I have added Set_time and Set_userId to the CONTROL table as well, and I want to populate them from CONTROL_HIST. The CONTROL table has a UNIQUE INDEX on "Name", "TYPES", "OWNERS".

But the CONTROL_HIST table can have multiple records per key, with different Set_time and Set_userId values.

I want only the latest Set_time, along with its Set_userId, to be populated in the CONTROL table.

Can you please help me with the query for the above requirement?

Thanks a lot in advance.
Sanju
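
A sketch of one way to combine the ROW_NUMBER and MERGE techniques shown earlier in this thread for this requirement (untested; the key column names are assumed to match the unique index on "Name", "TYPES", "OWNERS"):

merge into control c
using ( select name, types, owners, set_time, set_userid
          from ( select h.*,
                        row_number() over (partition by name, types, owners
                                           order by set_time desc) rn
                   from control_hist h )
         where rn = 1 ) h
on (c.name = h.name and c.types = h.types and c.owners = h.owners)
when matched then
  update set c.set_time = h.set_time, c.set_userid = h.set_userid;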

how to update when the mapping is not one-to-one

Lou, November 20, 2013 - 5:06 am UTC

Tom,

I have been looking for an update where the mapping is not one-to-one, like the example below. Can this be done in SQL?

table 1 = code , old_values
table 2 = code, new values

I would like to update table 1 with the values of table 2


code  old_values    code  new_values
----  ----------    ----  ----------
ADPU  107           ADPU  149
BOMB  107           BOMB  149
DIEL  107           DIEL  135
ELEC  107           ELEC  135
EMES  107           EMES  149
FUAR  107           FUAR  149
GASN  107           GASN  135
GEEL  107           GEEL  135
IDBP  107           IDBP  17
ORAU  107           ORAU  149
POEJ  107           POEJ  149
POJU  107           POJU  149
POLE  107           POLE  149
POLI  107           POLI  149
PREL  107           PREL  135
PRGA  107           PRGA  135
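
A sketch of the MERGE pattern shown repeatedly above, applied to this example (the real table names are assumed here to be TABLE1 and TABLE2, and TABLE2.CODE is assumed to be unique):

merge into table1 t1
using table2 t2
on (t1.code = t2.code)
when matched then
  update set t1.old_values = t2.new_values;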

