please take the comment in the context of your entire paragraph:
...
I accept the argument, but then the redundant storage provides the fail-over part. The designer can
still design projections without duplicates. Nevertheless, if Oracle provides this, it's an optional
feature, which might come in handy in some circumstances.
.....
you seem to be alluding to "but they use redundancy for failover" (this is the first time failover has been mentioned). They do not use redundancy for failover; they use it for the same reason we use materialized views - to avoid WORK. Further, now you write:
...then the optimizer
might be wise enough to provide the query results from the
other projection....
meaning - it might not be; in fact, it probably isn't. They do not store redundant copies for failover, they do it for performance. If some higher-availability feature happens to fall out of that, it is by accident, it was not their GOAL (if it were their goal, the redundant copy would be a mirror image - so that when you fail over, performance doesn't go down the tubes).
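To illustrate the "redundancy to avoid WORK" point, here is a minimal, hypothetical sketch (not Oracle code, and not any real column store): the same rows are stored twice, each copy presorted on a different column, so a range query on either column becomes a binary search instead of a scan plus sort. All names here are made up for illustration.

```python
import bisect

# Hypothetical: two redundant "projections" of the same rows,
# each presorted on a different column - redundancy to avoid WORK,
# not to provide failover.
rows = [(oid, "name%d" % (oid % 7)) for oid in range(10000)]

# Projection 1: sorted by object_id; Projection 2: sorted by name.
by_id = sorted(rows, key=lambda r: r[0])
by_name = sorted(rows, key=lambda r: r[1])

def count_id_range(lo, hi):
    """Answer a range query from the projection already sorted on id -
    a binary search on the sorted copy instead of a full scan + sort."""
    keys = [r[0] for r in by_id]
    return bisect.bisect_right(keys, hi) - bisect.bisect_left(keys, lo)

print(count_id_range(100, 199))  # 100 rows fall in [100, 199]
```

Note that the two copies are deliberately NOT mirror images of each other - which is exactly why losing one is not a failover story: a query planned against `by_id` would run, but badly, if it were forced onto `by_name`.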
... Yet their joining mechanism is efficient.
...
unfounded - where is the science, where are the numbers? (And what does "store each in a different file" have to do with anything?)
... My last argument was that
time-travel can't be achieved in the index through undo log ....
you are wrong there. You are entirely, utterly, completely wrong there.
Undo is used for one thing - rolling back (introducing a fancy term like 'time travel' just confuses the issue). Any structure can use the undo generated for it to put it back the way it was.
stop visualizing how you think it has to be done; it doesn't work that way. Just because something could have been done the way you imagine does not mean that it was.
Oracle does not work the way you describe at all.
....
"You need to learn about Oracle then - multi-versioning, read consistency."
I have a fair idea of this. I just say that this can't work for an index and
can only work for a table.
......You have a long way to cover yet; you persist in thinking it must work the way "I" (meaning you) think it should work. Until you lose that perspective - you will get it wrong.
....
If you can think of how b-tree is getting navigated,
you can easily get my point.
........
you need to rethink that yourself.
ops$tkyte%ORA9IR2> create table t tablespace USERS as select * from all_objects;
Table created.
ops$tkyte%ORA9IR2> create index t_idx on t(object_id) tablespace TOOLS;
Index created.
ops$tkyte%ORA9IR2> exec dbms_stats.gather_table_stats(user,'T');
PL/SQL procedure successfully completed.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> set autotrace traceonly explain
ops$tkyte%ORA9IR2> select count(*) from t;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FAST FULL SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=4 Card=30755)
ops$tkyte%ORA9IR2> set autotrace off
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> delete from t;
30755 rows deleted.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> declare
2 pragma autonomous_transaction;
3 begin
4 execute immediate 'alter tablespace users offline';
5 end;
6 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select count(*) from t;
COUNT(*)
----------
0
ops$tkyte%ORA9IR2> select * from t;
select * from t
*
ERROR at line 1:
ORA-00376: file 9 cannot be read at this time
ORA-01110: data file 9: '/home/ora9ir2/oradata/ora9ir2/users01.dbf'
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> declare
2 pragma autonomous_transaction;
3 n number;
4 begin
5 select count(*) into n from t;
6 dbms_output.put_line( n );
7 end;
8 /
30755
PL/SQL procedure successfully completed.
ops$tkyte%ORA9IR2> declare
2 pragma autonomous_transaction;
3 begin
4 execute immediate 'alter tablespace users online';
5 end;
6 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA9IR2> rollback;
Rollback complete.
ops$tkyte%ORA9IR2> select count(*) from t;
COUNT(*)
----------
30755
think about that for a while.
delete all records from t - NO COMMIT
offline the datafiles for the table (but not the index)
queries that hit the index are OK; the query that hits the table fails - it cannot see the table.
so, we use an autonomous transaction - and hit the index: it is as it was BEFORE the delete (we used UNDO for that....)
we online the datafiles and roll back - and do the count again; it is greater than zero, showing that undo was applied to the table - the transaction had never closed - and we used UNDO on the index to roll it back as well.
I have other ways to show this using tkprof as well - but - undo does not work the way you ENVISION it should.
You are missing the point of how multiversioning (which works at the block level) *WORKS*
Stop thinking "rows", start thinking "traverse all of the blocks as of the same consistent committed point in time"
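A minimal sketch of that block-level view, with all names invented for illustration (this is not Oracle's implementation): each block carries the SCN of its last change plus undo records, and a reader rolls any too-new block back with undo until it sees the block as of its snapshot SCN - it never thinks in "rows".

```python
# Hypothetical sketch of block-level multiversioning: a reader walks
# blocks, and for any block changed after its snapshot SCN it applies
# undo to reconstruct the block AS OF that SCN.

class Block:
    def __init__(self, rows):
        self.rows = list(rows)
        self.scn = 0                # SCN of the last change to this block
        self.undo = []              # (prior_scn, prior_rows) undo records

    def write(self, new_rows, scn):
        self.undo.append((self.scn, list(self.rows)))
        self.rows, self.scn = list(new_rows), scn

    def as_of(self, snapshot_scn):
        """Roll a copy of the block back with undo until it is old enough."""
        rows, scn, undo = self.rows, self.scn, list(self.undo)
        while scn > snapshot_scn and undo:
            scn, rows = undo.pop()
        return rows

blocks = [Block([1, 2]), Block([3, 4])]
snapshot_scn = 0                    # query starts: one consistent point in time

blocks[1].write([], scn=5)          # another transaction empties block 1

# the reader sees EVERY block as of the SAME snapshot SCN
seen = [r for b in blocks for r in b.as_of(snapshot_scn)]
print(seen)  # [1, 2, 3, 4] - the delete after the snapshot is invisible
```

The demo above is the same idea: the autonomous transaction's query had a snapshot SCN from before the delete, so every index block it touched was rolled back with undo to that consistent point in time.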