1) we use db file scattered reads when we know we are going to read more than one block. we use db file sequential reads when we know we are going to read only one block.
I'm not really sure how to "compare" them other than to say we use them in different places, under entirely different circumstances.
If we were deciding whether to use sequential versus scattered reads given the same circumstances - then a comparison would make sense. However, since the code is basically:
if (blocks to read = 1)
then
   read that single block;
   add the elapsed time to the wait event "db file sequential read";
elsif (blocks to read > 1)
then
   read those blocks with one multiblock read;
   add the elapsed time to the wait event "db file scattered read";
end if;
I don't see the relevance of comparing them; in fact, I don't know how to compare them.
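If you want to see how much time the instance has spent waiting on each, you can just query v$system_event (a quick sketch - the event names are the standard ones, adjust the columns to taste):

select event, total_waits, time_waited, average_wait
  from v$system_event
 where event in ( 'db file sequential read', 'db file scattered read' )
 order by event;

That just lists the two events side by side - which is about as far as "comparing" them goes.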
2) for that delete, we addressed that - a delete will full scan the table, but it still has to maintain the indexes! In order to delete a row, we modify the block in the table and then we have to index range scan each of the indexes on that table, one after the other, in order to remove the key entries as well.
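A minimal sketch of what I mean (the table and index names are just made up for illustration):

create table t ( id number, val number, pad varchar2(100) );
create index t_id_idx  on t(id);
create index t_val_idx on t(val);

insert into t
select rownum, mod(rownum,100), rpad('x',100)
  from dual
connect by level <= 10000;
commit;

-- PAD is not indexed, so this delete must full scan T, but for every row
-- it removes it also has to range scan T_ID_IDX and T_VAL_IDX to remove
-- the corresponding key entries
delete from t where pad like 'x%';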
But for a normal full table scan - what if the max IO size is 32 blocks and you have 33 blocks to read?
Or what if you have blocks 2, 4, 6, 8, ... (all of the even ones) already in the buffer cache and you start full scanning? We'll have to read the odd blocks in from disk one block at a time - and those single block reads are recorded as db file sequential read waits, even though we are in the middle of a full scan.
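If you want to see what your max IO size is set to, you can look at the db_file_multiblock_read_count parameter (a sketch - the comments just walk through the 33 block case above):

select name, value
  from v$parameter
 where name = 'db_file_multiblock_read_count';

-- if that value is 32 and the scan still has 33 blocks to read from disk,
-- we do one 32 block read (counted under db file scattered read) followed
-- by one single block read (counted under db file sequential read)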