Hi, thanks for all your work here, I've learned a lot!
Now I have some questions on the SQL fetch phase and library cache lock/pin, which have had me stuck for days.
Here is something I found on a website:
A SELECT statement follows these steps:
1. parse: acquires a library cache lock (aka DDL lock) and downgrades it to null mode (aka breakable parse lock)
2. execute: acquires a library cache pin, then releases it
3. fetch: breakable parse lock only
4. close: releases the breakable parse lock

Also, from another site, https://www.oreilly.com/library/view/oracle-internals-an/156592598X/ch04s05.html : "Library cache locks can only be broken, however, when the library cache object is not also pinned. A pin is applied to the library cache object for a PL/SQL program unit or SQL statement while it is being compiled, parsed, or executed." (no mention of fetch)

From the above, what I gather is that the fetch phase doesn't REALLY NEED a library cache pin. From your site I also learned that a simple query like 'select * from big_table' (1 million rows in big_table) does not read all the rows at once: every fetch call returns a slice of rows, the client uses it, then the next fetch call returns the next slice, and so on.
My questions are:

1> While a fetch call is executing, doesn't it really need a library cache pin, to pin the execution plan in the shared pool, and cascading pins (in share mode) on big_table too? If not, how can Oracle prevent DDL on big_table while the fetch is executing? Wouldn't that be a mess?

2> What if, after the first fetch call, I drop big_table? Will the second fetch call be aware of that (has an invalidation occurred)? Will it continue fetching and returning rows, or return an error?

Long story short:

* From the parse and execute steps, I can see how library cache locks/pins work.
* But in the FETCH step, is a library cache lock/pin needed or not? Does every fetch call need to get a library cache lock (null), a library cache pin (share), and locks/pins on the dependent objects? If so, wouldn't those operations (per fetch call) put too much load (cost) on the database?
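The slice-at-a-time fetch model described above can be sketched roughly like this (a hypothetical Python illustration only, not Oracle code; the name `fetch_slices` and the row counts are made up for this example):

```python
# Hypothetical sketch: a result set drained in fixed-size slices,
# the way successive fetch calls return `arraysize` rows at a time.
def fetch_slices(rows, arraysize=50):
    """Yield successive slices of `rows`, one per simulated fetch call."""
    for start in range(0, len(rows), arraysize):
        yield rows[start:start + arraysize]

# A 1000-row "table" fetched 50 rows at a time.
table = list(range(1000))
slices = list(fetch_slices(table, arraysize=50))
print(len(slices))     # 20 fetch calls
print(len(slices[0]))  # 50 rows per call
```

Each iteration of the generator stands in for one fetch call; nothing about the whole result set is materialized up front.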
We don't *guarantee* that a fetch will work for the duration of the query, because you might do a fetch, walk away for a week, and then come back. We can't really hold the table "locked" for an unknown duration. That's why we allow things to happen to the table while you are fetching. You *might* get lucky... you might not. Here are a couple of examples:
--
-- Session 1
--
-- tiny tablespace with room for just one table
SQL> create tablespace ROOM_FOR_ONE datafile 'X:\ORADATA\DB18\PDB1\ROOM_FOR_ONE.DBF' size 2m;
Tablespace created.
SQL> create table t tablespace room_for_one
2 as select * from dba_objects
3 where rownum <= 1000;
Table created.
SQL>
SQL>
SQL> set arraysize 50
SQL> set pagesize 55
--
-- pausing so we will fetch approx 50 rows at a time
--
SQL> set pause on
SQL> select owner, object_name from t;
OWNER OBJECT_NAME
------------------------------ -------------------------------------
SYS I_FILE#_BLOCK#
SYS I_OBJ3
SYS I_TS1
SYS I_CON1
SYS IND$
SYS CDEF$
SYS C_TS#
SYS I_CCOL2
SYS I_PROXY_DATA$
SYS I_CDEF4
SYS I_TAB1
SYS CLU$
SYS I_PROXY_ROLE_DATA$_1
SYS I_OBJ1
SYS UNDO$
SYS I_UNDO2
SYS I_TS#
SYS I_FILE1
SYS I_COL2
SYS I_OBJ#
SYS C_OBJ#
SYS I_CDEF3
SYS C_COBJ#
--
-- Session 2
--
SQL> drop table t purge;
Table dropped.
--
-- Session 1
--
SYS I_CON2
SYS I_OBJ4
SYS CON$
SYS I_CDEF2
SYS ICOL$
SYS I_COL3
SYS I_CCOL1
SYS COL$
SYS I_ICOL1
...
SYS STREAMS$_DEST_OBJS_I
SYS STREAMS$_DEST_OBJ_COLS
SYS STREAMS$_DEST_OBJ_COLS_I
SYS STREAMS$_PRIVILEGED_USER
1000 rows selected.
So you can see that even though we *dropped* the table, we could still complete our query, because the table's data blocks were still there on disk. But let's repeat that now, and you'll see why I used a tiny tablespace:
SQL> create table t tablespace room_for_one
2 as select * from dba_objects
3 where rownum <= 1000;
Table created.
SQL>
SQL>
SQL> set arraysize 50
SQL> set pagesize 55
--
-- pausing so we will fetch approx 50 rows at a time
--
SQL> set pause on
SQL> select owner, object_name from t;
OWNER OBJECT_NAME
------------------------------ -------------------------------------
SYS I_FILE#_BLOCK#
SYS I_OBJ3
SYS I_TS1
SYS I_CON1
SYS IND$
SYS CDEF$
SYS C_TS#
SYS I_CCOL2
SYS I_PROXY_DATA$
SYS I_CDEF4
SYS I_TAB1
SYS CLU$
SYS I_PROXY_ROLE_DATA$_1
SYS I_OBJ1
SYS UNDO$
SYS I_UNDO2
SYS I_TS#
SYS I_FILE1
SYS I_COL2
SYS I_OBJ#
SYS C_OBJ#
SYS I_CDEF3
SYS C_COBJ#
--
-- Session 2
--
SQL> drop table t purge;
Table dropped.
SQL> create table t1 tablespace room_for_one
2 as select * from dba_objects
3 where rownum <= 1000;
Table created.
SQL>
SQL> insert into t1 select * from t1;
1000 rows created.
SQL> commit;
Commit complete.
SQL> insert into t1 select * from t1;
2000 rows created.
SQL> commit;
Commit complete.
SQL> insert into t1 select * from t1;
insert into t1 select * from t1
*
ERROR at line 1:
ORA-01653: unable to extend table T1 by 128 in tablespace ROOM_FOR_ONE
--
-- Session 1
--
(hit Enter to get the next set of rows)
ERROR:
ORA-08103: object no longer exists
150 rows selected.
When I created T1, because the tablespace was so small, it reused the space that T had originally occupied. The next fetch read "garbage" back from the disk, and we detected that our T no longer exists.
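Conceptually (this is a simplification for illustration, not Oracle's actual implementation), you can think of each fetch validating that the blocks it reads still belong to the object the cursor was opened against, and raising ORA-08103 once the space has been reused. A minimal sketch, with all class and attribute names invented for this example:

```python
# Hypothetical sketch: a cursor that remembers the "data object id" of the
# segment it was opened against, and errors once that space is reused.
class ObjectNoLongerExists(Exception):
    pass

class Segment:
    def __init__(self, data_object_id, rows):
        self.data_object_id = data_object_id
        self.rows = rows

class Cursor:
    def __init__(self, segment, arraysize=50):
        self.segment = segment
        self.expected_id = segment.data_object_id  # captured at open time
        self.pos = 0
        self.arraysize = arraysize

    def fetch(self):
        # Each fetch re-checks that the blocks still carry the expected id.
        if self.segment.data_object_id != self.expected_id:
            raise ObjectNoLongerExists("ORA-08103: object no longer exists")
        batch = self.segment.rows[self.pos:self.pos + self.arraysize]
        self.pos += len(batch)
        return batch

seg = Segment(data_object_id=1001, rows=list(range(200)))
cur = Cursor(seg)
first = cur.fetch()           # succeeds: the segment is untouched
seg.data_object_id = 1002     # space reused by a new object (like T1 above)
try:
    cur.fetch()
except ObjectNoLongerExists as e:
    print(e)                  # ORA-08103: object no longer exists
```

In the first example above, dropping T left the old blocks physically intact, so the fetches kept succeeding; it was only when T1 overwrote that space that the check failed.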
There are other scenarios where this can happen, e.g.