v$sql contains *cumulative* data, so you need to check the EXECUTIONS column as well.
SQL> drop table t purge;
Table dropped.
SQL> create table t as select * from dba_objects;
Table created.
SQL>
SQL>
SQL> select max(created) from t;
MAX(CREAT
---------
23-MAR-17
1 row selected.
SQL> select executions, buffer_gets from v$sql
2 where sql_text like 'select max(created)%';
EXECUTIONS BUFFER_GETS
---------- -----------
1 1529
1 row selected.
SQL>
SQL> select max(created) from t;
MAX(CREAT
---------
23-MAR-17
1 row selected.
SQL> select max(created) from t;
MAX(CREAT
---------
23-MAR-17
1 row selected.
SQL> select max(created) from t;
MAX(CREAT
---------
23-MAR-17
1 row selected.
SQL> select max(created) from t;
MAX(CREAT
---------
23-MAR-17
1 row selected.
SQL> select max(created) from t;
MAX(CREAT
---------
23-MAR-17
1 row selected.
SQL>
SQL> select executions, buffer_gets from v$sql
2 where sql_text like 'select max(created)%';
EXECUTIONS BUFFER_GETS
---------- -----------
6 9129
1 row selected.
SQL>
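The arithmetic behind the transcript can be sketched directly from v$sql. This is an illustrative query only (the predicate and rounding are my own); because the figures are cumulative, the five executions after the first accounted for (9129 - 1529) / 5 = 1520 buffer gets each.

```sql
-- Sketch: average buffer gets per execution from the cumulative v$sql
-- figures. nullif guards against division by zero for never-run cursors.
select executions,
       buffer_gets,
       round(buffer_gets / nullif(executions, 0)) as avg_gets_per_exec
from   v$sql
where  sql_text like 'select max(created)%';
```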
To answer your questions:
1) The number of rows *returned* doesn't really have a bearing here. If I query a 1-billion-row table and get no rows back... I still scanned 1 billion rows.
Large I/O typically does not consume the buffer cache, because if we think a large scan is needed we will do what is called a "direct read", which comes from the data files straight into your session's private memory and bypasses the buffer cache. Even for a small (but still "largish") object, blocks that are read rarely are quickly aged out of the buffer cache, whilst more active blocks are retained.
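One way to see whether your own session used direct reads is the cumulative session statistic "physical reads direct". A sketch (the statistic name is standard, but check it exists in your version):

```sql
-- Sketch: did this session bypass the buffer cache with direct reads?
-- v$mystat holds cumulative statistics for the current session;
-- v$statname supplies the statistic names.
select n.name, m.value
from   v$mystat  m
       join v$statname n on n.statistic# = m.statistic#
where  n.name = 'physical reads direct';
```

Run it before and after your query; a jump in the value indicates the scan went direct rather than through the buffer cache.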
2) There isn't really a direct link, except in terms of a simpler one, namely "effort". If you have to consume a lot of resources (physical I/O, buffer gets, CPU, or anything else that is a limited resource) when running SQL, then whoever or whatever is running that SQL will probably experience this as slowness.
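That "effort" idea suggests a simple way to hunt for slow SQL: rank cached statements by work done per execution. A hedged sketch using standard v$sql columns (the `fetch first` syntax needs 12c or later; on older versions wrap the query and filter on rownum):

```sql
-- Sketch: the ten most "effortful" cached statements per execution.
select sql_id,
       executions,
       buffer_gets,
       round(buffer_gets / nullif(executions, 0)) as gets_per_exec
from   v$sql
where  executions > 0
order  by gets_per_exec desc
fetch  first 10 rows only;
```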