We use Business Objects against a database set up solely for generating reports. It is a two-node Exadata RAC with ASM storage, and all of the BO sessions log in/connect as the same Oracle user.
During our last month-end, which coincided with quarter-end, we saw many sessions with the "enq: SS - contention" wait event. We also intermittently saw "buffer busy waits" as they all waited for access to the shared temporary tablespace, as indicated by the P1 value.
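For anyone reproducing the diagnosis, this is roughly how the waiting sessions can be spotted (a sketch using standard GV$SESSION columns; the event names are the ones we saw):

```sql
-- Sketch: list sessions currently waiting on the two events above.
-- For "buffer busy waits", P1 is the file number, which we matched
-- against the tempfiles of the shared temp tablespace.
SELECT inst_id, sid, username, event, p1text, p1, p2text, p2
FROM   gv$session
WHERE  event IN ('enq: SS - contention', 'buffer busy waits')
ORDER  BY inst_id, sid;
```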
Searching for ways to reduce these wait events led us to local temporary tablespaces. So we set up a local temp tablespace in our development environment...
CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_reporting_local
TEMPFILE '+DTADVQ1/.../TEMPFILE/temp_reporting_local.dbf'
SIZE 10G AUTOEXTEND OFF
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
Assigned it to REPORT_USER as its default local temp tablespace...
ALTER USER report_user LOCAL TEMPORARY TABLESPACE temp_reporting_local;
SELECT username, default_tablespace, temporary_tablespace, local_temp_tablespace
FROM DBA_USERS
WHERE username = 'REPORT_USER';
USERNAME     DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE  LOCAL_TEMP_TABLESPACE
REPORT_USER  TBE_REPORT_USER_01  TEMP_REPORTING        TEMP_REPORTING_LOCAL
Then we ran some large queries while logged in as REPORT_USER. The queries fail with the same error message as before: "ORA-01652: unable to extend temp segment by 256 in tablespace TEMP_REPORTING".
Monitoring free space shows that the local temp tablespaces do not appear to have been used at all.
SELECT tablespace_name, inst_id,
tablespace_size/1024/1024 AS total_mb,
allocated_space/1024/1024 AS allocated_mb,
free_space/1024/1024 AS free_mb
FROM dba_temp_free_space
WHERE tablespace_name LIKE 'TEMP_REPORTING%';
TABLESPACE_NAME       INST_ID  TOTAL_MB  ALLOCATED_MB  FREE_MB
TEMP_REPORTING                 10240     10240         0  (assumed zero at the moment the report died)
TEMP_REPORTING_LOCAL  1        10240     2             10238
TEMP_REPORTING_LOCAL  2        10240     2             10238
A hash join exceeded the 10GB of shared temp but did not use any of the Local temp.
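To see where spills are actually landing while the queries run, the per-instance temp segment usage can also be watched directly (a sketch; the MB arithmetic assumes an 8 KB block size):

```sql
-- Per-instance temp usage by tablespace and segment type
SELECT inst_id, tablespace, segtype,
       SUM(blocks) * 8 / 1024 AS used_mb   -- assumes 8 KB blocks
FROM   gv$tempseg_usage
GROUP  BY inst_id, tablespace, segtype
ORDER  BY inst_id, tablespace;
```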
So, how can we get these queries to use local temp once shared temp "overflows"? I'm thinking it is because the database cannot split the hashed results between the two tablespaces, which makes me wonder how it will ever use the local temp tablespaces.
Second question: why was it not designed to use the local temp first and then overflow into the shared temp if needed? That seems like the more logical approach if you want to mitigate these wait events.
This is as documented:
"Once a tablespace has been selected for spilling during query processing, there is no switching to another tablespace. For example, if a user has a shared temporary tablespace assigned and during spilling it runs out of space, then there is no switching to an alternative tablespace. The spilling, in that case, will result in an error."

https://docs.oracle.com/en/database/oracle/oracle-database/19/racad/introduction-to-oracle-rac.html#GUID-E84E3BA2-7F36-406B-B630-CD651E0C856F__TEMPORARYTABLESPACEHIERARCHY-054A7B94

Spreading one statement's temp data across shared and local tablespaces would introduce a lot of complexity. The database would need to keep track of which segments are where, which adds overhead. In the worst case it would need to move data from one to the other, which adds a lot of extra work. For example, imagine a parallel process that starts using local temp and then needs to coordinate with processes on other nodes. That would mean moving the local temp data to shared temp so the other nodes can see it; a costly operation.
So it's better to only allow a SQL statement to use one temporary tablespace.
Local temp tablespaces are designed for a different use case: read-only (reader) instances. In that scenario the instances cannot write to the shared temp tablespace at all.
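That is also why the CREATE statement has two variants: the FOR ALL form used in the question creates local temp on every node, while the FOR RIM form targets only the reader nodes, which is the scenario the feature was built for. A sketch (the tablespace name and tempfile path here are made up):

```sql
-- Local temp tablespace for reader (read-only) instances only
CREATE LOCAL TEMPORARY TABLESPACE FOR RIM temp_local_rim
  TEMPFILE 'temp_local_rim.dbf'
  SIZE 10G AUTOEXTEND OFF
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
```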
Again, as the docs say:
"Oracle uses local temporary tablespaces to write spill-overs to the local (non-shared) temporary tablespaces which are created on local disks on the reader nodes." (emphasis mine)