DBA

A Higher-Level Perspective on SQL Tuning, Part 2

Identifying and diagnosing poorly performing SQL

By Connor McDonald

May/June 2019

In my previous article on SQL tuning, I discussed the importance of stepping back from focusing on a purely technical approach to tuning SQL. Before tuning SQL statements, the practitioners charged with the task must first understand the business requirement of the application components that execute that SQL, so that they can validate the correctness of the SQL statements to meet that requirement.

Even when the SQL’s correctness is validated, it is still important to have the right focus when tuning an application. As I mentioned in the previous article, no business has a “Let’s make sure SQL runs fast” model; they have core functions that drive revenue. It just so happens that these core functions are often backed by IT applications that execute SQL statements.

Hence the processing cost of a SQL statement is not linked to business success unless that cost has an impact on the delivery of those core business functions, and the real task of a database developer is to tune the delivery of the core functions. For example, if a daily report takes two hours to execute and a consequence is that critical information is not available to management for making strategic decisions about business operations, then a “tuning” exercise might be simply to automate the running of the report before standard working hours, so that the report output is available when managers come into the office. Although it is seductive to focus solely on tuning code—after all, most developers chose their IT careers to become coders—the delivery of business functionality must be the driving force for tuning efforts. It can also be much more rewarding for you as a developer to think this way, because when you focus on ensuring that users of the application are successful in performing their business functions, the same users will sing your praises when you improve their experience with the applications you build.

Tuning the User Experience

The best way to identify the pain points for users of your application is also the simplest. Communicate directly with the users of your application. Nothing impresses business users more than taking the time to sit alongside them to get a firm understanding of where tuning efforts are best focused, and often this can even reduce the amount of effort required. For example, I had a client who detected huge response times on a key screen of its application, and I fully expected venting of anger about this performance when I visited. However, it turned out that the organization’s business workflow meant that the slow performance didn’t have any impact at all—the staff just launched the application each morning at the beginning of the departmental standup meeting. In fact, the most important enhancement request was simply to change the text on one of the buttons to make its function less ambiguous for the junior staff. A five-minute fix, and they were wowed by the quick turnaround time and the “personal touch” of having an IT person liaise with them directly. The other benefit of direct contact with your users is that even if a performance problem cannot be solved immediately, or perhaps ever, they normally appreciate that you are at least reaching out to them and are aware of the things that matter to them.

Finding Problematic SQL

If it is possible to interact directly with application users and have them demonstrate a performance issue on a particular business function, the task becomes identifying whether the issue is caused by slow SQL execution and, if so, what those poorly performing SQL statements are. The definitive tool for performance capture and analysis is Oracle’s SQL trace capability. Tracing is enabled just before execution of the business function and disabled immediately afterward, thus capturing only the time frame and the SQL executions of interest. A detailed look at tracing is beyond the scope of this article, but Arup Nanda provides an introduction to tracing in an earlier Oracle Magazine article.
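
As a hedged sketch (not from the original article), tracing can be switched on and off for the current session with the DBMS_SESSION package, or for another session with DBMS_MONITOR; the SID and SERIAL# values below are placeholders:

SQL> exec dbms_session.session_trace_enable(waits=>true, binds=>false)

SQL> -- have the user perform the slow business function, then:
SQL> exec dbms_session.session_trace_disable

SQL> -- or trace a specific application session by its identifiers
SQL> exec dbms_monitor.session_trace_enable(session_id=>123, serial_num=>456, waits=>true)
SQL> exec dbms_monitor.session_trace_disable(session_id=>123, serial_num=>456)

The resulting trace file can then be summarized with a utility such as TKPROF.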

As powerful as tracing is, it comes with some challenges:

  • In the modern world of stateless applications running in connection pools, tracing a single activity within an application may involve tracing multiple database sessions, each of which is also performing tasks for other application requests concurrently. Extracting the SQL statements pertaining to just the application activity of interest requires extensive instrumentation throughout the application.
  • The performance issue may not be easy to reproduce. For example, if a month-end task runs slowly, it may be another month before a trace can be performed to capture the issue again. Or it may be a performance issue that arises intermittently and thus cannot be reproduced on demand.
  • Sometimes an impatient user will not be pleased about being asked to repeat (potentially multiple times) an activity that is causing frustration.

If tracing cannot be successfully utilized, there are still other options available for homing in on the SQL statements that are directly impacting business functions.

Automatic Workload Repository. The Automatic Workload Repository feature of Oracle Database collects and analyzes performance statistics in the database to assist with problematic SQL detection. The gathered data is stored both in memory and in the database, and the data includes SQL execution and performance statistics. An Automatic Workload Repository snapshot is a capture of the state of a database at a point in time, so when you’re analyzing a performance issue encountered by a user, a delta of the snapshots before and after the time when the performance issue occurred can be used to discover problematic SQL statements that most probably were related to the issue. Much as with the trace methodology, the ideal scenario is to take snapshots immediately before and after replicating a performance problem.
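
As a hedged sketch (assuming the required licensing is in place), snapshots can be taken manually around a reproduction of the problem with the DBMS_WORKLOAD_REPOSITORY package, and a report then generated on the delta between them:

SQL> exec dbms_workload_repository.create_snapshot

SQL> -- reproduce the performance problem, then:
SQL> exec dbms_workload_repository.create_snapshot
SQL> @?/rdbms/admin/awrrpt.sql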

Active Session History. The Active Session History feature of Oracle Database samples active database sessions each second, writing the data to memory and persistent storage. An active session is a session that is on the CPU or waiting on a non-idle wait event, rather than sitting idle waiting for its next request from an application or user. Slow-running SQL statements can also be detected with Active Session History, because a session running a SQL statement for, say, 10 seconds will have 10 consecutive entries in the V$ACTIVE_SESSION_HISTORY view for the same SQL_ID/SQL_EXEC_ID column pairing.
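
As a hedged sketch of that detection idea (the 15-minute window and 10-sample threshold are arbitrary choices, not from the article):

SQL> select sql_id, sql_exec_id, count(*) as samples
  2  from   v$active_session_history
  3  where  sample_time > sysdate - interval '15' minute
  4  group by sql_id, sql_exec_id
  5  having count(*) >= 10
  6  order by samples desc;

Because sessions are sampled once per second, the sample count approximates the number of seconds each execution has been active.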

Automatic Workload Repository and Active Session History are powerful tools, because they allow for after-the-fact problem analysis. A business user may report a performance problem that occurred a few hours beforehand. Even if that problem cannot be replicated, the Automatic Workload Repository and Active Session History data can often be used to detect the problematic SQL statements for that user.

However, both Active Session History and Automatic Workload Repository are part of Oracle Diagnostics Pack, an additional license option for Oracle Database Enterprise Edition, so they may not be available for your performance tuning analysis. If that’s the case, you can use the dynamic performance views to find poorly performing SQL statements in a database. Marrying this information with a business user’s performance issue is more difficult, but with a little luck, the most resource-intensive SQL statements will have some identifying characteristics that allow a basic level of confidence in mapping them to business functions. The V$SQL performance view lists SQL statements in the shared SQL area and presents numerous performance measurements alongside each, as shown in Listing 1.

Here are the most commonly used V$SQL columns for measuring the performance of SQL statements (a sample query using them follows the list):

  • DISK_READS helps you find SQL statements performing large amounts of physical I/O.
  • BUFFER_GETS measures logical I/O, which is the most common cause of high CPU costs for SQL statements.
  • EXECUTIONS shows whether a SQL statement is being run an excessive number of times.
  • ROWS_PROCESSED is useful for INSERT, UPDATE, and DELETE statements as an indicator of large redo costs.
  • ELAPSED_TIME is the cumulative elapsed time across all executions of a SQL statement and maps most closely to user response time.
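
As a hedged sketch (the ranking column and row limit are arbitrary, and the FETCH FIRST syntax requires Oracle Database 12c or later):

SQL> select sql_id, executions, disk_reads, buffer_gets,
  2         rows_processed, elapsed_time
  3  from   v$sql
  4  order by buffer_gets desc
  5  fetch first 10 rows only;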

Those columns provide the performance characteristics of each SQL statement. They can be combined with other V$SQL columns to tie SQL statements back to their root business functions:

  • PARSING_USER_ID narrows the SQL statement down to the schema(s) your application is running under. This equates to the USER_ID column in the DBA_USERS dictionary view.
  • SERVICE is the database service your application is connecting to. Sadly, services are a dramatically underutilized feature of Oracle Database, so many applications simply use the default service name equal to the database name. See the documentation for details on how to take advantage of database services.
  • The MODULE and ACTION columns let you track SQL statements back to business functions at whatever level of granularity the application developer has provided, if your application is instrumented to set MODULE and ACTION values via the DBMS_APPLICATION_INFO package or via the end-to-end metrics capabilities in the various Java, Python, and other drivers. (A sketch of this instrumentation follows the list.)
  • The PROGRAM_ID and PROGRAM_LINE# columns contain the OBJECT_ID (as recorded in DBA_OBJECTS) and the line number of the PL/SQL stored program unit from which a SQL statement originated, if the statement came from such a unit. The sidebar “A Higher-Level Perspective on SQL Tuning, Part 2, PL/SQL” includes an example that uses these V$SQL columns to track SQL. (Refer to the SmartDB Resource Center for details on the benefits of controlling all database access via a secure PL/SQL shell.)
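
As a minimal sketch (the module and action names are illustrative placeholders), an application can label its database work with DBMS_APPLICATION_INFO, and those labels then appear alongside the SQL in V$SQL:

SQL> begin
  2    dbms_application_info.set_module
  3      ( module_name => 'PAYROLL',
  4        action_name => 'MONTH_END_RUN' );
  5  end;
  6  /

SQL> select sql_id, action, elapsed_time
  2  from   v$sql
  3  where  module = 'PAYROLL';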

Although V$SQL is a very powerful view for discovering information about SQL statements currently in the shared SQL area, there is a risk to using it. Querying V$SQL occasionally for a particular performance issue that occurs rarely is fine, but if you are diagnosing a system that is under severe duress, then hammering away at V$SQL can create problems of its own. A system under duress is quite possibly experiencing extreme contention in its shared memory structures, and by aggressively querying V$SQL, you may be adding to that workload and/or contention. This appears to be a catch-22 situation: How do you diagnose a struggling system if diagnosing it via queries to V$SQL will make it struggle more? For such cases, there is an alternative performance view, V$SQLSTATS, that contains almost as much information as V$SQL. (Below is a summary, but see the documentation for a more detailed description of V$SQLSTATS.)

V$SQLSTATS displays basic performance statistics for SQL cursors and contains one row per SQL statement (that is, one row per unique value of SQL_ID). The column definitions in V$SQLSTATS are identical to those in the V$SQL and V$SQLAREA views. However, V$SQLSTATS differs from V$SQL and V$SQLAREA in that it is faster and more scalable and its data is retained longer (the statistics may still appear in this view even after the cursor has been aged out of the shared pool). Note that V$SQLSTATS contains a subset of the columns that appear in V$SQL and V$SQLAREA.

My general approach is to find poorly performing SQL statements by querying V$SQLSTATS first and then, if I need more information about individual SQL statements, such as their MODULE or ACTION column values, to access V$SQL using the SQL_IDs of just those poorly performing statements. Using V$SQLSTATS isn’t an invitation to hammer away continuously at the database, but by avoiding V$SQL, you steer clear of a critical shared-memory structure that many database sessions are typically competing for. V$SQLSTATS has been available since Oracle Database 10g.
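
As a hedged sketch of this two-step approach (the row limit is arbitrary, and &worst_sql_id is a SQL*Plus substitution placeholder for a SQL_ID found in the first step):

SQL> select sql_id, elapsed_time, executions
  2  from   v$sqlstats
  3  order by elapsed_time desc
  4  fetch first 5 rows only;

SQL> select sql_id, module, action, sql_text
  2  from   v$sql
  3  where  sql_id = '&worst_sql_id';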

The Real SQL

To recap, the developer should always focus on the specific business functions for the user and then use the trace methodology and tools such as Automatic Workload Repository, Active Session History, and the performance views to identify the SQL statements that are the cause of poor response times for those business functions.

Now that the SQL statements have been identified, I recommend performing one extra step before diving into tuning techniques, and that is to expand the SQL text. Expanding a SQL statement reveals what the database will truly be running when it is presented with the text of that SQL statement. For example, if I have a simple query

SQL> select * from EMP_LIST;

it is only because I previously created a synonym via

SQL> create synonym EMP_LIST for SCOTT.EMP;

that I know that I am actually querying the SCOTT.EMP table. Rather than taking each referenced object in the SQL statement and digging into the data dictionary to see if that object is a synonym or a view and recursively continuing that process until all objects are resolved, I can use the supplied DBMS_UTILITY database package to expand a SQL statement to reveal what the database will ultimately “see.” Listing 2 uses the above query as an example.

Listing 2: Expanding the SQL text of a statement

SQL> variable c clob
SQL> begin
  2    dbms_utility.expand_sql_text
  3     ( 'select * from EMP_LIST',:c);
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> print c

C
----------------------------------------------------------------------
SELECT
  "A1"."EMPNO" "EMPNO","A1"."ENAME" "ENAME","A1"."JOB" "JOB",
  "A1"."MGR" "MGR","A1"."HIREDATE" "HIREDATE","A1"."SAL" "SAL",
  "A1"."COMM" "COMM","A1"."DEPTNO" "DEPTNO"
FROM "SCOTT"."EMP" "A1"

Expanding the SQL text may seem like a waste of time and effort, but here is another example of why looks can be deceiving when you’re faced with a SQL statement for tuning. Listing 3 shows a view that joins two database dictionary objects, appropriately named LOOKS_SO_INNOCENT.

Listing 3: View definition for LOOKS_SO_INNOCENT

SQL> create or replace
  2  view LOOKS_SO_INNOCENT as
  3  select
  4    o.owner,
  5    o.created,
  6    s.bytes,
  7    s.tablespace_name
  8  from
  9    dba_segments s,
 10    all_objects o
 11  where o.owner = s.owner
 12  and   o.object_name = s.segment_name;

View created.

When I expanded the SELECT * FROM LOOKS_SO_INNOCENT query via the DBMS_UTILITY package, the resulting formatted SQL statement was more than 1,000 lines long, or approximately 25 printed pages of standard text (and available in Listing 4). It can be puzzling to see a single-line query performing poorly, but not so much when that query expands to 1,000 lines.

Summary

In the previous article in this series, I explained the importance of validating that a SQL statement is correct in terms of its construction and is solving the intended business requirement before diving into the various tuning techniques at your disposal.

Now continuing in the vein of taking a holistic approach, I’ve explained the importance of focusing on tuning from the perspective of the business user rather than the SQL statements, because the worst-performing SQL in the database is not necessarily related to the business function that is causing the most frustration for a user. Remember that the goal is to tune the user experience, and that is what drives identification of the SQL statements to be tuned.

In the next article, I’ll explore how I proceed after a set of poorly performing SQL statements has been identified.

Next Steps

LEARN more about SQL tuning.

TRY Oracle Autonomous Transaction Processing.

DOWNLOAD Oracle Database 18c.

READ "A Higher-Level Perspective on SQL Tuning." 

Illustration by Wes Rowell
