SQLQC™ helps to Find, Analyze and Solve SQL Performance Issues


 
 
SQLQC competes with Apptune and Detector

SQLQC™ - SQL Performance Tool for Db2
A powerful tool that helps improve SQL performance by finding, analyzing, and remedying costly SQL statements. It also helps you understand performance over time.

Product Details

Increasingly complex workloads require new approaches to SQL quality control and performance tuning. Many different SQL statements are continuously passed through the system. This makes it difficult to determine which statements have the most potential for optimization or whether their resource usage is justified. Applications today generate SQL statements on the fly. This calls for a new approach capable of measuring and evaluating these statements in an uncharted and constantly changing environment.

SQLQC stands for continuous SQL quality control. It continually analyzes the current Db2 workload and provides recommendations for improvement. It is not so much a question of whether an SQL statement is slow or uses excessive CPU – this information is provided reliably and adequately by existing monitoring tools – but rather of promptly determining whether this represents a problem. Is the statement executed often or only rarely? If it is a problem, what is the cause and how can it be solved? SQLQC converts performance data into meaningful information and makes specific proposals for improvement. SQLQC answers the frequently asked question: “Why did this statement run faster in the past than it does today?” Or, in other words: “What has changed since then?”

Figure 1 shows historical performance data for a specific statement. We have three measurement points with performance-related data such as CPU time, elapsed time, I/O, etc.


Figure 1 – History of performance data

SQLQC also provides a history of access path information. This makes it easy to analyze, for example, which indexes were used by the optimizer in the past, which is particularly useful when diagnosing access path related performance problems of dynamic SQL statements.

The continuous combination of performance data and related access path information is a unique feature of SQLQC.


Figure 2 is an example of how SQLQC presents historical access path information. Both current and previous access paths are displayed. It is easy to detect access path changes for any given SQL statement.


Figure 2 – Access path history

SQLQC stands for quality control of dynamic and static SQL. It constantly compares optimization data with actual performance measurements and advises you, for example, if the run time for a particular statement or group of statements is x times higher than the optimizer estimate, which indicates that the catalog statistics are effectively out of date. SQLQC detects inefficient SQL and helps with optimization. It diagnoses problematic situations such as RID errors, cache thrashing, locking, access path changes, outdated runstats, required reorgs, and more.
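
To make the idea of such a check concrete, the following sketch compares measured runtimes with optimizer estimates and flags statements that exceed the estimate by a configurable factor. The field names, threshold, and sample data are purely illustrative and are not taken from SQLQC.

    # Illustrative sketch: flag statements whose measured elapsed time exceeds
    # the optimizer estimate by a configurable factor. Field names are hypothetical.
    def find_estimate_outliers(statements, factor=5.0):
        outliers = []
        for stmt in statements:
            estimate = stmt["estimated_elapsed_sec"]
            actual = stmt["actual_elapsed_sec"]
            if estimate > 0 and actual > factor * estimate:
                outliers.append((stmt["stmt_id"], round(actual / estimate, 1)))
        return outliers

    workload = [
        {"stmt_id": "Q1", "estimated_elapsed_sec": 0.02, "actual_elapsed_sec": 0.45},
        {"stmt_id": "Q2", "estimated_elapsed_sec": 0.10, "actual_elapsed_sec": 0.11},
    ]
    print(find_estimate_outliers(workload))  # Q1 stands out: the statistics may be stale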

SQLQC compares the cost of retrieving rows over time. One of the most important criteria for finding candidates for improvement is the number of “getpages per processed row”.

This information reveals performance problems that can be solved by reorganizing the object in question. At the same time, it helps identify objects where a reorg would not lead to performance improvements. This information can be used to avoid unnecessary reorganizations.

Figure 3 shows an example: a sharp increase in getpages per processed row. This can have various causes: a reorg may be overdue, the amount of data may have grown considerably, or the access path may have changed.

It is easy to detect if the amount of data has increased. Detecting that an access path has changed is more difficult without appropriate tools. The historical access path information that is readily available in SQLQC can identify these changes immediately.


Figure 3 – Getpages per processed row
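
As a rough illustration of this metric, the sketch below computes getpages per processed row for a series of measurement points and flags a sharp increase. The numbers and the threshold are invented for the example; the actual evaluation in SQLQC is more sophisticated.

    # Conceptual sketch: track getpages per processed row across measurement points
    # and flag a sharp increase. All numbers and the threshold are illustrative.
    def getpages_per_row(measurements):
        return [m["getpages"] / max(m["rows_processed"], 1) for m in measurements]

    def heavy_increase(ratios, threshold=3.0):
        # True if the latest ratio is 'threshold' times the first one
        return len(ratios) >= 2 and ratios[-1] > threshold * ratios[0]

    history = [
        {"getpages": 1200, "rows_processed": 1000},   # earlier measurement
        {"getpages": 1500, "rows_processed": 1000},
        {"getpages": 9800, "rows_processed": 1000},   # latest measurement
    ]
    ratios = getpages_per_row(history)
    print(ratios, heavy_increase(ratios))  # a jump like this suggests reorg, data growth, or an access path change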

SQLQC’s INDEX ADVISOR component actively helps with SQL workload tuning. Appraisal of SQL can be made on the basis of, for example, highest elapsed time or highest CPU usage. Historical recording allows observation of the evolution of individual SQL statements over a specific period of time, thereby enabling detection of intermittent performance bottlenecks. Through SQLQC’s simulation feature it is possible to evaluate statement performance before applying changes in production. Even completely new statements can be easily assessed by developers.

Figure 4 is an example of an index recommendation. The index advisor is not only used for workload analysis. In this case, it detected that two columns should be part of an index but are not.

Users can modify and test SQL statements directly in the SQLQC workbench and get index advice immediately. This works for SQL statements that are part of the processed workload, as well as for new statements created on the fly by the user.

Figure 4 – Index recommendation

SQLQC analyzes the workload automatically. It checks SQL statements for missing or inefficient indexes and related issues (a simplified sketch of this kind of check follows the list):

  • Missing indexes
  • Missing columns in indexes
  • Wrong sequence of columns in indexes
  • Stage 2 attributes
  • and others
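
The following very simplified sketch illustrates the kind of check behind such findings: the columns referenced in a WHERE clause are compared with the column lists of the available indexes. All table, column, and index names are hypothetical, and a real access path analysis considers far more factors (predicates, filter factors, stage 1/stage 2 classification, and so on).

    # Very simplified illustration: compare WHERE-clause columns with index definitions.
    # Index and column names are hypothetical.
    def check_indexes(where_columns, indexes):
        findings = []
        for name, index_columns in indexes.items():
            missing = [c for c in where_columns if c not in index_columns]
            if missing:
                findings.append(f"{name}: columns {missing} missing from index")
            else:
                # all columns present -> check whether they are the leading columns
                leading = index_columns[: len(where_columns)]
                if sorted(leading) != sorted(where_columns):
                    findings.append(f"{name}: columns present but not leading -> check column sequence")
        return findings

    indexes = {"IX_ORDERS_1": ["ORDER_DATE", "CUSTOMER_ID", "STATUS"]}
    print(check_indexes(["CUSTOMER_ID", "STATUS"], indexes))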

Figure 5 is an example of how SQLQC determines the access path for a new statement, i.e., a statement for which no historical information is available. Index recommendations are generated in the same process, as seen in Figure 4. This information is available with a single click.

Figure 5 – Access path display

Any negative findings can immediately be investigated: Users can create virtual indexes within the SQLQC workbench and test them before going into production. Creating new virtual indexes is very easy. SQLQC suggests a new index definition, and users can accept it or modify it according to their requirements.

Figure 6 shows the process of creating a virtual index within SQLQC. The input fields are either pre-set with values recommended by SQLQC or use the default value, indicated by -1. All attributes can be changed before creating the virtual index.

Figure 6 – Creating a virtual index

SQLQC and conventional monitors

SQLQC does not compete with existing monitoring tools. The objective is rather the unattended and continuous supervision, assessment, and improvement of application performance. We focus on identification, reporting, and guidance in problem solving. A classical monitor provides information about the behavior of threads – how long the thread runs, which activities occupy how much time, I/O waits, CPU usage, lock waits, etc. One also gets some measurement data on the SQL statements being processed, but this data only flashes by briefly. The real questions remain: “Which statements are causing problems?” and, more importantly, “Why?” The causes can only be identified after switching on additional traces – if the monitor allows that. With classical monitors the assessment is made from the view of thread activity. The analysis is largely manual and requires concentrated effort on the part of experts.

SQLQC takes a different approach. The individual SQL statements are the center of attention. SQLQC answers the question: “Where in my workload are the real problem cases?” This includes identifying inefficient access paths due to incorrect runstats, detecting retrieval problems resulting from inefficient indexes, detecting sudden changes in access paths after running runstats or after Db2 system maintenance, and comparing optimizer estimates with actual measurement data over a period of time. Tracking the relocation of synchronous I/O to asynchronous I/O helps to identify housekeeping problems, such as missing or overdue reorgs, as well as bottlenecks in the RID pool that lead to tablespace scans which can be difficult to detect. Additionally, by automatically grouping statements of similar type that run without parameter markers, the user can finally identify high CPU usage caused by seemingly harmless SQL.

Figure 7 shows the different types of analysis functions within SQLQC. Each report uses predefined criteria to find the “top” statements for the selected report. The top statements in a report are typically the statements that require attention.

Note that SQLQC does not show individual executions of a statement. Instead, SQLQC always builds groups of statements that are syntactically identical, even if they are running without parameter markers. SQLQC assigns a unique key to each statement, which allows it to track that statement over time.

Figure 7 – Analysis functions

In a typical Db2 subsystem, thousands of statements are executed every hour. The challenge is to identify those statements that can be improved. SQLQC provides a history of all statement executions. This information can be used in a number of reports:

  • Getpages per processed row
  • Inefficient search criteria
  • Lock time
  • RID issues
  • Bind problems, such as repeatable read
  • Missing indexes
  • and many others

These examples show the main focus of SQLQC: it automatically detects and classifies problematic SQL statements. Simple, clear criteria indicate where tuning potential exists and what form it takes, backed by accurate assessments. The clear classification also points to possible solutions: runstats, reorg, access path, RID pool, etc.

Impact Analysis

“Virtual Index” allows changes to indexes to be simulated without the need to actually alter the system. Those who are familiar with SQL tuning know the problem: an index modification may speed up a specific query, but its impact on the system as a whole is unknown. Analyzing the impact of deleting supposedly superfluous indexes is even more complex. The work involved in regularly performing such analyses manually makes them prohibitively expensive. The question of whether adding or removing an index will increase or reduce CPU usage can be answered directly by the Index Advisor without changing the system. The recommendations of the Index Advisor can instantly be simulated, thereby determining the impact on a single SQL statement or even the whole environment.

Figure 8 shows an example of the impact analysis.

The impact analysis simulates the behavior of an existing workload with virtual or modified indexes. For the full workload and for each individual statement, SQLQC can analyze the result without actually modifying any real objects.

Figure 8 – Impact analysis

The impact analysis is based on virtual indexes. Each index change can be simulated, whether it is the creation, modification or deletion of an index.

SQLQC will recalculate the setup with the new or modified indexes. This is a great help in estimating the final result of index changes. As an example, consider an index where a key column is moved from position 4 to position 2 according to a proposal made by the index advisor. SQLQC simulates the impact that this change will have on the workload and determines if the overall effect is positive or negative.

At its highest level, the impact analysis shows the estimated total CPU cost for the selected workload. It is possible to drill down to statement level, where SQLQC makes a prediction of how the index modification will change the CPU consumption of each statement.
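
The principle behind such a workload-level figure can be sketched as follows: per-statement CPU predictions with and without the simulated index change are summed up, and the delta is reported both for the workload and per statement. The record layout and numbers are invented for illustration and do not reflect SQLQC internals.

    # Sketch of the idea behind a workload-level impact figure: sum per-statement
    # CPU predictions with and without a simulated index change. Numbers are made up.
    def impact_analysis(predictions):
        total_before = sum(p["cpu_before"] for p in predictions)
        total_after = sum(p["cpu_after"] for p in predictions)
        per_statement = [(p["stmt_id"], p["cpu_after"] - p["cpu_before"]) for p in predictions]
        return total_after - total_before, per_statement

    delta, details = impact_analysis([
        {"stmt_id": "Q1", "cpu_before": 4.2, "cpu_after": 1.1},   # benefits from the new index
        {"stmt_id": "Q2", "cpu_before": 0.8, "cpu_after": 0.9},   # pays a small maintenance penalty
    ])
    print(f"workload CPU delta: {delta:+.1f}s", details)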

SQL History

SQLQC is a tuning tool which not only supports DBAs in their daily work, but can also be of interest to application developers. With SQLQC, developers have a tool that allows them to check the quality of their SQL on a daily basis. SQLQC can compile a collection of all executed SQL over an arbitrary period of time, which helps in analyzing SQL performance over time (CPU, run time, I/O, etc.). Changes in SQL access paths can be identified even when there are months between runs. The data collection process does not require expensive trace logs, so there is no additional overhead. The SQL is also itemized according to its class 3 wait time.

The history function is available in reports and in graphical form:

Figure 9 shows a sample history report. These reports can be downloaded in PDF format and archived for later comparison. There are different types of historical comparisons available: CPU usage, getpages, index usage (as shown in Figure 9), and many others.

Figure 9 – History report

Reports show which indexes were used by Db2, while graphical charts show the changes in elapsed time for any given statement over time (Figure 10).

The combination of the SQLQC reports in both text and chart form (Figure 10) provides a full picture of the history of a statement.

Figure 10 – Graphical display of performance data

This information is always combined with access path history. This means that complete information is available for:

  • The statement to be analyzed
  • The statement’s historical data (CPU time, elapsed time, locks, I/O, etc.)
  • Historical access path information

SQL Grouping

SQLQC analyzes thousands of SQL statements and groups them according to specific criteria. It reports on groups with comparable profiles. In this way, even short, fast SQL statements can be included in optimization planning.

Figure 11 depicts the grouping technique in SQLQC. Statements are normalized, which means that constants are removed – in this case the string literals in the WHERE condition.

The remaining part of the statement is used to create a hash key, which allows SQLQC to identify the statement over time. In other words, when the statement is executed again in the future, it will get the same hash key, even if different string literals are used.

Figure 11 – Statement grouping

From the point of view of Db2, these are two different statements. The grouping function of SQLQC removes constants and normalizes the statement. This allows SQLQC to track statements over time.
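
The following sketch illustrates the grouping principle with deliberately crude normalization: string and numeric literals are replaced, case and whitespace are normalized, and the remaining text is hashed. SQLQC’s actual normalization is more thorough; this only demonstrates why two statements with different literals end up in the same group.

    import hashlib
    import re

    # Crude illustration of statement grouping: strip string and numeric literals,
    # normalize case and whitespace, then hash the remaining text.
    def statement_key(sql):
        normalized = re.sub(r"'[^']*'", "?", sql)          # replace string literals
        normalized = re.sub(r"\b\d+\b", "?", normalized)   # replace numeric literals
        normalized = " ".join(normalized.upper().split())  # normalize case and whitespace
        return hashlib.sha256(normalized.encode()).hexdigest()[:16]

    a = "SELECT NAME FROM CUSTOMER WHERE CITY = 'BERLIN' AND STATUS = 1"
    b = "select name from customer where city = 'MUNICH' and status = 2"
    print(statement_key(a) == statement_key(b))  # True: both map to the same group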

Zoom Function

The zoom function of the GUI simplifies the analysis of relationships by revealing additional information. The user can determine which tables are accessed by which SQL statements on an ad hoc basis without using extra resources. The workstation component lists all relevant information in a clearly laid out format.

The automated and continuous examination of static and dynamic SQL in various environments distinguishes SQLQC from conventional performance monitors. The tool simplifies performance tuning and saves the time-consuming analysis of performance traces, runstats, index inspection, and so on. Complex questions and hidden problems can be solved with a few mouse clicks. SQLQC supports database administrators and application developers in guaranteeing the quality of their work in all development and roll-out cycles, as well as in the continuing oversight of production.

Figure 12 is an example of a specific type of zooming in SQLQC. Other monitors typically present SQL statements, users, or packages. With SQLQC it is possible to analyze statements at the table level. This provides insight into questions such as which tables are responsible for the highest share of CPU consumption.

Figure 12 – Zooming

The zoom function allows the user to start reporting on any object type:

  • Users
  • Tables
  • Statements
  • Packages

This can be combined with filters. For example, it is possible to find out which users accessed any given table within a specific time frame, and which statements were executed against the selected table.

A simple double-click brings up more details on the selected object.
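
As a simple illustration of the filter combination described above, the sketch below selects, from a hypothetical execution history, the users that accessed a given table within a time window. The record layout is invented for the example.

    from datetime import datetime

    # Illustrative filter over a hypothetical execution history: which users ran
    # statements against a given table within a time window.
    def users_for_table(history, table, start, end):
        return sorted({
            rec["user"]
            for rec in history
            if table in rec["tables"] and start <= rec["timestamp"] <= end
        })

    history = [
        {"user": "APPUSER1", "tables": ["ORDERS"], "timestamp": datetime(2024, 5, 2, 9, 15)},
        {"user": "BATCH01", "tables": ["ORDERS", "CUSTOMER"], "timestamp": datetime(2024, 5, 2, 23, 40)},
    ]
    print(users_for_table(history, "ORDERS",
                          datetime(2024, 5, 2, 0, 0), datetime(2024, 5, 2, 12, 0)))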

Contact us to schedule a Q&A, Demo, or free trial.

Our Track Record

Customer satisfaction is our goal. Listening to our customers has led to improvements in our products and to increased customer loyalty through personal, responsive support. In these days of impersonal mega-vendors, we strive to provide continually enhanced products that keep pace with new technological advances and help our customers meet their technology goals.