[Figure: SQL Profiling. The SQL Tuning Advisor invokes the optimizer to build a SQL Profile for a high-load SQL statement and stores it in the dictionary; on subsequent compilations the query optimizer uses the profile to execute a well-tuned execution plan.]
statement of a SQL workload, and consolidates them into global advice for the entire SQL workload. The SQL Access Advisor takes into account the level of DML activity on the related objects in its global recommendations. It also recommends other types of access structures like materialized views, as well as indexes on the recommended materialized views.
5. SQL Structure Analysis
Often a SQL statement can be a high-load SQL statement simply because it is badly written. This usually happens when there are different, but not semantically equivalent, ways to write a statement to produce the same result. Knowing which of these alternate forms is most efficient is a difficult and daunting task for application developers, since it requires both deep knowledge of the properties of the data they are querying and a very good understanding of the semantics and performance of SQL constructs. Besides, during the development cycle of an application, developers are generally more focused on writing SQL statements that produce the desired results than on improving their performance.
It is important to note that the Oracle query optimizer performs extensive query transformations while preserving the semantics of the original query. Some of the transformations are based on heuristics (i.e., internal rules), but many others are based on cost-based selection. Examples of query transformations include subquery unnesting, materialized view (MV) rewrite, simple and complex view merging, rewrite of grouping sets into UNIONs, and other types of transformations. SQL profiling improves the outcome of this process by reducing the errors in various cost estimates, thereby improving the cost-based selection of query transformations.
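As a concrete illustration of one such transformation (using a hypothetical schema, not one from the paper), subquery unnesting rewrites a correlated subquery into a join that the optimizer can then reorder and cost like any other:

```sql
-- Original form: correlated subquery (hypothetical tables)
SELECT o.order_id
FROM   orders o
WHERE  o.amount > (SELECT AVG(i.amount)
                   FROM   order_items i
                   WHERE  i.order_id = o.order_id);

-- A semantically equivalent unnested form the optimizer may consider:
SELECT o.order_id
FROM   orders o,
       (SELECT order_id, AVG(amount) avg_amt
        FROM   order_items
        GROUP BY order_id) v
WHERE  v.order_id = o.order_id
AND    o.amount > v.avg_amt;
```

Whether the unnested form wins depends on cost estimates, which is precisely where corrected estimates from SQL profiling influence the cost-based choice.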
However, the query optimizer applies a transformation only when the query can be rewritten into a semantically equivalent form. Semantic equivalence can be established when certain conditions are met; for example, when a particular column in a table has the non-null property. However, these conditions may not exist in the database but instead be enforced by the application. The Automatic Tuning Optimizer performs what-if analysis to recognize missed query rewrite opportunities and makes recommendations for the user to undertake.
There are various reasons related to the structure of a SQL statement that can cause poor performance. Some reasons are syntax-based, some are semantics-based, and some are purely design issues.
1. Syntax-based constructs: Most of these are related to how predicates are specified in a SQL statement. For example, a predicate involving a function or expression (e.g., func(col) = :bnd, col1 + col2 = :bnd) on an indexed column prevents the query optimizer from using an index as an access path. Therefore, rewriting the statement by simplifying such complex predicates can enable index access paths, leading to a better execution plan.
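A minimal sketch of such a rewrite (hypothetical table and index; assuming the bind :bnd is a date truncated to midnight):

```sql
-- The function on the indexed column hire_date disables index access:
SELECT * FROM emp WHERE TRUNC(hire_date) = :bnd;

-- Rewritten as a simple range predicate, an index range scan becomes possible:
SELECT * FROM emp
WHERE  hire_date >= :bnd
AND    hire_date <  :bnd + 1;
```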
2. Semantic-based constructs: A SQL construct such as UNION, when replaced by a corresponding but not semantically equivalent UNION-ALL construct, can result in a significant performance improvement. However, this replacement is possible only if there is no possibility of duplicate rows (e.g., a unique constraint is maintained in the application), or if duplicate rows, when produced, do not matter to the application. If this is the case, it is better to use UNION-ALL, thus eliminating an expensive duplicate elimination operation from the execution plan. Another example is the use of a NOT IN subquery where a NOT EXISTS subquery could have produced the same result much more efficiently.
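Both cases can be sketched on hypothetical tables; the rewrites are valid only under the application-level guarantees described above:

```sql
-- UNION ALL avoids the duplicate-elimination step, assuming the two
-- branches cannot produce overlapping rows:
SELECT order_id FROM online_orders
UNION ALL
SELECT order_id FROM store_orders;

-- NOT EXISTS in place of NOT IN (the two also differ when the
-- subquery column can be NULL):
SELECT c.cust_id
FROM   customers c
WHERE  NOT EXISTS (SELECT 1 FROM orders o
                   WHERE  o.cust_id = c.cust_id);
```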
3. Design issues: An accidental use of a cartesian product, for example, occurs when one of the tables is not joined to any of the other tables in a SQL statement. This can happen especially when the query involves a large number of tables and the application developer is not very careful in checking all join conditions. Another example is the use of an outer join instead of an inner join when referential integrity, together with the non-null property of the join key, is maintained in the application.
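Both design issues can be sketched on hypothetical tables:

```sql
-- Accidental cartesian product: t3 appears in the FROM list but is
-- never joined, so every t3 row is cross-joined with the t1-t2 result:
SELECT t1.a, t2.b, t3.c
FROM   t1, t2, t3
WHERE  t1.id = t2.t1_id;

-- If the application guarantees t1.t2_id is non-null and always matches
-- a t2 row, this outer join can be replaced by a cheaper inner join:
SELECT t1.a, t2.b
FROM   t1 LEFT OUTER JOIN t2 ON t1.t2_id = t2.id;
```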
The SQL structure what-if analysis is performed by the Automatic Tuning Optimizer to detect poor SQL constructs falling into one or more of the categories listed above. This analysis is performed in two steps.
In the first step, the Automatic Tuning Optimizer generates internal annotations to remember the reasons why a particular rewrite was not possible. The annotations include the necessary conditions that were not met, as well as the various choices that were available at that time. For example, when the Automatic Tuning Optimizer explores the possibility of merging a view, it will check the necessary conditions to see if it is logically possible to merge the view. If it is not possible, it will record the reasons for not merging the view. It will also record other alternatives that were available, such as pushing join predicates inside the view to turn it into a LATERAL view.
The second step of the analysis takes place after the best execution plan has been built. The Automatic Tuning Optimizer examines the annotations associated with costly operators in the execution plan. A costly operator can be defined as one whose individual cost is more than 10% of the total plan cost. Using the annotations associated with expensive plan operators, it produces appropriate recommendations. For example, if it was not possible to merge a view because of a rownum predicate (i.e., a row-limiting clause) present in the view, the recommendation would be to move the rownum predicate outside of the view. With each recommendation, a rationale is given in terms of cost improvement.
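The kind of rewrite described by such a recommendation can be sketched as follows (hypothetical tables; note the two forms return the same rows only when the application does not depend on exactly which rows the limit selects):

```sql
-- View merging is blocked: rownum is evaluated inside the view
SELECT v.id, t.val
FROM   (SELECT id FROM big_table WHERE rownum <= 100) v, t
WHERE  v.id = t.id;

-- Recommended form: the limit is applied after the join, so the
-- view can be merged and better join methods become available
SELECT id, val
FROM   (SELECT v.id, t.val
        FROM   big_table v, t
        WHERE  v.id = t.id)
WHERE  rownum <= 100;
```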
Since the implementation of SQL structure recommendations requires rewriting the problematic SQL statements, the SQL structure analysis is much better suited for SQL statements that are being developed but not yet deployed into a production system or packaged application. Another important benefit of the SQL structure recommendations is that they can help educate developers in writing well-formed SQL.
6. Automatic SQL Tuning in the Oracle10g Self-Managing Database
As shown in the previous sections, the main focus of the Automatic SQL Tuning feature is to tune a SQL statement by profiling it and by recommending other tuning actions to the end user. However, the scope of SQL tuning goes far beyond tuning a single statement. Indeed, the SQL tuning task usually starts by identifying high-load SQL. High-load SQL typically represents a small subset of SQL statements (generally a small fraction) that are either consuming a large share of system resources (e.g., more than 80 percent) or account for a large portion of the time spent by a database application to perform one of its essential functions.
In Oracle10g, a substantial amount of development effort and focus has been put into making the database self-managing. Automatic SQL Tuning is an integral part of the manageability framework that was developed for this purpose. The goal is to provide an end-to-end solution to the many SQL tuning challenges faced by database administrators and application developers.
Figure 3 presents a typical illustration of the SQL tuning life cycle as it is now performed in Oracle10g. It includes four key manageability components: AWR (Automatic Workload Repository), ADDM (Automatic Database Diagnostic Monitor), STS (SQL Tuning Set), and STB (SQL Tuning Base). These components are described in detail below.
Figure 3. SQL Tuning Life Cycle in Oracle10g
The SQL tuning life cycle follows the three phases of the Oracle10g self-managing loop: Observe, Diagnose, and Resolve. Each of the components of the self-managing framework (labeled AWR, ADDM, STS and STB in Figure 3) plays a key role in one or more of these three phases.
6.1 Observe Phase
This phase is automatic and continuous in Oracle10g. It provides the data needed for analysis. To enable accurate system performance monitoring and tuning, it is imperative that the system under consideration expose relevant performance measurements. The manageability framework allows for instrumentation of the code to obtain precise timing information, and provides a lightweight yet comprehensive data collection mechanism to store these measurements for further online or offline analysis.
The chief component of the observe phase is the Automatic Workload Repository (AWR). The AWR is a persistent store of performance and system data for Oracle10g. The database collects performance data from in-memory views every hour and stores it in the AWR. Each collection is referred to as a snapshot. A snapshot provides a consistent view of the system for its respective time period. For example, among other things, the AWR identifies and captures, for each time interval, the top SQL statements that are resource intensive in terms of CPU consumption, disk reads, parse calls, memory usage, etc.
AWR is self-managing: based on internal measurements, its overhead is less than 2 percent of the system load. AWR has a standard policy for data retention, but it also accepts user input and, if required, proactively purges data should it encounter space pressure.
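For example, the captured statistics can be inspected through the AWR history views; the sketch below ranks statements by elapsed time between two snapshots (view and column names as documented for the 10g data dictionary; verify against your release):

```sql
SELECT   sql_id,
         SUM(elapsed_time_delta) AS elapsed_time,
         SUM(cpu_time_delta)     AS cpu_time,
         SUM(disk_reads_delta)   AS disk_reads
FROM     dba_hist_sqlstat
WHERE    snap_id BETWEEN :begin_snap AND :end_snap
GROUP BY sql_id
ORDER BY SUM(elapsed_time_delta) DESC;
```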
6.2 Diagnose Phase
The activities in this phase refer to the analyses of various parts of the database system using the data in the AWR or in-memory performance views. Oracle10g introduces a framework for analyzing and optimizing the performance of its respective sub-components, such as the buffer cache, SQL execution, undo management, etc.
At the heart of the diagnose phase is the Automatic Database Diagnostic Monitor (ADDM). ADDM is a central database-wide performance diagnostic engine that optimizes for system throughput by taking a holistic view of the entire database system for a given analysis period. It runs automatically and identifies the root causes of the top performance bottlenecks and excessive resource consumption, along with their exact impact on the workload in terms of time. It also provides a set of recommendations to alleviate the problems detected.
In the case of SQL statements consuming excessive resources, ADDM will recommend the invocation of the SQL Tuning Advisor for those high-load SQL statements. Besides the automatic selection performed by
ADDM, Oracle10g also provides a user-driven mechanism to manually select the set of SQL statements to tune. This manual path (illustrated by the downward arrows on the right side of Figure 3) exists because the user (generally the application developer or the DBA) might have to tune the response time of a subset of SQL statements involved in a critical function of the database application, even if that function accounts for a small percentage of the overall load.
The SQL Tuning Set (STS) feature is introduced in Oracle10g for the user to create and manage the SQL workload to tune. A SQL Tuning Set is a database object that persistently stores one or more SQL statements along with their execution statistics and execution context. The execution context stored with each SQL statement includes the parsing schema name, application module name, list of bind values, and compilation parameters. This enables the system to replicate the runtime environment under which the SQL statement was detected. The execution statistics include elapsed time, CPU time, disk reads, rows processed, statement fetches, etc.
SQL statements can be loaded into a SQL Tuning Set from different SQL sources. The SQL sources include the Automatic Workload Repository, the statement cache, and custom SQL statements supplied by the user. The capability to specify complex filters and rankings for the SQL statements is provided while loading into or reading data from the STS. For example, the user can create a STS storing the top N SQL statements issued by the application module 'order entry', where top N is based on the cumulative elapsed time of each statement.
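The 'order entry' example can be sketched with the DBMS_SQLTUNE package (set name is hypothetical; parameter names follow the documented 10g interface, so verify against your release):

```sql
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'order_entry_sts');
END;
/
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  -- Select the top 20 statements of module 'order entry' from AWR,
  -- ranked by cumulative elapsed time, and load them into the STS.
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap       => :begin_snap,
                   end_snap         => :end_snap,
                   basic_filter     => 'module = ''order entry''',
                   ranking_measure1 => 'elapsed_time',
                   result_limit     => 20)) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'order_entry_sts',
                           populate_cursor => cur);
END;
/
```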
Once created and populated, a SQL Tuning Set becomes the main input of the SQL Tuning Advisor.
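Creating and running an advisor task over an STS can be sketched as follows (hypothetical set and task names; the DBMS_SQLTUNE calls are part of the documented 10g interface):

```sql
DECLARE
  tname VARCHAR2(30);
BEGIN
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(
             sqlset_name => 'order_entry_sts',
             task_name   => 'tune_order_entry');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => tname);
END;
/
-- Inspect the findings and recommendations:
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_order_entry') FROM dual;
```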
6.3 Resolve Phase
The various advisors, after having performed their analyses, provide as output a set of recommendations that need to be implemented or applied to the database. The recommendations may be automatically applied by the database itself, or their application may be initiated manually. This is referred to as the Resolve Phase.
In the context of SQL tuning, the action part includes accepting SQL Profiles recommended by the SQL Tuning Advisor. When a SQL Profile is accepted, it is stored in the SQL Tuning Base (STB). The SQL Tuning Base is an extension of the Oracle dictionary that stores and manages all the tuning actions targeting specific SQL statements.
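Accepting a recommended profile can be sketched as follows (hypothetical task and profile names; ACCEPT_SQL_PROFILE is part of the documented DBMS_SQLTUNE interface):

```sql
DECLARE
  pname VARCHAR2(30);
BEGIN
  -- Store the profile in the SQL Tuning Base; from then on the optimizer
  -- picks it up transparently when compiling the matching statement.
  pname := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
             task_name => 'tune_order_entry',
             name      => 'order_entry_profile');
END;
/
```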
Accepting SQL Profile recommendations closes an iteration of the SQL tuning loop: SQL Profiles will most likely improve the execution plans of the targeted set of SQL statements, hence reducing their overall performance impact on the system. This will be reflected in the performance measurements being collected. The next tuning cycle can then begin with a different set of high-load SQL statements. The process can be repeated several times until the desired performance level is achieved.
7. Experimental Results
The Automatic SQL Tuning feature was evaluated using a decision support workload obtained from one of Oracle's customers, a market research firm. Even though we do not demonstrate it in this paper, Automatic SQL Tuning can also tune OLTP queries equally well. In fact, we used it successfully on several queries from our internal OLTP systems. It is commonly assumed that OLTP queries are very simple, with obvious execution plans, and thus do not offer many optimization opportunities. However, this is generally not true. Some OLTP queries can be very complex, joining more than 20 tables with multiple sub-queries and predicates. In this type of environment, the optimizer can fail to find optimal execution plans. Additionally, most OLTP applications run complex batch and reporting queries. Our SQL tuning methodology can therefore be very effective in tuning OLTP queries.
For this experiment, we chose 73 decision support queries that had the highest impact on the performance of the customer's database system. As a result, the customer and an Oracle consulting team had spent a significant amount of time manually tuning each of these queries. Figure 4 shows the response time of all 73 queries prior to tuning. Throughout this section, graphs show response time in ascending order (i.e., from fastest to slowest) using a logarithmic scale to improve readability. One can observe that without tuning, most queries perform very poorly. The worst response time for a query is almost 2 hours (5,751s), with an average response time of 817s and a cumulative response time (the time to run the entire workload sequentially) close to 16 hours. This was unacceptable to the customer, who had to resort to manual tuning of these SQL statements.
Figure 4. Response Time Without Tuning (response time in seconds per query, log scale)
Most statements were manually tuned using optimizer hints to improve their execution plans. In this particular instance, the Oracle query optimizer was unable to find an optimal execution plan because these SQL statements used complex join predicates (e.g., inequality join predicates like 'T1.C1 between T2.C1 and T2.C2') and had filters on highly correlated columns originating from different tables being joined (i.e., inter-table correlation). The combination of these two factors made it very hard for the Oracle query optimizer to properly estimate the cardinality of some intermediate joins. Hence, the optimizer would sometimes fail to produce a good join order, leading to a sub-optimal execution plan and poor query performance.
Figure 5 below graphs the response time after performing manual tuning. As one can see, manual tuning was able to dramatically improve the response time of most queries in the set. The worst response time was reduced to 275s (instead of the initial 5,751s), with an average response time of 30s (instead of the initial 817s) and a cumulative response time of 2,131s (instead of the initial 16 hours).
Figure 5. Response Time after Manual Tuning (response time in seconds per query, log scale)
The initial set of 73 queries was then stored in a SQL Tuning Set, which was then tuned using the Oracle10g Automatic SQL Tuning feature. For the purpose of this particular test, we decided to implement only SQL Profile recommendations, since our goal was to show how the execution plans for these statements could be improved without performing any SQL rewrite (i.e., altering SQL source code) and without modifying the underlying database schema. Figure 6 presents the new response times after the SQL Profiles recommended by Automatic SQL Tuning were all accepted.
Figure 6. Response Time after Automatic Tuning (response time in seconds per query, log scale)
Overall, the results show dramatic improvements over manual tuning. The maximum response time was reduced from 275s to 59s. The average response time was reduced from 30s to 13s. The cumulative SQL workload response time was less than 15 minutes, instead of the 16 hours before tuning or the 35 minutes after manual tuning. Table 1 below summarizes these results.
                 Average         Maximum         Cumulative
                 Response Time   Response Time   Response Time
No Tuning        817s            5,751s          58,821s
Manual Tuning    30s             275s            2,131s
Auto Tuning      13s             59s             929s

Table 1. Result Summary
These first results are very encouraging and demonstrate that SQL profiling represents a very effective way to empower the query optimizer to find better execution plans. Overall, SQL profiling even surpassed manual tuning.
The last aspect of the benchmark is the performance of the Automatic SQL Tuning process itself. Internally, the goal of SQL profiling is to regulate its time such that, at worst, the time spent to tune a query is no more than the response time of that query before tuning. To achieve this goal, a cost-based and bottom-up tuning approach is used to determine which internal optimizer estimates are worth verifying. This, combined with the use of dynamic and iterative sampling techniques, makes Automatic SQL Tuning very efficient.
Figure 7 validates this goal. On average, the time to tune a query ranged from less than a minute to a maximum of less than two minutes. The entire workload was tuned in a little more than an hour (74 minutes), versus 16 hours to run the set of queries before tuning. This should be contrasted with the significant man-hours spent by domain experts to perform the manual tuning task, making Automatic SQL Tuning a very cost-effective solution.
Figure 7. Automatic Tuning Time (tuning time in seconds per query, log scale)
8. Related Work
Several research groups, commercial databases and tools vendors have tried to solve the SQL tuning problem. Their solutions have concentrated on one of many areas. These include improving the optimizer itself; providing novel data statistics; rewriting SQL statements into semantically equivalent forms; and making recommendations for new indexes to improve performance. However, none of the commercially available query optimizers exploits a selective body of knowledge, built by a learning component, to influence future query plan generation. In Oracle10g, this body of knowledge is encapsulated by SQL Profiles built by the Automatic Tuning Optimizer, our learning component.
The LEO (LEarning Optimizer) research project at IBM [1] [2] corrects errors in cardinality estimates made by the query optimizer by comparing them with the actual values measured at each step of the execution plan. The corrections are computed as adjustments to the query optimizer estimates and stored in dictionary tables. When a SQL statement is compiled, the query optimizer first checks whether any adjustments are available as a result of previous executions of a related query and, if so, applies them. The idea of correcting errors in optimizer estimates to produce a better execution plan is similar to SQL Profiling. However, the two approaches differ in several ways:
1. LEO detects cardinality estimate errors only in the final plan selected by the optimizer. In contrast, SQL profiling error detection is performed while the Oracle optimizer is searching for an optimal plan. As a result, SQL profiling guides the optimizer in its plan search algorithm so that the true optimal plan can be found. On the other hand, LEO tries to find an optimal plan through several iterations, each of which requires a new execution of the SQL statement by the application. Several issues with LEO's iterative error correction model are detailed in [2] and outlined here:
a. Performance of intermediate execution plans is not guaranteed to improve, because of partial error corrections. Indeed, during the learning phase, performance can even degrade, making this method hard to use in production systems. By contrast, the Automatic Tuning Optimizer produces a SQL Profile in a single iteration, without impacting the application.
b. The process of converging to an optimal plan can extend over a long period of time, since a single estimate error can be the source of many other estimate errors. For example, an error in the cardinality estimate of a join between tables A and B will cascade to every join permutation that includes A and B (e.g., (C, A, D, B)). Hence, before finding the optimal join permutation, LEO might have to correct many join permutations, each correction requiring a full execution of the SQL statement by the application. This issue does not exist for SQL profiling, since all relevant estimates are validated during the plan search process, using partial query execution techniques on data samples.
c. Finally, there is no guarantee that LEO will be able to find the optimal plan. For example, two errors can cancel each other out, making the real error impossible to detect. Also, it may not be possible to pinpoint the source of an estimate error in a combination of predicates (e.g., C1<10 and C2>50) because the optimizer chose to evaluate them together, e.g., during a full table scan. Identifying the exact source of an error is important, since it could enable a different access path (e.g., an index range scan). On the other hand, the Automatic Tuning Optimizer judiciously verifies cardinality estimates during the plan search process. In the above example, it will verify in isolation the selectivity of predicates that are access path enablers.
2. Another difference between LEO's approach and our approach is the usage model. SQL Profiling can target a small subset of SQL statements, generally the ones that have the highest impact on the performance of the system. By contrast, LEO will potentially gather corrections for every statement executed in the system, and what is learned could impact many other statements. This could be viewed as an advantage for LEO, since it could learn even for queries executed only once (e.g., purely ad-hoc queries), while SQL profiling cannot do this. On the other hand, the corrections gathered by LEO can be overwhelming, both in terms of storage requirements and the time spent managing them. SQL Profiling maximizes the benefit/overhead ratio, since it can be used in a very selective way to focus on a small subset of important statements while not disturbing the performance of other statements. In addition, the impact of SQL profiling is easier to understand and evaluate, making this feature probably less risky to deploy in a real-world production system.
3. The feedback mechanism used by LEO is defined in very general terms, simply by saying that predicate corrections are stored in the dictionary. In our opinion, this aspect is one of the most challenging: that is, how to represent, store, look up, and manage feedback information. In LEO's approach, it is not clearly explained how cardinality corrections can be applied in cases other than single table estimates, e.g., joins, aggregates, and set operations. By contrast, SQL Profiles allow correction of any type of estimate made by the Oracle optimizer during the plan search process. Also, lookup of a SQL Profile is simply done by computing a signature on the text of the SQL statement, making the feedback retrieval process very efficient.
4. A SQL Profile is a general feedback mechanism to deliver any type of information to the optimizer to influence query plan generation. For instance, in addition to correcting optimizer estimates, we use it to customize the optimization mode of a SQL statement. As far as we know, LEO has no provision for this.
Microsoft SQL Server offers an Index Wizard [5] to provide recommendations to the DBA on the indexes that can potentially improve query execution plans. This approach is similar to the DB2 Advisor [10] and the SQL Access Advisor [6] component in the Oracle10g manageability framework. However, the Index Wizard is limited to access path recommendations and cannot otherwise be used to improve the quality of execution plans, unlike what SQL Profiles can do.
There are a number of commercial tools that assist a DBA in some aspects of tuning inefficient SQL statements. None of them, however, provides a complete tuning solution, partly because they are not integrated with the Oracle database server. Quest Software's SQLab Vision [3] provides a mechanism for identifying high-load SQL based on several measures of resource utilization. It can also rewrite SQL statements into semantically equivalent, but potentially more efficient, alternative forms, and suggests the creation of indexes to offer more efficient access paths to the data. Since the product resides outside of the Oracle RDBMS, the actual benefit of these recommendations is unknown until they are actually implemented and executed by the user.
LeccoTech's SQLExpert [4] is a toolkit that scans new applications for problematic SQL statements as well as high-load SQL statements in the system. It generates alternative execution plans for a SQL statement by rewriting it into all possible semantically equivalent forms. There are three problems with this approach. First, it cannot identify all forms of rewriting a SQL statement (which is normally the domain of a query optimizer). Second, equivalent forms of a SQL statement do not guarantee that the query optimizer will find an efficient execution plan if the bad plan is a result of errors in the optimizer estimates, such as the cardinality of intermediate results. Third, all the alternative plans have to be executed to actually determine which, if any, is superior to the original execution plan found by the optimizer.
9. Conclusion
In this paper, we have described the Automatic SQL Tuning feature introduced in Oracle10g. It is tightly integrated with the Oracle query optimizer, and is an integral part of the manageability framework for self-managing databases introduced in Oracle10g. Automatic SQL Tuning is based on the Automatic Tuning Optimizer, the new generation of the Oracle query optimizer. The SQL Tuning Advisor tunes SQL statements and produces a set of comprehensive tuning recommendations, including SQL Profiles. The user decides whether to accept the recommendations. Once a SQL Profile is created, the Oracle query optimizer will use it to generate a well-tuned plan for the corresponding SQL statement. A tuning object called the SQL Tuning Set is also introduced, enabling a user to create a customized SQL workload, e.g., in order to tune it. The interface to Automatic SQL Tuning is provided primarily through Oracle Enterprise Manager, but it is also accessible via a programmatic interface.
Many of the techniques we have described in this paper have been proposed before in different contexts [1], [2], [5], [10], [11]. But SQL Profiling is a novel technique that we have described here. Also, we have shown how these techniques have been combined in order to offer an innovative end-to-end SQL tuning solution in Oracle10g.
Finally, we have illustrated the feature using a real customer workload. It works equally well for OLTP and DSS workloads, because it helps the query optimizer cope with query complexity by improving its estimates. Although the feature is in its first production release, initial case studies have demonstrated the superiority of Automatic SQL Tuning over manual tuning. This position is further cemented by the fact that Automatic SQL Tuning results can scale over a large number of queries, and they can evolve over time with changes in the application workload and the underlying data. Automatic SQL Tuning is also a far cheaper option than manual tuning. Together, these reasons position Automatic SQL Tuning as an effective and economical alternative to manual tuning.
References
[1] Michael Stillger, Guy M. Lohman, Volker Markl, Mokhtar Kandil: LEO - DB2's LEarning Optimizer. The VLDB Journal, 2001.
[2] V. Markl, G. M. Lohman, V. Raman: LEO: An autonomic query optimizer for DB2. IBM Systems Journal, Vol. 42, No. 1, 2003.
[3] Quest Software, Quest Central for Oracle: SQLab Vision. http://www.quest.com, 2003.
[4] LeccoTech, LECCOTECH Performance Optimization Solutions for Oracle. White Paper, http://www.leccotech.com/, 2003.
[5] S. Chaudhuri, V. Narasayya: An Efficient, Cost-driven Index Tuning Wizard for Microsoft SQL Server. 23rd International Conference on Very Large Data Bases, 1997.
[6] Oracle Corporation: Performance Tuning using the SQL Access Advisor. Oracle White Paper, http://otn.oracle.com, 2003.
[7] Oracle Corporation: Database 10g: The Self-Managing Database. Oracle White Paper, http://otn.oracle.com, 2003.
[8] Oracle Corporation: The Self-Managing Database: Automatic Performance Diagnosis. Oracle White Paper, http://otn.oracle.com, 2003.
[9] Oracle Corporation: The Self-Managing Database: Guided Application and SQL Tuning. Oracle White Paper, http://otn.oracle.com, 2003.
[10] Gary Valentin, Michael Zuliani, Daniel Zilio, Guy Lohman, Alan Skelley: DB2 Advisor: An Optimizer Smart Enough to Recommend Its Own Indexes. 16th International Conference on Data Engineering, 2000.
[11] Hamid Pirahesh, Joseph Hellerstein, Waqar Hasan: Extensible/Rule Based Query Rewrite Optimization in Starburst. ACM SIGMOD Conference, 1992.