Installation
Memory Tuning
The total available memory on a system should be configured so that all components of the system function at optimum
levels. The following is a rule-of-thumb breakdown to help allocate memory among the various components in a system with an
Oracle back-end.
SYSTEM COMPONENT                        ALLOCATED % OF MEMORY
Oracle SGA Components                   ~50%
Operating System + Related Components   ~15%
User Memory                             ~35%
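As a quick sanity check, the rule-of-thumb split can be applied to a hypothetical server. The 2048 MB total and the helper function below are illustrative only, not part of any Oracle tooling.

```python
# Apply the rule-of-thumb memory split from the table above.
# The 2048 MB total is a hypothetical machine for illustration.
SPLIT = {
    "Oracle SGA components": 0.50,
    "Operating system + related components": 0.15,
    "User memory": 0.35,
}

def allocate(total_mb, split):
    """Return the rounded MB allocation for each system component."""
    return {name: round(total_mb * pct) for name, pct in split.items()}

for name, mb in allocate(2048, SPLIT).items():
    print(f"{name:40s} ~{mb} MB")
```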
The following is a rule-of-thumb breakdown of the ~50% of memory that is allocated for an Oracle SGA. These are good starting numbers
and will potentially require fine-tuning once the nature and access patterns of the application are determined.
ALLOCATED % OF MEMORY
~80%
~12%
~1%
~0.1%
The following example illustrates the above guidelines. Assume a system configured with 2 GB
of memory, with an average of 100 concurrent sessions at any given time. The application is mainly transactional and requires
response times within a few seconds, but it also supports batch reports at regular intervals.
SYSTEM COMPONENT
Oracle SGA Components
Operating System +Related Components
User Memory
In the aforementioned breakdown, approximately 694MB of memory will be available for Program Global Areas (PGA) of all Oracle Server
processes. Again, assuming 100 concurrent sessions, the average memory consumption for a given PGA should not exceed ~7MB. It should
be noted that SORT_AREA_SIZE is part of the PGA.
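The per-session arithmetic above is simple enough to check directly; the figures used below are the ones quoted in the text.

```python
# Per-session PGA budget: total PGA memory divided across concurrent sessions.
user_memory_mb = 694   # memory available for all server-process PGAs (from the text)
sessions = 100         # assumed average number of concurrent sessions

per_pga_mb = user_memory_mb / sessions
print(f"average PGA budget ~= {per_pga_mb:.2f} MB per session")
```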
ORACLE SGA COMPONENT
Database Buffer Cache
~128 - 188
~8
Hence, we would want to adjust the RAM allocated to the data buffers so that the SGA size is less than 388 MB (that is, 1250 MB - 862 MB).
With any SGA size greater than 388 MB, the server will start RAM paging, adversely affecting the performance of the entire server. The final
task is to size the Oracle SGA such that the total memory involved does not exceed 388 MB.
Examples for UNIX Environments
0) For super machines with 4 GB of RAM & 12 GB of swap, we recommend the following:
set shmsys:shminfo_shmmax=3221225471
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmseg=100
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=163840
set semsys:seminfo_semmsl=160
set semsys:seminfo_semmap=163840
set semsys:seminfo_semmnu=163840
set msgsys:msginfo_msgmap=163840
set msgsys:msginfo_msgmax=6144
set msgsys:msginfo_msgmni=640
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgtql=640
set msgsys:msginfo_msgseg=32768
1) For high end machines with 2 GB of RAM & 6 GB of swap, we recommend the following:
set shmsys:shminfo_shmmax=1073741824
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=250
set shmsys:shminfo_shmseg=100
set semsys:seminfo_semmni=750
set semsys:seminfo_semmns=75000
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmap=75000
set semsys:seminfo_semmnu=75000
set msgsys:msginfo_msgmap=75000
set msgsys:msginfo_msgmax=6144
set msgsys:msginfo_msgmni=640
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgtql=640
set msgsys:msginfo_msgseg=32768
2) For medium end machines with 1 GB of RAM & 3 GB of swap, we recommend the following:
set shmsys:shminfo_shmmax=536870912
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=150
set shmsys:shminfo_shmseg=50
set semsys:seminfo_semmni=500
set semsys:seminfo_semmns=50000
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmap=50000
set semsys:seminfo_semmnu=50000
set msgsys:msginfo_msgmap=50000
set msgsys:msginfo_msgmax=2048
set msgsys:msginfo_msgmni=512
set msgsys:msginfo_msgssz=32
set msgsys:msginfo_msgtql=512
set msgsys:msginfo_msgseg=16384
CLOSE_CACHED_OPEN_CURSORS = Indicates whether cursors must be closed immediately after the commit. If you are using a lot of
cursors or Developer/2000, use FALSE
COMPATIBLE - Set for correct version and features
CPU_COUNT = number of CPUs on your system.
DB_CACHE_SIZE = This parameter determines the size of the database buffer cache in the SGA. The buffer cache is a
holding area in memory for database blocks retrieved from disk. Oracle will typically check for the existence of a needed data block in the
cache before performing an I/O operation to retrieve it. Increase this parameter if the hit ratio is below 95%. If the value is too low, data will
be flushed from memory too soon; if it is too high, the operating system will start swapping. Suggestion: 40% or 50% of the total SGA size
(for the main application). The standard interpretation of this value is that we don't have enough buffers in memory if the ratio is
less than 90%; in that case, a significant fraction of the time that we request a buffer we need to go to the disk to find it.
*Determine if DB_CACHE_SIZE is high enough (Goal > 98% for web systems, 95% for others)
select 100-(sum(decode(name, 'physical reads', value,0))/
(sum(decode(name, 'db block gets', value,0)) +
(sum(decode(name, 'consistent gets', value,0))))) * 100
"Read Hit Ratio"
from v$sysstat;
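The same arithmetic as the query above can be sketched as a small standalone function; the counter values fed to it here are hypothetical.

```python
def read_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Buffer cache read hit ratio (%): 100 minus the fraction of logical
    reads that had to be satisfied from disk, as in the v$sysstat query."""
    logical_reads = db_block_gets + consistent_gets
    return 100 - (physical_reads / logical_reads) * 100

# Hypothetical counters: 5,000 physical reads against 100,000 logical reads.
print(read_hit_ratio(5_000, 40_000, 60_000))   # ~95, below the 98% web-system goal
```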
Per Buffer Pool
Another way to see this ratio, as of V8.1, is per pool, from the V$BUFFER_POOL_STATISTICS view. This does not include the direct
physical reads, so per pool we would have:
select name,(1-(physical_reads/(db_block_gets+consistent_gets)))*100 cache_hit_ratio
from v$buffer_pool_statistics;
NAME                 CACHE_HIT_RATIO
-------------------- ---------------
KEEP                           77.42
RECYCLE                       100.00
DEFAULT                        50.91
Now logically, we don't care about the hit ratio in the RECYCLE pool, since it holds buffers that we expect to be used only once and
then flushed out. The KEEP and DEFAULT pools still have a much smaller hit ratio than we are told we need. So if we followed the
guidelines we would add more buffers.
A Different Approach
We can ask the question the other way around. Instead of 'Do we need more?' we can ask 'Do we have more than we need?' No matter
what the hit ratio is, if we are not using all of the buffers that have been allocated, there is no advantage in allocating more. In fact, doing
so could slow us down by forcing more swapping at the OS level. So we can simply check whether there are free buffers:
select count(1) from v$bh where status='free';
COUNT(1)
----------
       984
This is from the same instance in which I have the 56 percent hit ratio. Here I see that increasing the number of buffers will not impact
the hit ratio at all since I have free buffers right now. But I might want to shift my allocation of buffers between the pools. I want the
highest hit ratio in my keep pool since I know that I am going to be reusing this data. Ideally, I have one buffer free all the time. This
would tell me that I have not over-allocated and that I have exactly what is needed. At the same time I will want to check my paging on
the server. I might make the instance faster by decreasing the size of my SGA. Of course, there are other factors in memory
consumption and you will want to take all into account.
DB_BLOCK_SIZE - Size of the database blocks (DB_BLOCK_SIZE x number of buffers = bytes for data). Set at database creation. Generally 8K; for
DW, 16K.
DB_FILE_MULTIBLOCK_READ_COUNT= DB_FILE_MULTIBLOCK_READ_COUNT controls the number of data blocks read for each
read request during a full table scan. If you are using LVM or striping, this parameter should be set so that
DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT is a multiple of the LVM stripe size. If you are not using LVM or striping,
DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT should equal the maximum operating system read buffer. On many UNIX
systems and Windows systems this is 64 KB. In any case, DB_FILE_MULTIBLOCK_READ_COUNT cannot be larger than
DB_CACHE_SIZE / 4.
The maximum read buffer is generally higher on raw file systems. It varies from 64 KB (on AIX) to 128 KB (on Solaris) to 1 MB (HP-UX).
On a UNIX file system, it is usually only possible to read one buffer per I/O, usually 8KB. On 32-bit Windows, the buffer is 256KB.
This parameter will significantly increase the performance of a reorganization if properly tuned. For example, suppose the OS read
buffer is 64 KB, the database block size is 4 KB and DB_FILE_MULTIBLOCK_READ_COUNT is set to eight. During a full table scan,
each I/O operation will read only 32 KB. If DB_FILE_MULTIBLOCK_READ_COUNT is reset to 16, performance will almost double
because twice as much data can be read by each I/O operation.
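A sketch of that sizing arithmetic, using the worked example from the text (64 KB OS read buffer, 4 KB blocks):

```python
def full_scan_io_kb(block_size_kb, multiblock_read_count, os_read_buffer_kb):
    """KB transferred per I/O during a full table scan: the product of block
    size and DB_FILE_MULTIBLOCK_READ_COUNT, capped by the OS read buffer."""
    return min(block_size_kb * multiblock_read_count, os_read_buffer_kb)

assert full_scan_io_kb(4, 8, 64) == 32    # MBRC=8 reads only 32 KB per I/O
assert full_scan_io_kb(4, 16, 64) == 64   # MBRC=16 fills the 64 KB buffer
```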
DB_WRITERS = In Oracle 8.0 and up this parameter has been desupported and replaced by two other parameters, namely
DB_WRITER_PROCESSES and DBWR_IO_SLAVES.
DB_BLOCK_LRU_LATCHES, DB_WRITER_PROCESSES and DBWR_IO_SLAVES
Is the DB_WRITER_PROCESSES parameter supported on Windows NT/Windows 2000?
Yes. The Oracle8i documentation and [BUG:925955] incorrectly state that this parameter is not supported on Windows NT/2000.
Multiple DBWR processes are mainly used to simulate asynchronous I/O when the operating system does not support it. Since
Windows NT and Windows 2000 use asynchronous I/O by default, using multiple DBWR processes may not necessarily improve
performance. Increasing this parameter is also likely to have minimal effect on single-CPU systems. Increasing this parameter could, in
fact, reduce performance on systems where the CPUs are already overburdened. In cases where the main performance bottleneck is
that a single DBWR process cannot keep up with the work load, then increasing the value for DB_WRITER_PROCESSES may improve
performance.
When increasing DB_WRITER_PROCESSES it may also be necessary to increase the DB_BLOCK_LRU_LATCHES parameter, as each
DBWR process requires an LRU latch.
Reference for setting DB_BLOCK_LRU_LATCHES parameter
Default value: 1/2 the number of CPUs
Size in MB
----------
39.6002884

What this result would tell you is that there are about 39 MB of free memory in the shared pool, meaning the shared pool is being
underutilized. If the shared pool were 70 MB, over half of it would be unused; this memory could be allocated elsewhere.
* DATA DICTIONARY cache miss ratio (Goal > 90%; if below, increase SHARED_POOL_SIZE)
Contains:
Preparsed database procedures
Preparsed database triggers
Recently parsed SQL & PL/SQL requests
This is the memory allocated for the library and data dictionary cache
select sum(gets) Gets, sum(getmisses) Misses,
(1 - (sum(getmisses) / (sum(gets) +
sum(getmisses))))*100 HitRatio
from v$rowcache;
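The ratio computed by that query can be sketched in isolation; the totals below are hypothetical.

```python
def dictionary_cache_hit_ratio(gets, getmisses):
    """Data dictionary cache hit ratio (%), as in the v$rowcache query above."""
    return (1 - getmisses / (gets + getmisses)) * 100

# Hypothetical totals: 45,000 gets and 2,500 misses.
ratio = dictionary_cache_hit_ratio(45_000, 2_500)
print(f"{ratio:.1f}%")   # above the 90% goal
```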
* The SHARED_POOL_SIZE hit ratio (LIBRARY CACHE hit ratio) should be above 99%
set pages 58 lines 80
column db format a10
-- (the column format commands for namespace, gets, gethitratio, pins, pinhitratio, reloads and invalidations were garbled in conversion)
If all Get Hit% (gethitratio in the view) except for indexes are greater than 80-90 percent, this is the desired state; the value for indexes
is low because of the few accesses of that type of object. Notice that the Pin Hit% should also be greater than 90% (except for
indexes). The other goals of tuning this area are to reduce reloads to as small a value as possible (this is done by proper sizing and
pinning) and to reduce invalidations. Invalidations happen when, for one reason or another, an object becomes unusable.
Guideline: In a system where there is no flushing, increase the shared pool size in 20% increments to reduce reloads and invalidations
and increase hit ratios.
select sum(pins) Executions, sum(pinhits) Execution_Hits,
((sum(pinhits) / sum(pins)) * 100) phitrat,
sum(reloads) Misses,
((sum(pins) / (sum(pins) + sum(reloads))) * 100) RELOAD_hitrat
from v$librarycache;
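The two percentages that this query derives can be sketched as follows; the v$librarycache totals used here are made up.

```python
def library_cache_ratios(pins, pinhits, reloads):
    """Execution (pin) hit ratio and reload hit ratio (%), matching the
    arithmetic of the v$librarycache query above."""
    phitrat = pinhits / pins * 100
    reload_hitrat = pins / (pins + reloads) * 100
    return phitrat, reload_hitrat

# Hypothetical totals: 200,000 pins, 198,000 pin hits, 400 reloads.
phit, rhit = library_cache_ratios(200_000, 198_000, 400)
print(f"pin hit {phit:.1f}%, reload hit {rhit:.2f}%")
```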
* How much memory is left for SHARED_POOL_SIZE
col value for 999,999,999,999 heading "Shared Pool Size"
col bytes for 999,999,999,999 heading "Free Bytes"
select to_number(v$parameter.value) value, v$sgastat.bytes,
(v$sgastat.bytes/v$parameter.value)*100 "Percent Free"
from v$sgastat, v$parameter
where v$sgastat.name = 'free memory'
and v$parameter.name = 'shared_pool_size';
A better query:
select sum(ksmchsiz) Bytes, ksmchcls Status
from SYS.x$ksmsp
group by ksmchcls;
If there is free memory then there is no need to increase this parameter.
* Identifying objects reloaded into the SHARED POOL again and again
select substr(owner,1,10) owner,substr(name,1,25) name, substr(type,1,15) type, loads, sharable_mem
from v$db_object_cache
-- where owner not in ('SYS','SYSTEM') and
where loads > 1 and type in ('PACKAGE','PACKAGE BODY','FUNCTION','PROCEDURE')
order by loads DESC;
* Large Objects NOT 'pinned' in Shared Pool
To determine which large PL/SQL objects are currently loaded in the shared pool and are not marked 'kept' (NOT pinned) and therefore
6 - Look for Buffer Busy Waits resulting from table/index freelist shortages
7 - See if large-table full-table scans can be removed with well-placed indexes
8 - If tables are low volatility, seek an MV that can pre-join/pre-aggregate common queries. Turn on automatic query rewrite
9 - Look for non-reentrant SQL (literal values inside SQL from v$sql) - If so, set cursor_sharing=force
10 - Monitor over time - The ongoing STATSPACK reports should show any new performance problems.
INSTANCE TUNING
1) Library Cache Hit Ratio:
In the most basic terms, the library cache is a memory structure that holds the parsed (ie. already examined to determine syntax correctness,
security privileges, execution plan, etc.) versions of SQL statements that have been executed at least once. As new SQL statements arrive,
older SQL statements will be pushed from the memory structure to provide space for the new statements. If the older SQL statements need
to be re-executed, they will now have to be re-parsed. Also, a SQL statement that is not exactly the same as an already parsed statement
(including even capitalization) will be reparsed even though it may perform the exact same operation. Parsing is an expensive operation, so
the objective is to make the memory structure large enough to hold enough parsed SQL statements to avoid a large percentage of reparsing.
Target: 99% or greater.
Value: SELECT (1 - SUM(reloads)/SUM(pins)) FROM v$librarycache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes) in the INIT.ORA file.
2) Dictionary Cache Hit Ratio:
The dictionary cache is the memory structure that holds the most recently used contents of ORACLE's data dictionary, such as security
privileges, table structures, column data types, etc. This data dictionary information is necessary for each and every parsing of a SQL
statement. Recalling that memory is around 300 times faster than disk, it is needless to say that performance is improved by holding enough
data dictionary information in memory to significantly minimize disk accesses.
Target: 90%
Value: SELECT (1 - SUM(getmisses)/SUM(gets)) FROM v$rowcache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes) in the INIT.ORA file.
3) Buffer Cache Hit Ratio:
The buffer cache is the memory structure that holds the most recently used blocks read from disk, whether table, index, or other segment
type. As new data is read into the buffer cache, data that hasn't been recently used is pushed out. Again recalling that memory is
approximately 300 times faster than disk, the objective is to hold enough data in memory to minimize disk accesses. Note that data read
from tables through the use of indexes is held in the buffer cache much longer than data read via full-table scans.
Target: 90% (although some shops find 80% or even 70% acceptable)
Value:
SELECT value FROM v$sysstat WHERE name = 'consistent gets';
SELECT value FROM v$sysstat WHERE name = 'db block gets';
SELECT value FROM v$sysstat WHERE name = 'physical reads';
Buffer cache hit ratio = 1 - physical reads/(consistent gets + db block gets)
Correction: Increase the DB_CACHE_SIZE parameter in the INIT.ORA file.
Other notes:
- Compare the values for "table scans" and "table access by rowid" in the v$sysstat table to gain general insight into whether additional
indexing is needed. Tuning specific applications via indexing will increase the "table access by rowid" value (ie. tables read through the use of
indexes) and decrease the "table scans" values. This effect tends to improve the buffer cache hit ratio since a smaller volume of data is read
into the buffer cache from disk, so less previously cached data is pushed out. (See the article on application tuning for more details
regarding indexing.)
- A low buffer cache hit ratio can very quickly lead to an I/O bound situation, as more reads are required per period of time to provide the
requested data. When the reads/time period exceed the workload supported by the disk subsystem, exponential performance degradations
can occur. (Please see the section on Operating System tuning.)
- Since the buffer cache will typically be the largest memory structure allocated in the ORACLE instance, it is the structure most likely to
contribute to O/S paging. If the buffer cache is sized such that the hit ratio is 90%, but excessive paging occurs at this setting, performance
may be better if the buffer cache were sized to achieve an 85% hit ratio. Careful analysis is necessary to balance the buffer cache hit ratio
with the O/S paging rate.
4) Sort Area Hit Ratio:
Sorts that are too large to be performed in memory are written to disk. Once again, memory is about 300 times faster than disk, so for
instances where a large volume of sorting occurs (such as decision support systems or data warehouses), sorting on disk can degrade
performance. The objective, of course, is to allow a significant percentage of sorts to occur in memory.
Target: 90% (although many shops find 80% or less acceptable)
Value:
SELECT value FROM v$sysstat WHERE name = 'sorts (memory)';
SELECT value FROM v$sysstat WHERE name = 'sorts (disk)';
Sort area hit ratio = 1 - disk sorts/(memory sorts + disk sorts);
Correction: Increase the SORT_AREA_SIZE parameter (in bytes) in the INIT.ORA file.
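The formula above is easy to sanity-check; the sort counts used below are hypothetical.

```python
def sort_area_hit_ratio(memory_sorts, disk_sorts):
    """Fraction of sorts completed in memory: 1 - disk/(memory + disk)."""
    return 1 - disk_sorts / (memory_sorts + disk_sorts)

# Hypothetical counters: 9,500 in-memory sorts and 500 sorts to disk.
ratio = sort_area_hit_ratio(9_500, 500)
print(f"{ratio:.0%}")   # above the 90% target
```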
Other notes:
- With release 7.3 and above, setting the SORT_DIRECT_WRITES = TRUE initialization parameter causes sorts to disk to bypass the buffer
cache, thus improving the buffer cache hit ratio.
- As with buffer cache hit ratio, examine the values for "table scans" and "table access by rowid" in the v$sysstat table to determine if
additional indexing is needed. In some cases, the optimizer will choose to retrieve the rows in the correct order by using the index, thus
avoiding a sort. In other cases, retrieval by index rather than full-table scan tends to collect a smaller quantity of rows to be sorted, thus
increasing the probability that the sort can occur in memory, which also tends to improve the sort area hit ratio.
- Also, as with buffer cache hit ratio, sort area size (if very large) can contribute to O/S paging. In general, sorting on disk should be favored
over excessive paging, as paging effects all memory structures (ORACLE and non-ORACLE) while sorting on disk only effects sorts
performed by the ORACLE instance.
5) Redo Log Space Requests:
Redo logs (and archive logs if the ORACLE instance is run in ARCHIVELOG mode) are transaction logs involving a variety of structures. The
redo log buffer is a memory structure into which changes are recorded as they are applied to blocks in the buffer cache (including data,
index, rollback segments, etc.). Committed changes are synchronously flushed to redo log file members on disk, while uncommitted changes
are asynchronously written to redo log files. (This approach makes perfect sense on inspection. If an instance crash occurs, committed
changes are already written to the redo logs on disk and are applied during instance recovery. Uncommitted changes in the redo log buffer
not yet written to disk are lost, and any uncommitted changes that have been written to disk are rolled back during instance recovery.) A
session performing an update and an immediate commit will not return until the committed change has been written to the redo log buffer
and flushed to the redo log files on disk. Redo log groups are written to in a round-robin manner. When the mirrored members of a redo log
group become full, a log switch occurs, thus archiving one member of the redo log group (if ARCHIVELOG mode is TRUE), then clearing the
members of that redo log group. Note that a checkpoint also occurs at least on each redo log switch. In most basic form, the redo log buffer
should be large enough that no waits for available space in the memory structure occur while changes are written to redo log files. The redo
log file size should be large enough that the redo log buffer does not fill during a redo log switch. Finally, there should be enough redo log
groups that the archiving and clearing of filled redo logs does not cause waits for redo log switches, thus causing the redo log buffer to fill.
The inability to write changes to the redo log buffer because it is full is reported as redo log space requests in the v$sysstat table.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'redo log space requests';
Correction:
- Increase the LOG_BUFFER parameter (in bytes) in the INIT.ORA file.
- Increase the redo log size.
- Increase the number of redo log groups.
Other notes:
- The default configuration of small redo log size and two redo log groups is seldom sufficient. Between 4 and 10 groups typically yields
adequate results, depending on the particular archive log destination (whether a single disk, RAID array, or tape). Size will be very dependent
upon the specific application characteristics and throughput requirements, and can range from less than 10 Mb to 500 Mb or greater.
- Since redo log sizes and groups can be changed without a shutdown/restart of the instance, increasing the redo log size and number of
groups is typically the best area to start tuning for reduction of redo log space requests. If increasing the redo log size and number of
groups appears to have little impact on redo log space requests, then increase the LOG_BUFFER initialization parameter.
6) Redo Buffer Latch Miss Ratio:
One of the two types of memory structure locking mechanisms used by an ORACLE instance is the latch. A latch is a locking mechanism that
is implemented entirely within the executable code of the instance (as opposed to an enqueue, see below). Latch mechanisms most likely to
suffer from contention involve requests to write data into the redo log buffer. To serve the intended purpose, writes to the redo log buffer
must be serialized (ie. one process locks the buffer, writes to it, then unlocks it, a second process locks, writes, and unlocks, etc., while other
processes wait for their chance to acquire these same locks). There are four different groupings applicable to redo buffer latches: redo
allocation latches and redo copy latches, each with immediate and willing-to-wait priorities. Redo allocation latches are acquired by small
redo entries (having an entry size smaller than or equal to the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter) and utilize only a
single CPU's resources for execution. Redo copy latches are requested by larger redo entries (entry size larger than the
LOG_SMALL_ENTRY_MAX_SIZE), and take advantage of multiple CPU's for execution. Recall from above that committed changes are
synchronously written to redo logs on disk: these entries require an immediate latch of the appropriate type. Uncommitted changes are
asynchronously written to redo log files, thus they attempt to acquire a willing-to-wait latch of the appropriate type. Below, each category of
redo buffer latch will be considered separately.
- Redo allocation immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo allocation' AND b.latch# = a.latch# ;
Value (willing-to-wait):
which case, sessions will wait for write access to an available rollback segment. Some waits for rollback segment data blocks or header
blocks (usually header blocks) will always occur, so criteria for tuning is to limit the waits to a very small percentage of the total number of all
data blocks requested. Note that rollback segments function exactly like table segments or index segments: they are cached in the buffer
cache, and periodically checkpointed to disk.
Target: 1% or less
Value:
Rollback waits = SELECT max(count) FROM v$waitstat
WHERE class IN ('system undo header', 'system undo block','undo header', 'undo block')
GROUP BY class;
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent gets','db block gets');
Rollback segment contention ratio = rollback waits / block gets
Correction: Create additional rollback segments.
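The contention ratio reduces to a single division against the 1% target; the counts below are hypothetical.

```python
def contention_ratio(waits, block_gets):
    """Waits (e.g. on undo headers/blocks) as a fraction of all block gets;
    the tuning target above is 1% (0.01) or less."""
    return waits / block_gets

# Hypothetical: 120 undo-block waits against 1,000,000 block gets.
ratio = contention_ratio(120, 1_000_000)
print(ratio <= 0.01)   # True -- no additional rollback segments needed
```

The same shape of calculation applies to the freelist contention check in section 10.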
10) Freelist contention:
In each table, index, or other segment type, the first one or more blocks contain one or more freelists. The freelist(s) identify the blocks in
that segment that have free space available and can accept more data. Any INSERT, UPDATE, or DELETE activity will cause the freelist(s) to
be accessed. Change activity with a high level of concurrency may cause waits to access to these freelist(s). This is seldom a problem in
decision support systems or data warehouses (where updates are processed as nightly single-session batch jobs, for example), but can
become a bottleneck with OLTP systems supporting large numbers of users. Unfortunately, there are no initialization parameters or other
instance-wide settings to correct freelist contention: this must be corrected on a table by table basis by re-creating the table with additional
freelists and/or by modifying the PCT_USED parameter. (Please see the article on storage management.) However, freelist contention can
be measured at the instance level. Some freelist waits will always occur; the objective is to limit the freelist waits to a small percentage of the
total blocks requested.
Target: 1% or less
Value:
Freelist waits = SELECT count FROM v$waitstat WHERE class = 'free list';
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent gets','db block gets');
Freelist contention ratio = Freelist waits / block gets
Correction: No method for instance-level correction. Please see the article on storage management.
11) Oracle Session hogs
If the complaint of poor performance is current, then the connected sessions are one of the first things to check to see which users are
impacting the system in undesirable ways. There are a couple of different avenues to take here. First, you can get an idea of the percentage
that each session is/has taken up with respect to I/O. One rule of thumb is that if any session is currently consuming 50% or more of the
total I/O, then that session and its SQL need to be investigated further to determine what activity it is engaged in. If you are a DBA who is
concerned only with physical I/O, then the physpctio.sql query will provide the information you need:
This script queries the sys.v_$statname, sys.v_$sesstat, sys.v_$session, and sys.v_$bgprocess views.
select sid, username,
round(100 * total_user_io/total_io,2) tot_io_pct
from (select b.sid sid,nvl(b.username,p.name) username,
sum(value) total_user_io
from sys.v_$statname c, sys.v_$sesstat a,
sys.v_$session b, sys.v_$bgprocess p
where a.statistic#=c.statistic# and
p.paddr (+) = b.paddr and
b.sid=a.sid and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)')
group by b.sid, nvl(b.username,p.name)),
(select sum(value) total_io
from sys.v_$statname c, sys.v_$sesstat a
where a.statistic#=c.statistic# and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)'))
order by 3 desc;
Regardless of which query you use, the output might resemble something like the following:
 SID USERNAME   TOT_IO_PCT
---- ---------- ----------
   9 USR1            71.26
  20 SYS             15.76
   5 SMON             7.11
   2 DBWR             4.28
  12 SYS              1.42
   6 RECO              .12
   7 SNP0              .01
  10 SNP3              .01
  11 SNP4              .01
   8 SNP1              .01
   1 PMON                0
   3 ARCH                0
   4 LGWR                0
In the above example, a DBA would be prudent to examine the USR1 session to see what SQL calls they are making. You can see that the
above queries are excellent weapons that you can use to quickly pinpoint problem I/O sessions.
* SQL statements must be identical in order to be reused in memory.
* Size of Database
* Memory Values.
* Identify the SQL responsible for the most BUFFER HITS and/or DISK READS. If I want to see what is on SQL AREA:
SELECT SUBSTR(sql_text,1,80) Text, disk_reads, buffer_gets, executions
FROM v$sqlarea
WHERE executions > 0
AND buffer_gets > 100000
and DISK_READS > 100000
ORDER BY (DISK_READS * 100) + BUFFER_GETS desc;
The column BUFFER_GETS is the total number of times the SQL statement read a database block from the buffer cache in the SGA. Since
almost every SQL operation passes through the buffer cache, this value represents the best metric for determining how much work is being
performed. It is not perfect, as there are many direct-read operations in Oracle that completely bypass the buffer cache. So, supplementing
this information, the column DISK_READS is the total number of times the SQL statement read database blocks from disk, either to satisfy a
logical read or to satisfy a direct-read. Thus, the formula:
(DISK_READS * 100) + BUFFER_GETS
is a very adequate metric of the amount of work being performed by a SQL statement. The weighting factor of 100 is completely arbitrary,
but it reflects the fact that DISK_READS are inherently far more expensive than BUFFER_GETS against shared memory.
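The weighting can be sketched as a ranking function; the statement texts and counter values below are invented for illustration.

```python
def work_metric(disk_reads, buffer_gets):
    """Weighted work: disk reads count 100x because physical I/O is far more
    expensive than a buffer get (the factor is the text's arbitrary choice)."""
    return disk_reads * 100 + buffer_gets

# Hypothetical rows from v$sqlarea: (sql_text, disk_reads, buffer_gets)
statements = [
    ("SELECT ... FROM orders ...",    12_000,  90_000),
    ("SELECT ... FROM customers ...",    500, 400_000),
    ("UPDATE inventory ...",           3_000,  20_000),
]

for text, dr, bg in sorted(statements, key=lambda s: work_metric(s[1], s[2]), reverse=True):
    print(f"{work_metric(dr, bg):>12,}  {text}")
```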
Patterns to look for
DISK_READS close to or equal to BUFFER_GETS: this indicates that most (if not all) of the gets or logical reads of database blocks are
becoming physical reads against the disk drives. This generally indicates a full-table scan, which is usually not desirable but which usually
can be quite easy to fix.
* Finding the top 25 SQL
declare
  top25 number;
  text1 varchar2(4000);
  x     number;
  len1  number;
  cursor c1 is
    select buffer_gets, substr(sql_text,1,4000)
      from v$sqlarea
     order by buffer_gets desc;
begin
  dbms_output.put_line('Gets       Text');
  dbms_output.put_line('---------- ----------------------');
  open c1;
  for i in 1..25 loop
    fetch c1 into top25, text1;
    exit when c1%notfound;
    dbms_output.put_line(rpad(to_char(top25),9)||' '||substr(text1,1,66));
    len1 := length(text1);
    x := 66;
    -- print the remainder of long statements, 66 characters per line
    while len1 > x-1 loop
      dbms_output.put_line('          '||substr(text1,x,66));
      x := x + 66;
    end loop;
  end loop;
  close c1;
end;
/
* Displays the percentage of SQL executed that did NOT incur an expensive hard parse, so a low number may indicate literal SQL or
another sharing problem.
Ratio success is dependent on your development environment; OLTP should be 90 percent.
select 100 * (1-a.hard_parses/b.executions) noparse_hitratio
from (select value hard_parses
from v$sysstat
where name = 'parse count (hard)' ) a
,(select value executions
from v$sysstat
where name = 'execute count') b;
and (Consistent_Gets+Block_Gets)>0
and Username is not null;
* IO PER DATAFILE:
* Schema's Report
* COALESCING FREE SPACE = Adjacent free chunks of space can be merged into one larger chunk. Inspect with:
select file_id, block_id, blocks, bytes from dba_free_space
where tablespace_name = 'xxx' order by 1,2;
This returns a list of results. If the file_id of two rows is the same and block_id + blocks of one row equals the block_id of the next row,
then they can be merged.
This is done with ALTER TABLESPACE XX COALESCE;
* Quick Script to coalesce all the tablespaces
* ROW_CHAINING
* To find out chained rows
ANALYZE TABLE TEST ESTIMATE STATISTICS;
Then from DBA_TABLES,
SELECT (CHAIN_CNT / NUM_ROWS) * 100 FROM DBA_TABLES WHERE TABLE_NAME = upper('&Table_name');
This gives us the chained rows as a percentage of the total number of rows in that table. If this percentage is high (near 5%) and the row
does not contain a LONG or similar datatype, i.e. the row could fit inside a single data block, then the rows are migrated rather than truly
chained, and PCTFREE should be increased to leave room for row growth.
5 DISKS
1- exec, redo logs, system tablespace, control files
2- data, temp, control files
3- indexes, control files
4- rollback segments, export, control files
5- archive, control files
In Oracle 10g, Oracle automatically gathers index statistics whenever the index is created or rebuilt.
Example:
EXEC DBMS_STATS.gather_table_stats(USER, 'LOOKUP', cascade => TRUE);
execute dbms_stats.gather_table_stats
(ownname => 'SCOTT'
, tabname => 'DEPT'
, partname=> null
, estimate_percent => 20
, degree => 5
, cascade => true
, options => 'GATHER AUTO');
execute dbms_stats.gather_schema_stats
(ownname => 'SCOTT'
, estimate_percent => 10
, degree => 5
, cascade => true);
execute dbms_stats.gather_database_stats
(estimate_percent => 20
, degree => 5
, cascade => true);
There are several values for the options parameter that we need to know about:
- gather - re-analyzes the whole schema.
- gather empty - Only analyze tables that have no existing statistics.
- gather stale - Only re-analyze tables with more than 10% modifications (inserts, updates, deletes). The table should be in monitor status
first.
- gather auto - This will re-analyze objects which currently have no statistics and objects with stale statistics.The table should be in
monitor status first.
Using gather auto is like combining gather stale and gather empty .
Note that both gather stale and gather auto require monitoring. If you issue the "alter table xxx monitoring" command, Oracle tracks changed
tables with the dba_tab_modifications view. Below we see that the exact number of inserts, updates and deletes are tracked since the last
analysis of statistics.
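A sketch of that check (SCOTT and EMP are illustrative names; the table must be in monitoring mode, and the counts only appear once the monitoring information has been flushed to the dictionary):

```sql
alter table scott.emp monitoring;

-- After some DML activity has been recorded:
select table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_owner = 'SCOTT';
```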
The most interesting of these options is the gather stale option. Because all statistics will become stale quickly in a robust OLTP database,
we must remember the rule for gather stale is > 10% row change (based on num_rows at statistics collection time).
Hence, almost every table except read-only tables will be re-analyzed with the gather stale option. Hence, the gather stale option is best for
systems that are largely read-only. For example, if only 5% of the database tables get significant updates, then only 5% of the tables will be
re-analyzed with the "gather stale" option.
The CASCADE => TRUE option causes all indexes for the tables to also be analyzed. In Oracle 10g, set CASCADE to AUTO_CASCADE to
let Oracle decide whether or not new index statistics are needed.
The DEGREE Option
Note that you can also parallelize the collection of statistics because the CBO does full-table and full-index scans. When you set degree=x,
Oracle will invoke parallel query slave processes to speed up table access. Degree is usually about equal to the number of CPUs, minus 1 (for
the OPQ query coordinator).
In Oracle 10g, set DEGREE to DBMS_STATS.AUTO_DEGREE to let Oracle select the appropriate degree of parallelism.
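Combining both 10g auto settings in one call might look like this (SCOTT.EMP is illustrative):

```sql
begin
  dbms_stats.gather_table_stats(
    ownname => 'SCOTT',
    tabname => 'EMP',
    cascade => DBMS_STATS.AUTO_CASCADE,  -- let Oracle decide on index stats
    degree  => DBMS_STATS.AUTO_DEGREE);  -- let Oracle pick the parallelism
end;
/
```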
Force Statistics to a Table
You can use the following statement to force statistics on a table:
exec dbms_stats.set_table_stats( user, 'EMP', numrows => 1000000, numblks => 300000 );
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL ;
end;
/
Analyze Options
- Compute statistics over all rows
DBMS_UTILITY.ANALYZE_SCHEMA('userid', 'COMPUTE');
- Estimate 20% of all rows for a specific Schema
DBMS_UTILITY.ANALYZE_SCHEMA('userid', 'ESTIMATE',NULL,20);
- Estimate 20% of a table
DBMS_DDL.ANALYZE_OBJECT('TABLE', 'schema', 't_name', 'ESTIMATE', null, 20);
or
ANALYZE TABLE table ESTIMATE STATISTICS sample 20 percent;
- Compute statistics for an index
DBMS_DDL.ANALYZE_OBJECT('INDEX', 'schema', 'i_name', 'COMPUTE');
- Estimate statistics based on a sample of 100000 rows for all the tables of a schema
DBMS_UTILITY.ANALYZE_SCHEMA ('userid', 'ESTIMATE', 100000);
or
ANALYZE TABLE table ESTIMATE STATISTICS sample 5000 rows;
- Delete all stats
DBMS_UTILITY.ANALYZE_SCHEMA ('userid', 'DELETE');
or
ANALYZE TABLE table DELETE STATISTICS;
rgt         BOOLEAN;
retval      BOOLEAN;
v_undo_size NUMBER(10);
BEGIN
select sum(a.bytes)/1024/1024 into v_undo_size
from v$datafile a, v$tablespace b, dba_tablespaces c
where c.contents = 'UNDO'
and c.status = 'ONLINE'
and b.name = c.tablespace_name
and a.ts# = b.ts#;
retval := dbms_undo_adv.undo_info(tsn, tss, aex, unr, rgt);
dbms_output.put_line('UNDO Tablespace is : ' || tsn);
dbms_output.put_line('UNDO Tablespace size is : ' || TO_CHAR(v_undo_size) || ' MB');
IF aex THEN
dbms_output.put_line('Undo Autoextend is set to : TRUE');
ELSE
dbms_output.put_line('Undo Autoextend is set to : FALSE');
END IF;
dbms_output.put_line('Undo Retention is : ' || TO_CHAR(unr));
IF rgt THEN
dbms_output.put_line('Undo Guarantee is set to : TRUE');
ELSE
dbms_output.put_line('Undo Guarantee is set to : FALSE');
END IF;
END;
/
Sample output:
UNDO Tablespace is : UNDOTBS1
UNDO Tablespace size is : 925 MB
Undo Autoextend is set to : TRUE
Undo Retention is : 900
Undo Guarantee is set to : FALSE
You can choose to allocate a specific size for the UNDO tablespace and then set the UNDO_RETENTION parameter to an optimal value
according to the UNDO size and the database activity. If your disk space is limited and you do not want to allocate more space than
necessary to the UNDO tablespace, this is the way to proceed. If you are not limited by disk space, then it would be better to choose the
UNDO_RETENTION time that is best for you (for FLASHBACK, etc.). Allocate the appropriate size to the UNDO tablespace according to the
database activity.
This tip helps you get the information you need, whichever method you choose.
set serverout on size 1000000
set feedback off
set heading off
set lines 132
declare
cursor get_undo_stat is
select d.undo_size/(1024*1024) "C1",
substr(e.value,1,25) "C2",
(to_number(e.value) * to_number(f.value) * g.undo_block_per_sec) / (1024*1024) "C3",
round((d.undo_size / (to_number(f.value) * g.undo_block_per_sec))) "C4"
from (select sum(a.bytes) undo_size
from v$datafile a, v$tablespace b, dba_tablespaces c
where c.contents = 'UNDO'
and c.status = 'ONLINE'
and b.name = c.tablespace_name
and a.ts# = b.ts#) d,
v$parameter e, v$parameter f,
(select max(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec from v$undostat) g
where e.name = 'undo_retention'
and f.name = 'db_block_size';
begin
dbms_output.put_line(chr(10)||chr(10)||chr(10)||chr(10) || 'To optimize UNDO you have two choices :');
dbms_output.put_line('====================================================' || chr(10));
for rec1 in get_undo_stat loop
dbms_output.put_line('A) Adjust UNDO tablespace size according to UNDO_RETENTION :' || chr(10));
dbms_output.put_line(rpad('ACTUAL UNDO SIZE ',60,'.')|| ' : ' || TO_CHAR(rec1.c1,'999999') || ' MB');
dbms_output.put_line(rpad('OPTIMAL UNDO SIZE WITH ACTUAL UNDO_RETENTION (' || ltrim(TO_CHAR(rec1.c2,'999999')) || '
SECONDS) ',60,'.') || ' : ' || TO_CHAR(rec1.c3,'999999') || ' MB');
dbms_output.put_line(chr(10));
dbms_output.put_line(chr(10));
dbms_output.put_line('B) Adjust UNDO_RETENTION according to UNDO tablespace size :' || chr(10));
dbms_output.put_line(rpad('ACTUAL UNDO RETENTION ',60,'.') || ' : ' || TO_CHAR(rec1.c2,'999999') || ' SECONDS');
dbms_output.put_line(rpad('OPTIMAL UNDO RETENTION WITH ACTUAL UNDO SIZE (' || ltrim(TO_CHAR(rec1.c1,'999999'))
|| ' MEGS) ',60,'.') || ' : ' || TO_CHAR(rec1.c4,'999999') || ' SECONDS');
end loop;
dbms_output.put_line(chr(10)||chr(10));
end;
/
To optimize UNDO you have two choices :
====================================================
A) Adjust UNDO tablespace size according to UNDO_RETENTION :
ACTUAL UNDO SIZE ........................................... :     925 MB
OPTIMAL UNDO SIZE WITH ACTUAL UNDO_RETENTION (15 MINUTES) .. :      82 MB

B) Adjust UNDO_RETENTION according to UNDO tablespace size :
ACTUAL UNDO RETENTION ...................................... :     900 SECONDS
OPTIMAL UNDO RETENTION WITH ACTUAL UNDO SIZE (925 MEGS) .... :   10125 SECONDS
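The script derives these figures from: needed undo MB = UNDO_RETENTION (seconds) × peak undo blocks generated per second × DB_BLOCK_SIZE ÷ 1048576. A quick sanity check with assumed inputs (an 8K block size and a hypothetical peak rate of 11.6 undo blocks/sec, close to what the sample output implies):

```sql
-- Assumed: undo_retention = 900 sec, db_block_size = 8192,
-- peak undo generation = 11.6 blocks/sec (hypothetical figure).
select round(900 * 11.6 * 8192 / 1024 / 1024) optimal_undo_mb,       -- ~82 MB
       round(925 * 1024 * 1024 / (11.6 * 8192)) optimal_retention_s  -- ~10200 sec
from dual;
```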
Undo Segments
With the following query, we can check the state of the UNDO segments:
SELECT active.active, unexpired.unexpired, expired.expired
FROM (SELECT Sum(bytes / 1024 / 1024) AS active
FROM dba_undo_extents WHERE status = 'ACTIVE') active,
(SELECT Sum(bytes / 1024 / 1024) AS unexpired
FROM dba_undo_extents WHERE status = 'UNEXPIRED') unexpired,
(SELECT Sum(bytes / 1024 / 1024) AS expired
FROM dba_undo_extents WHERE status = 'EXPIRED') expired;
Where:
ACTIVE = those UNDO segments contain active transactions, so a commit has not been executed yet.
UNEXPIRED = those UNDO segments contain committed transactions that are still required for
FLASHBACK.
EXPIRED = those UNDO segments are no longer required once the time defined by the "undo_retention" parameter has passed.
So when you execute an insert, you start using undo segments, and those remain in the ACTIVE state until you issue the COMMIT.
Once the COMMIT is issued, they move to the UNEXPIRED state (still using the UNDO tablespace) until they reach the "undo_retention" time.
Once that time has elapsed, they move to the EXPIRED state.
Monitoring Transactions in UNDO
It's possible to monitor the transactions that are taking UNDO segments with the following query:
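One way to write such a query is to join v$session to v$transaction on the transaction address; the exact column list here is an assumption, chosen to roughly match the sample output:

```sql
-- Sessions that currently hold an open transaction (and thus undo segments).
select s.sid, s.serial#, s.username, s.schemaname, s.machine,
       s.program, s.module, s.status status_sesion,
       s.sql_id, s.prev_sql_id, s.logon_time, s.blocking_session_status,
       t.status active, t.start_time
from v$session s, v$transaction t
where s.taddr = t.addr;
```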
ACTIVE
START_TIME              08/16/11 09:08:40
LOGON_TIME              8/16/2011 9:08:15
BLOCKING_SESSION_STATUS NO HOLDER
SCHEMANAME              FRAUDGUARD
MACHINE                 VOP-DPAFUMI
PROGRAM                 sqlplus.exe
MODULE                  SQL*Plus
SQL_TEXT
SERIAL#                 1489
SID
USERNAME                FRAUDGUARD
STATUS_SESION           INACTIVE
SQL_ID                  9babjv8yq8ru3
PREV_SQL_ID             9babjv8yq8ru3
empno    NUMBER(4),     -- datatypes reconstructed from the standard SCOTT demo schema
ename    VARCHAR2(10),
job      VARCHAR2(9),
mgr      NUMBER(4),
hiredate DATE,
sal      NUMBER(7,2),
comm     NUMBER(7,2),
deptno   NUMBER(2)
);
ALTER TABLE EMP ADD CONSTRAINT FK_EMP_DEPT
FOREIGN KEY (deptno)
REFERENCES dept (deptno);
Once this constraint is enabled, attempting to insert an "EMP" record with an invalid DEPTNO, or trying to delete a DEPTNO row that
has matching "EMP" records, will generate an error. However, in order to preserve integrity during the operation, Oracle needs to apply
a full "table-level" lock (as opposed to the usual row-level locks) to the child table when the parent table is modified.
Solution
By creating an index on the foreign key of the child table, these "table-level" locks can be avoided (for instance, creating an index
on "EMP.DEPTNO").
CREATE INDEX FK_EMP_DEPT
ON emp(deptno)
TABLESPACE indx;
Keep in mind that you will often be creating an index on the foreign keys in order to optimize joins and queries. However, if you fail to
create such a foreign key index and the parent table is subject to updates, you may see heavy lock contention. If ever in doubt, it's
often safer to create indexes on ALL foreign keys, despite the possible overhead of maintaining unneeded indexes.
Having unindexed foreign keys can be a performance issue. There are two issues associated with unindexed foreign keys. The first is
the fact that a table lock will result if you update the parent record's primary key (very, very unusual) or if you delete the parent record
and the child's foreign key is not indexed.
The second issue has to do with performance in general of a parent child relationship. Consider that if you have an on delete cascade
and have not indexed the child table (eg: EMP is child of DEPT. Delete deptno = 10 should cascade to EMP. If deptno in emp is not
indexed -- full table scan). This full scan is probably undesirable and if you delete many rows from the parent table, the child table will be
scanned once for each parent row deleted.
Also consider that for most (not all, most) parent child relationships, we query the objects from the 'master' table to the 'detail' table.
The glaring exception to this is a code table (short code to long description). For master/detail relationships, if you do not index the
foreign key, a full scan of the child table will result.
So, how do you easily discover if you have unindexed foreign keys in your schema? This script can help. When you run it, it will
generate a report such as:
SQL> @unindex

STAT TABLE_NAME                     COLUMNS              COLUMNS
---- ------------------------------ -------------------- --------------------
**** APPLICATION_INSTANCES          AI_APP_CODE
ok   EMP                            DEPTNO               DEPTNO
The **** in the first row shows me that I have an unindexed foreign key in the table APPLICATION_INSTANCES. The ok in the second
row shows me I have a table EMP with an indexed foreign key.
The script
column columns format a20 word_wrapped
column table_name format a30 word_wrapped
select decode( b.table_name, NULL, '****', 'ok' ) Status,
a.table_name, a.columns, b.columns
from
( select substr(a.table_name,1,30) table_name,
substr(a.constraint_name,1,30) constraint_name,
max(decode(position, 1,
substr(column_name,1,30),NULL)) ||
max(decode(position, 2,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 3,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 4,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 5,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 6,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 7,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 8,', '||substr(column_name,1,30),NULL)) ||
max(decode(position, 9,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,10,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,11,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,12,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,13,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,14,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,15,', '||substr(column_name,1,30),NULL)) ||
max(decode(position,16,', '||substr(column_name,1,30),NULL)) columns
from user_cons_columns a, user_constraints b
where a.constraint_name = b.constraint_name
and b.constraint_type = 'R'
group by substr(a.table_name,1,30), substr(a.constraint_name,1,30) ) a,
( select substr(table_name,1,30) table_name, substr(index_name,1,30) index_name,
max(decode(column_position, 1,
substr(column_name,1,30),NULL)) ||
max(decode(column_position, 2,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 3,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 4,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 5,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 6,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 7,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 8,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position, 9,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,10,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,11,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,12,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,13,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,14,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,15,', '||substr(column_name,1,30),NULL)) ||
max(decode(column_position,16,', '||substr(column_name,1,30),NULL)) columns
from user_ind_columns
group by substr(table_name,1,30), substr(index_name,1,30) ) b
where a.table_name = b.table_name (+)
and b.columns (+) like a.columns || '%'
/
Rebuild Indexes
When I first started doing DBA work, I was thrilled to find the "analyze index .... validate structure" command. This command puts
information about a specific index in the view index_stats. The problem with using this command is that while the analyze is running,
the index is locked. This prompted me to look for other signs that an index needs to be rebuilt. Now that I am working under a tight space
constraint, I've come back to that first "cool" command I learned.
This tells a lot about the index, but what interests me is the space the index is taking, what percentage of that is really being used, and
what space is unusable because of delete actions. Remember that when rows are deleted, the space is not re-used in the index. Let's
check one:
analyze index john.APPTHIST_CURR_STAT_FK validate structure;
select btree_space,pct_used,del_lf_rows_len from index_stats;
BTREE_SPACE   PCT_USED DEL_LF_ROWS_LEN
----------- ---------- ---------------
   19889296         43         5374551
So we see this index is only using 43 percent of the almost 19M allocated to it, and that it holds over 5M of space that it cannot use
because of deletes. This is a candidate for rebuild. Of course, we don't want to rebuild one at a time. You can use the following script to
rebuild all of them automatically:
set serveroutput on
declare
v_MaxHeight integer := 3;
v_MaxLeafsDeleted integer := 20;
v_Count integer := 0;
--Cursor to Manage NON-Partitioned Indexes
cursor cur_Global_Indexes is
select index_name, tablespace_name
from user_indexes
where partitioned = 'NO';
--Cursor to Manage Current Index
cursor cur_IndexStats is
select name, height, lf_rows as leafRows, del_lf_rows as leafRowsDeleted
from index_stats;
end if;
end if;
close cur_IndexStats;
end loop;
dbms_output.put_line('Local Indexes Rebuilt: ' || to_char(v_Count));
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL ;
end;
/
Make a Script
The drawback you will see when working with index_stats is that it only holds one row at a time. So we will first create a table to hold the results from this
view:
create table t_ind_used_size
(owner        varchar2(30)
,name         varchar2(30)
,btree_space  number(12)
,pct_used     number(3)
,del_len      number(12)
,dt           date
)
tablespace xxx
storage (initial 256k next 256k pctincrease 0) pctused 80 pctfree 0;
Now I know that what I want to do is to check each index for a given owner:
declare
v_stmt varchar2(100);
cursor c1 is
select owner,index_name from dba_indexes where owner = 'JOHN';
begin
for line in c1 loop
v_stmt := 'analyze index '||line.owner||'.'||line.index_name||
' validate structure';
execute immediate v_stmt;
insert into t_ind_used_size
(owner,name,btree_space,pct_used,del_len,dt)
select line.owner,name,btree_space,pct_used,del_lf_rows_len,sysdate
from index_stats;
if mod(c1%rowcount,100)=0 then
commit;
end if;
end loop;
commit;
end;
/
Our cursor gives us all of the indexes for this owner. You can also take the "where" clause off the cursor and get all indexes. For each index, we create
the analyze statement and then execute it using dynamic SQL. The results from this analyze statement are then put into our table for reference later.
Remember that the "analyze" will lock the index, so be sure to run this operation during off-hours. I have 372 indexes taking 2564M of space, and this
script takes 7:40 to complete. Not too bad.
Now Let's Use the Information
So we have gathered all of this information. We can just look at it to get an overview with:
variable block_size number;
begin
select value into :block_size from v$parameter where name = 'db_block_size';
end;
/
select * from t_ind_used_size
where btree_space > :block_size order by pct_used DESC;
Notice that I am only interested in the indexes that are taking more than one block of space. Any indexes currently taking one block cannot be improved,
no matter what percentage is being used. I have 176 rows from this query with the indexes making the least efficient use of space at the bottom of the
result set.
You will notice that we have the date in the table, too, so we can compare over time with the following:
select a.owner,a.name,a.dt,a.pct_used,b.dt,b.pct_used
from t_ind_used_size a, t_ind_used_size b
where a.owner = b.owner and a.name = b.name and a.pct_used>1.1*b.pct_used
and a.dt >= (b.dt - 7);
This would show us the indexes that have dropped their percent used by more than 10 percent during the last seven days. Here we are assuming that
you would run this periodically (daily or weekly).
I am not so much interested in the change over time as in the use of space right now. We want to reclaim space, so we will rebuild all of the indexes
that have too much unused space. For "too much," I have chosen indexes that are using less than 75 percent of the space they hold, or that have more
than one block of deleted space that is unusable.
My "where" clause for this is:
select count(1)
from t_ind_used_size a, dba_indexes b
where btree_space > :block_size
and (pct_used < 75 or del_len > :block_size)
and a.owner = b.owner and a.name = b.index_name
order by pct_used;
  COUNT(1)
----------
        46
I join my table with dba_indexes so I can get more information on how the index is created. We now have 46 indexes that are candidates for a rebuild.
For each index we want to: rebuild it, analyze it to get new statistics, and then remove it from the t_ind_used_size table so we don't do it again. It would
take forever to alter each one manually, so we make a script to do it for us:
select 'alter index '||a.owner||'.'||a.name||
' rebuild tablespace '||b.tablespace_name||chr(10)||
'storage (initial '||initial_extent||
' next '||next_extent||
' pctincrease '||pct_increase||') pctfree 0 nologging;'||chr(10)||
'analyze index '||a.owner||'.'||a.name||
' compute statistics;'||chr(10)||
'delete t_ind_used_size where name = '''||a.name||
''' and owner = '''||a.owner||''';'||chr(10)||
'commit;'
from t_ind_used_size a, dba_indexes b
where btree_space > :block_size
and (pct_used < 75 or del_len > :block_size)
and a.owner = b.owner and a.name = b.index_name
order by pct_used;
HINTS
You should first get the explain plan of your SQL and determine what changes can be done to make the code operate without
using hints if possible. However, Oracle hints such as ORDERED, LEADING, INDEX, FULL, and the various AJ and SJ Oracle hints
can tame a wild optimizer and give you optimal performance.
Some suggestions:
- Use ALIASES for the table names in the hints.
- Ensure tables contain up-to-date statistics.
- Syntax: /*+ HINT HINT ... */ (In PL/SQL the space between the '+' and the first letter of the hint is vital, so /*+ ALL_ROWS */ is
fine but /*+ALL_ROWS */ will cause problems.)
Here is a list of all the Hints:

Oracle Hint                Meaning
+                          Must be immediately after the comment indicator; tells Oracle this is a list of hints.
ALL_ROWS                   Use the cost based approach for best throughput.
CHOOSE                     Default; if statistics are available will use cost, if not, rule.
FIRST_ROWS                 Use the cost based approach for best response time.
RULE                       Use rules based approach; this cancels any other hints specified for this statement.

Access Method Oracle Hints:
CLUSTER(table)             Requests a cluster scan of the table.
FULL(table)                Requests a full table scan of the table.
HASH(table)                Requests a hash scan of the table.
HASH_AJ(table)             Requests a hash anti-join of the table.
ROWID(table)               Requests a rowid scan of the table.
FACT(table)                When performing a star transformation use the specified table as a fact table.
NO_FACT(table)             When performing a star transformation do not use the specified table as a fact table.
PUSH_SUBQ                  This causes nonmerged subqueries to be evaluated at the earliest possible point in the execution plan.
REWRITE(mview)             If possible forces the query to use the specified materialized view; if no materialized view is specified, the system chooses what it calculates is the appropriate view.
NOREWRITE                  Turns off query rewrite for the statement; use it when data returned must be current and can't come from a materialized view.
USE_CONCAT                 Forces combined OR conditions and IN processing in the WHERE clause to be transformed into a compound query using the UNION ALL set operator.
NO_MERGE(table)            This causes Oracle to join each specified table with another row source without a sort-merge join.
NO_EXPAND                  Prevents OR and IN processing expansion.

Oracle Hints for Join Operations:
USE_HASH(table)            This causes Oracle to join each specified table with another row source with a hash join.
USE_NL(table)              This operation forces a nested loop using the specified table as the controlling table.
USE_MERGE(table,[table,])  This operation forces a sort-merge-join operation of the specified tables.
DRIVING_SITE               The hint forces query execution to be done at a different site than that selected by Oracle. This hint can be used with either rule-based or cost-based optimization.
LEADING(table)             The hint causes Oracle to use the specified table as the first table in the join order.

Oracle Hints for Parallel Operations:
[NO]APPEND                 This specifies that data is to be or not to be appended to the end of a file rather than into existing free space. Use only with INSERT commands.
NOPARALLEL(table)          This specifies the operation is not to be done in parallel.
PARALLEL(table, instances) This specifies the operation is to be done in parallel.
PARALLEL_INDEX             Allows parallelization of a fast full index scan on any index.

Other Oracle Hints:
CACHE                      Specifies that the blocks retrieved for the table in the hint are placed at the most recently used end of the LRU list when the table is full table scanned.
NOCACHE                    Specifies that the blocks retrieved for the table in the hint are placed at the least recently used end of the LRU list when the table is full table scanned.
[NO]APPEND                 For insert operations will append (or not append) data at the HWM of table.
UNNEST                     Turns on the UNNEST_SUBQUERY option for the statement if the UNNEST_SUBQUERY parameter is set to FALSE.
NO_UNNEST                  Turns off the UNNEST_SUBQUERY option for the statement if the UNNEST_SUBQUERY parameter is set to TRUE.
PUSH_PRED                  Pushes the join predicate into the view.
ALL_ROWS:
This is the cost-based approach designed to provide the best overall throughput and minimum resource consumption. It's the default
option of Oracle
select /*+ ALL_ROWS */ COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
This example will usually execute NESTED LOOPS. The ALL_ROWS forces the optimizer to use a MERGE JOIN.
AND-EQUAL:
Causes merge scans of two to five single-column indexes:
CLUSTER:
Requests a cluster scan of the table_name:
/*+ CLUSTER(table) */
FIRST_ROWS:
This hint is the opposite of ALL_ROWS. It tells the the optimizer to return the rows as fast as it can, even if it needs to perform more
I/O operations:
select /*+ FIRST_ROWS */ COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID = 3
and SALES.Sales_Total > 1000;
FULL:
It performs a FULL ACCESS of the table. You may want to use it if you know that the distribution of the data is not good.
select /*+ FULL(COMPANY) */ Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';
HASH:
Causes a hash scan
/*+ HASH(table) */
INDEX(table_name index_name):
It can be used in 3 different ways:
1. If only one index is mentioned, it will use that index.
2. If you mention more than one index, the optimizer will decide which of them to use.
3. If you mention just a table, the optimizer will decide which index to use on that table.
select /*+ INDEX(COMPANY) */ Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';
INDEX_ASC(table_name index_name):
It will use the indicated index in ASC order.
INDEX_DESC(table_name index_name):
It will use the indicated index in DESC order.
NO_MERGE:
This hint is used in a view to prevent it from being merged into a parent query.
NOCACHE
This hint causes the table CACHE option to be bypassed.
ORDERED:
Requests that the tables should be joined in the order that they are specified (left to right). For example, if you know that a state table
has only 50 rows, you may want to use this hint to make state the driving table.
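For example (STATE and SALES here are hypothetical tables, with the small STATE table listed first so it drives the join):

```sql
select /*+ ORDERED */ st.state_name, s.sales_total
from state st, sales s
where st.state_id = s.state_id;
```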
ROWID:
Requests a rowid scan of the specified table.
RULE:
Indicates that the rule-based optimizer should be invoked (sometimes due to the absence of table statistics)
select /*+ RULE */ COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID = 3
and SALES.Sales_Total > 1000;
USE_MERGE:
It is the opposite of USE_NL. It tells the optimizer to use a MERGE JOIN between the tables specified.
select /*+ USE_MERGE(COMPANY, SALES) */ COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
*****************************************
**        Parallel Execution           **
**                                     **
**  Note: Oracle ignores parallel      **
**  hints on a temporary table.        **
*****************************************
/*+ APPEND */
/*+ NOAPPEND */
Specifies that data is simply appended (or not) to a table; existing free space is not used. Use these hints only following the INSERT
keyword.
/*+ NOPARALLEL(table) */
Disables parallel scanning of a table, even if the table was created with a PARALLEL clause.
/*+ PARALLEL(table) */
/*+ PARALLEL(table integer) */
Lets you specify parallel execution of DML and queries on the table; integer specifies the desired degree of parallelism, which is the
number of parallel threads that can be used for the operation. Each parallel thread may use one or two parallel execution servers. If
you do not specify integer, Oracle computes a value using the PARALLEL_THREADS_PER_CPU parameter. If no parallel hint is
specified, Oracle uses the existing degree of parallelism for the table.
DELETE, INSERT, and UPDATE operations are considered for parallelization only if the session is in a PARALLEL DML enabled
mode. (Use ALTER SESSION ENABLE PARALLEL DML to enter this mode.)
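A minimal sketch of a parallel direct-path insert (the table names are illustrative):

```sql
-- Parallel DML must be enabled at the session level first.
alter session enable parallel dml;

-- Direct-path (APPEND) insert with a requested degree of parallelism of 4.
insert /*+ APPEND PARALLEL(sales_hist, 4) */ into sales_hist
select * from sales;

commit;
```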
NOLOGGING Option
PDFmyURL.com
The NOLOGGING clause only affects direct-path INSERT and Direct Loader (SQL*Loader); all other DML (insert/update/delete) is logged to
the redo logs. Regular DML statements are always logged, so you should be able to recover them even if the table mode is NOLOGGING.
Although you can set the NOLOGGING attribute for a table, partition, index, or tablespace, NOLOGGING mode does not apply to every
operation performed on the schema object for which you set the NOLOGGING attribute. Only the following operations can make use of the
NOLOGGING option:
alter table...move partition
alter table...split partition
alter index...split partition
alter index...rebuild
alter index...rebuild partition
create table...as select
create index
direct load with SQL*Loader
direct-path INSERT (inserts with the APPEND option)
All of these SQL statements can be parallelized. They can execute in LOGGING or NOLOGGING mode for both serial and parallel execution.
Other SQL statements (such as UPDATE, DELETE, conventional path INSERT, and various DDL statements not listed above) are
unaffected by the NOLOGGING attribute of the schema object." NOLOGGING is used mainly for SQL-LOADER and DIRECT-INSERTS. If you
are not performing either of these (or those mentioned above) then the operation you perform WILL be logged.
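For example (the table names are illustrative), a CREATE TABLE ... AS SELECT against a NOLOGGING target generates minimal redo, while later conventional DML on it is still fully logged:

```sql
-- The CTAS itself is a NOLOGGING operation: minimal redo is generated.
create table sales_hist nologging
as select * from sales;

-- This conventional insert IS logged, despite the table's NOLOGGING attribute.
insert into sales_hist select * from sales where period_id = 4;
commit;
```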
If you performed any of those operations you should backup your database ASAP.
If you performed any of those operations the steps to recover a standby database would be:
1. Stop recovery on the standby.
2. Put the datafile in backup mode, back it up, and ftp the file to the standby host (in binary mode).
3. Put the Standby in Managed Recovery Mode:
On the Standby:
SQL> alter database recover managed standby database disconnect;
if you use RMAN:
1. Stop recovery on the standby.
2. Connect to the target and standby:
rman target / auxiliary sys/change_on_install@standby
3. Restore and recover the file with something like this:
run {
set newname for datafile 8 to "/u03/mpolaski/oradata/users01.dbf";
restore datafile 8;
set until time 'Oct 24 2000 08:00:00';
recover standby clone database;
}
-- FOR TABLESPACES
set heading off
set feedback off
set pagesize 200
spool tablespace_logging.sql
select 'alter tablespace ' || tablespace_name || ' logging;'
from dba_tablespaces
where logging = 'NOLOGGING';
spool off
@tablespace_logging
CBO Options
opt imizer_index_cost _adj
This is the most important parameter of all, and the default setting of 100 is incorrect for most Oracle systems. For OLTP systems, resetting this parameter to a smaller value (between 10 and 30) may result in huge performance gains!
If you are having slow performance because the CBO first_rows optimizer mode is favoring too many full-table scans, you can reset the
optimizer_index_cost_adj parameter to immediately tune all of the SQL in your database to favor index scans over full-table scans. This is a
"silver bullet" that can improve the performance of an entire database in cases where the database is OLTP and you have verified that the
Some results I have obtained from various combinations of hardware platform and IO sub-system.
avg. wait time              avg. wait time             new setting for
(db file sequential read)   (db file scattered read)   optimizer_index_cost_adj
-------------------------   ------------------------   ------------------------
.171659257                  3.33033582                   5
.13254                      1.12365                     12
.017605522                  .104148241                  17
1.29639067                  2.06954043                  63
.535133533                  .397919802                 134
.940889054                  .509830001                 185
.537904057                  .145183814                 370
In real life, this metric is only good enough to give a very rough indicator as to how fast the IO sub-system is. New-value settings below 100
indicate slow disks, anything above 100 might indicate the presence of fast or cache-backed disks (or abuse of the UNIX file system cache).
You have to exaggerate these results for it to have any real influence on the CBO. For example, if the above query suggests a new setting
of 63%, you may have to go as low as 1% or 2% before the CBO will actually use an index. Conversely, a suggestion of 370% may need to
be bumped up to around 3700% before a full-table or index fast-full scan is favoured.
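One way to compute this metric, assuming access to v$system_event, is to express the average single-block read wait ('db file sequential read') as a percentage of the average multiblock read wait ('db file scattered read') — this reproduces the suggested settings in the table above:

```sql
-- Hedged sketch: derive a starting optimizer_index_cost_adj value from
-- the relative cost of index (single-block) vs full-scan (multiblock)
-- reads. The ratio is unit-free, so the centisecond units cancel out.
select a.average_wait                                   seq_read_wait,
       b.average_wait                                   scat_read_wait,
       round(100 * (a.average_wait / b.average_wait))   suggested_oica
from   v$system_event a,
       v$system_event b
where  a.event = 'db file sequential read'
and    b.event = 'db file scattered read';
```

Remember the caveat above: treat the result only as a rough indicator, and exaggerate it before expecting the CBO to change plans.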
Optimizer Modes
In Oracle there are four optimizer modes, all determined by the value of the optimizer_mode parameter. The values are rule, choose,
all_rows and first_rows. The rule and choose modes reflect the obsolete rule-based optimizer so we will focus on the CBO modes.
The optimizer mode can be set at the system-wide level, for an individual session, or for a specific SQL statement:
alter system set optimizer_mode=first_rows_10;
alter session set optimizer_goal = all_rows;
select /*+ first_rows(100) */ * from student;
Oracle offers several optimizer modes that allow you to choose your definition of the best execution plan for you:
optimizer_mode=first_rows This is a cost-based optimizer mode that will return rows as soon as possible, even if the overall query
runs longer or consumes more computing resources than other plans. The first_rows optimizer_mode usually involves choosing an
index scan over a full-table scan because index access will return rows quickly. Since the first_rows mode favors index scans over
full-table scans, it is more appropriate for OLTP systems where the end-user needs to see small result sets as quickly
as possible.
optimizer_mode=all_rows This is a cost-based optimizer mode that ensures that the overall computing resources are minimized,
even if no rows are available until the entire query has completed. The all_rows access method often favors a parallel full-table scan
over a full-index scan, and sorting over pre-sorted retrieval via an index. Because the all_rows mode favors full-table scans, it is best
suited for Data Warehouse, decision support systems and batch-oriented databases where intermediate rows are not required for
real-time viewing.
optimizer_mode=first_rows_n This is an Oracle9i optimizer mode enhancement that optimizes queries for a small expected return
set. The values are first_rows_1, first_rows_10, first_rows_100 and first_rows_1000. The CBO uses the 'n' in first_rows_n as an
important driver in determining cardinalities for query result sets. By telling the CBO, a priori, that we only expect a certain number of
rows back from the query, the CBO will be able to make a better decision about whether to use an index to access the table rows.
While the optimizer_mode is the single most important factor in invoking the cost-based optimizer, there are other parameters that
influence the CBO behavior.
Using histograms with the CBO
In some cases, the distribution of values within an index will affect the CBO's decision to use an index vs. perform a full-table scan. This
happens when a value in the where clause matches a disproportionate number of rows, making a full-table scan cheaper than index access.
A column histogram should only be created when we have a highly-skewed column, where some values have a disproportionate number of
rows. In the real world, this is quite rare, and one of the most common mistakes with the CBO is the unnecessary introduction of histograms
into the CBO statistics. A histogram signals the CBO that the column is not linearly distributed, and the CBO will peek at the literal value
in the SQL where clause and compare that value to the buckets in the histogram statistics.
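The buckets the CBO compares against are visible in the dictionary; a sketch assuming a SCOTT.PRODUCT table (the owner and table name are illustrative, not from the original text):

```sql
-- Inspect the histogram buckets recorded for each column of a table.
-- A column with only two endpoint rows has plain (min/max) statistics;
-- more rows indicate a real histogram.
select column_name, endpoint_number, endpoint_value
from   dba_tab_histograms
where  owner = 'SCOTT'
and    table_name = 'PRODUCT'
order  by column_name, endpoint_number;
```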
As a general rule, histograms are used to predict the cardinality and the number of rows returned in the result set. For example, assume that
we have a product_type index and 70% of the values are for the HARDWARE type. Whenever SQL with where product_type='HARDWARE'
is specified, a full-table scan is the fastest execution plan, while a query with where product_type='SOFTWARE' would be fastest using
index access.
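A histogram on such a skewed column could be built with a call along these lines (the schema, table name and bucket count are assumptions for illustration, not from the original text):

```sql
-- Gather statistics with a histogram on just the skewed product_type
-- column; 'size 254' requests up to the maximum number of buckets.
begin
   dbms_stats.gather_table_stats(
      ownname    => 'SCOTT',
      tabname    => 'PRODUCT',
      method_opt => 'for columns product_type size 254'
   );
end;
/
```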
Because histograms add additional overhead to the parsing phase of SQL, they should be avoided unless they are required for a faster CBO
execution plan.
So how do we find those columns that are appropriate for histograms? One exciting feature of dbms_stats is the ability to automatically
look for columns that should have histograms, and create the histograms. Again, remember that multi-bucket histograms add a huge
parsing overhead to SQL statements, and histograms should ONLY be used when the SQL will choose a different execution plan based
upon the column value.
To aid in intelligent histogram generation, Oracle uses the method_opt parameter of dbms_stats. There are also important new options
within the method_opt clause, namely skewonly, repeat and auto.
method_opt=>'for all columns size skewonly'
method_opt=>'for all columns size repeat'
method_opt=>'for all columns size auto'
Let's take a closer look at each method option.
The first is the skewonly option, which is very time-intensive because it examines the distribution of values for every column within every
index. If dbms_stats discovers an index whose columns are unevenly distributed, it will create histograms for that index to aid the
cost-based SQL optimizer in making a decision about index vs. full-table scan access. For example, if an index has one column value that
appears in 50% of the rows, a full-table scan is faster than an index scan to retrieve those rows.
Histograms are also used with SQL that has bind variables and SQL with cursor_sharing enabled. In these cases, the CBO determines if the
column value could affect the execution plan, and if so, replaces the bind variable with a literal and performs a hard parse.
begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size skewonly',
      degree           => 7
   );
end;
/
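The repeat option, listed above but not otherwise shown, re-gathers histograms only for columns that already have one, leaving the remaining columns with plain statistics; a sketch of the call, in the same form as the example above:

```sql
-- 'size repeat' refreshes existing histograms only, so it avoids the
-- full-distribution scan that skewonly performs on every column.
begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size repeat',
      degree           => 7
   );
end;
/
```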
The auto option is used when monitoring is implemented (alter table xxx monitoring;) and creates histograms based upon data distribution
and the manner in which the column is accessed by the application (e.g. the workload on the column as determined by monitoring). Using
method_opt=>'for all columns size auto' is similar to using 'gather auto' in the options parameter of dbms_stats.
begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size auto',
      degree           => 7
   );
end;
/
The LOG_PARALLELISM default value in 10g is 2. You can increase it to 8 or 12 so that different concurrent sessions use
different log buffers. This will increase transaction throughput.
Tune tnsnames.ora file. Consider adding SDU/TDU parameters.
Divide the buffer cache into 2 parts: keep and default. Use "keep" for indexes and "default" for table data.
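A minimal sketch of that split, assuming a 9i-style dynamic SGA (the cache size and index name are illustrative assumptions):

```sql
-- Carve out a KEEP pool alongside the DEFAULT pool, then assign an
-- index to it; table data stays in the DEFAULT pool unless reassigned.
alter system set db_keep_cache_size = 200M;
alter index scott.emp_pk storage (buffer_pool keep);
```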