AIX
Performance Tuning
Customer Technical Session
Virtual Processors
Capped vs uncapped
• Capped: CPU capacity limited to the desired setting.
• Uncapped: CPU capacity limited by unused capacity in the shared 'pool'; cannot exceed the number of virtual processors (not related to maximum processing units).
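As a rough model (a sketch, not an AIX API — the helper name and the pool accounting are assumptions), the consumption cap works out as:

```python
def max_consumable(entitlement, virtual_procs, pool_spare, capped):
    """Upper bound on physical processor consumption for a micro-partition.

    Capped: never more than the entitled capacity.
    Uncapped: entitlement plus unused pool capacity, but never more than
    the number of online virtual processors (each VP can consume at most
    one physical processor's worth of capacity).
    """
    if capped:
        return entitlement
    return min(virtual_procs, entitlement + pool_spare)

# A capped partition with 0.4 entitlement can never exceed 0.4.
print(max_consumable(0.4, 2, 5.0, capped=True))    # 0.4
# Uncapped with 2 VPs is limited to 2 even if the pool has more spare.
print(max_consumable(0.4, 2, 5.0, capped=False))   # 2
```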
[Diagram: five example partitions — AIX 5.2 with 2 dedicated CPUs, AIX 5.3 with 1 dedicated CPU, and three AIX 5.3 micro-partitions with 2.1, 0.8, and 1.2 processing units drawing virtual processors from a 13-CPU shared processor pool.]
SPLPAR Summary
Shared Processor concepts
• Partitions run on a Virtual Processor (VP).
• A VP runs on Physical Processors (PP) only part of the time.
• A VP has one or two logical processors depending on the SMT state.
• Minimum size of a partition is 0.1 processing units, with increments of 1/100th of a processing unit.
• Dispatch Wheel (10 ms): each physical CPU offers 100 units per dispatch window.
[Diagram: four LPARs dispatched on one physical processor over a 10 ms dispatch window — LPAR 1: entitlement .2 / 1 VP; LPAR 2: .2 / 1; LPAR 3: .1 / 1; LPAR 4: .5 / 1. Each VP keeps its own virtual timebase.]
[Diagram: two AIX 5.3 LPARs with different numbers of virtual processors but the same amount of processing units — 1.6 processing units spread over 4 VPs gives each virtual processor 0.4 processing units; 1.6 processing units over 2 VPs gives each 0.8.]
[Diagram: the same two LPARs with excess processing unit capacity available — 4.0 processing units over 4 VPs and 2.0 over 2 VPs — each virtual processor receives 1.0 processing units, a full CPU's worth.]
In the presence of excess processing units, virtual processors receive the same amount of processing units.
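The distribution above can be sketched as a simple even-spread model (hypothetical helper, not an AIX interface):

```python
def units_per_vp(processing_units, num_vps):
    """Spread a partition's processing units evenly across its virtual
    processors; a single VP can hold at most 1.0 processing units
    (one full CPU's worth)."""
    return min(1.0, processing_units / num_vps)

print(units_per_vp(1.6, 4))  # 0.4 processing units per VP
print(units_per_vp(1.6, 2))  # 0.8
print(units_per_vp(4.0, 4))  # 1.0 — capped at a full CPU each
print(units_per_vp(2.0, 2))  # 1.0
```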
The goal is to match Users and Donors so that the planned overall
shared processing pool CPU utilization does not exceed 100%.
© 2008 IBM Corporation
Advanced Technical Support, Americas
Deployment Choices
No information on application behavior and utilization of resources?
– Use dedicated processors
• Minimize risk, but excess capacity is unused
• Collect performance data to determine suitability for moving to micro-partitions
– Use shared processors
• Allocate entitlement liberally and uncap until resource behavior is known
Mixed applications, variable behavior
– Size to known peaks
• Enough application, benchmark, or local performance information to model expected behavior
• Size each to a micro-partition; allocate extra shared pool and memory resources
– Collect performance data to validate the model; free shared pool and memory allocation to optimize
Well-defined applications
– Detailed application knowledge allows partitions to be individually over-committed (they don't conflict for shared resources)
– Ideal usage of resources
mpstat Example — CPU user & sys values are relative to physical consumed
# mpstat -s
Physical processor / virtual processor busy: with SMT enabled, each physical processor has two logical processors.
Shows the logical CPU number and the overall busy percentage, which is the sum of user + system mode utilization. Gives the relative SMT split between processors.
lparstat Review
# lparstat -h 1 4
System configuration: type=Shared mode=Capped smt=On lcpu=4 mem=4096 psize=2 ent=0.40
%user %sys %wait %idle physc %entc lbusy app vcsw phint %hypv hcalls
----- ---- ----- ----- ----- ----- ------ --- ---- ----- ----- ------
 84.9  2.0   0.2  12.9  0.40  99.9   27.5 1.59  521     2  13.5   2093
 86.5  0.3   0.0  13.1  0.40  99.9   25.0 1.59  518     1  13.1    490
(%hypv and hcalls are additional information shown only when the -h flag is specified.)
%user / %sys / %wait / %idle — Percentage of the entitled processing capacity used. So you would say this system is consuming 86.9% (84.9 + 2.0) of four-tenths of a physical processor. For dedicated partitions, the entitled capacity = # of physical processors.
physc — Number of physical processors consumed. For a capped partition this number will not exceed the entitled capacity. For an uncapped partition this number could reach the number of processors in the shared pool; however, it may be limited by the number of online virtual processors.
%entc — Percentage of entitled capacity consumed. For a capped partition the percentage will not exceed 100%; for uncapped partitions the percentage can exceed 100%.
lbusy — Percentage of logical processor utilization that occurs while executing in user and system mode. Note: in this example we're using approximately 25% of the logical processors. This is the "traditional" measure of CPU utilization using time-based sampling. As this value approaches 100%, it may indicate that the partition could make use of additional VPs.
app — Number of available processors in the shared pool (shared mode only). The shared pool 'psize' is 2 processors. Requires 'Allow shared processor pool utilization authority': view the "properties" for a partition, click the Hardware tab, then Processors and Memory.
vcsw — Number of virtual context switches.
phint — Number of phantom interrupts. A phantom interrupt is an interrupt that belongs to another shared partition.
%hypv / hcalls — Percentage of time spent in the hypervisor and the number of hypervisor calls.
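The column arithmetic can be checked in a few lines (values taken from the sample output above):

```python
# Relationship among the lparstat columns for the capped example
# (ent=0.40): physc = ent * %entc / 100, and %user + %sys is the share
# of that entitled capacity doing useful work.
ent = 0.40
pct_entc = 99.9
physc = ent * pct_entc / 100
print(round(physc, 2))          # 0.4 physical processors consumed

pct_user, pct_sys = 84.9, 2.0
busy_of_entitlement = pct_user + pct_sys
print(busy_of_entitlement)      # 86.9 (% of the entitlement that is busy)
```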
------------------------------------dedicated-----------------------------------
ptoolsl1 A53 S 4.1 0.5 4 20 10 0 70 0.60
ptoolsl4 A53 4.1 0.5 2 100 0 0 0 2.00
ptoolsl6 A52 4.1 0.5 1 5 5 12 88 0.10
• M – system mode
• c means capped, C – capped with SMT
• u means uncapped, U – uncapped with SMT
• S means SMT
* Can adjust cpu cycles required for other processors by computing a ratio between processor speeds
Virtual Ethernet
Packets are transferred in memory between partitions on the same server
– Higher throughput than physical Ethernet
– Physical devices do not support MTU 65394
Throughput linearly scales with processor entitlements
– MTU 9000 is 3X MTU 1500
– MTU 65394 is 7X MTU 9000
– Try to use the highest MTU
No unique TCP/IP tunables methodology
TCP Checksum Offloading
– Because virtual network does not suffer from physical network link
errors, checksums do not need to be generated (this is the default in
later AIX 5.3 levels)
• # chdev –l <device> -a chksum_offload=yes
Shared Ethernet
Heavy network load, use same sizings as dedicated systems
– MTU 1500, 1 CPU
– MTU 9000, 0.5 CPU
Shared processors
– Shared processors can result in higher latency, decreasing throughput
– For bursty network loads, use uncapped and allow for more entitlement than would
be allocated for a dedicated partition hosting the same application
Tools
• lsattr –El en#
• topas
• entstat, netstat
• seastat
• nmon
– Tool from Nigel Griffiths; simplifies output, provides intervals
– http://www-941.ibm.com/collaboration/wiki/display/WikiPtype/nmon
Whenever there is a VIO Client/Server issue, check if there is a CPU constraint
first
– Add entitlement (shared), uncap or increase CPUs (dedicated)
– Use larger MTU sizes if possible
[Table residue: lrud page-steal examples (hooks 90002–90008) — computational vs. non-computational pages across working, persistent (JFS), and client (JFS2) segments, their reference and modified bits, and the resulting action (set ref bit to N, write to page space, candidate to steal, write to JFS/JFS2).]
numclient — Number of client pages in memory.
numperm — Number of non-computational (persistent + client) pages in memory.
maxperm% — Configured maximum number of non-computational pages in memory. Enforcement is controlled by strict_maxperm and lrud.
maxclient% — Configured maximum number of client pages in memory. Client pages are a sub-set of non-computational pages; this is why maxclient% <= maxperm%. Enforcement is controlled by strict_maxclient and lrud.
strict_maxperm & strict_maxclient — Set hard or soft enforcement of file system cache limits. When memory is available, soft enforcement will allow memory utilization to grow beyond the configured limit.
[Diagram: memory split into working segments (computational, including text) and persistent/client segments (non-computational file cache), bounded below by minperm%.]
# vmstat -v
233472 memory pages
197128 lruable pages
5201 free pages
0 memory pools
53534 pinned pages
80.0 maxpin percentage
20.0 minperm percentage
80.0 maxperm percentage
36.5 numperm percentage
72058 file pages
0.0 compressed percentage
0 compressed pages
39.3 numclient percentage
80.0 maxclient percentage
77641 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2740 filesystem I/Os blocked with no fsbuf
200 client filesystem I/Os blocked with no fsbuf
0 external pager filesystem I/Os blocked with no fsbuf
These VMM counters provide a snapshot of memory used for file cache.
kthr:
b — The number of threads blocked waiting for a file system I/O operation to complete.
p — The number of threads blocked waiting for a raw device I/O operation to complete.
Memory:
avm — The number of active virtual memory pages, which represents computational memory requirements. The maximum avm number divided by the number of real memory frames equals the computational memory requirement.
fre — The number of frames of memory on the free list. Note: a frame refers to physical memory, whereas a page refers to virtual memory.
Page:
fi / fo — File pages in and file pages out per second, which represents I/O to and from a file system.
pi / po — Page-space page-ins and page-outs per second, which represents paging.
fr / sr — The number of pages scanned ('sr') and the number of pages stolen, or freed ('fr'). The ratio of scanned to freed represents relative memory activity: the VMM had to scan 'sr' pages to steal 'fr' pages. The ratio starts at 1 and increases as memory contention increases. Note: interrupts are disabled at times when 'lrud' is running.
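A quick sketch of the avm and fr/sr arithmetic (the peak avm and the scan/free counts here are assumed example values, not taken from the output above):

```python
# The computational memory requirement is peak avm over real frames,
# and the sr:fr ratio shows how hard lrud had to scan to free pages.
real_frames = 233472          # 'memory pages' from the vmstat -v sample
max_avm = 150000              # assumed peak of the vmstat 'avm' column

computational_pct = 100.0 * max_avm / real_frames
print(f"computational: {computational_pct:.1f}%")   # computational: 64.2%

scanned, freed = 4000, 1000   # assumed sr and fr deltas over an interval
ratio = scanned / freed
print(f"scan:free ratio = {ratio:.1f}")  # 4.0 -> rising memory contention
```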
In addition, the system has been configured to use large pages. A total of 160 MB (10 16MB large pages), which are all free.
...............................................................................
SYSTEM segments Inuse Pin Pgsp Virtual
 13347 7334 0 13347
(Segments used by the system and shared by all processes.)
...............................................................................
SHARED segments Inuse Pin Pgsp Virtual
 12684 0 0 12656
(Segments shared by other users.)
-------------------------------------------------------------------------------
Pid Command Inuse Pin Pgsp Virtual 64-bit Mthrd LPage
401636 memget 34758 4798 0 34754 N N N
-------------------------------------------------------------------------------
Pid Command Inuse Pin Pgsp Virtual 64-bit Mthrd LPage
401636 memget 26932 4798 3562 34754 N N N
The ipcs command reports information about inter-process communication facilities, which include shared memory, semaphores, and message queues. The following command limits the report to shared memory segments only.
# ipcs -bmS
IPC status from /dev/mem as of Wed Aug 10 16:20:33 EDT 2005
T ID KEY MODE OWNER GROUP SEGSZ
Shared Memory:
m 4 0x080d3d74 --rw-rw-rw- db2inst1 db2grp1 140665792 (SID: 0x1e375)
m 5 0x080d3d61 --rw------- db2inst1 db2grp1 22855680 (SID: 0x3448)
m 6 0xffffffff --rw------- db2fenc1 db2fgrp1 245284864 (SID: 0x440f)
m 7 0x080d3e68 --rw-rw---- db2inst1 db2grp1 58720256 (SID: 0x18473)
m 282066952 0x0d000ada --rw-rw-rw- root system 1440 (SID: 0xf444)
Notes:
• SEGSZ is the requested size of the shared memory segment. Real memory is not allocated until the process/thread actually uses it: when an application requests memory it receives a "pointer" and a "promise" from the kernel. Use svmon to determine how much is currently in use.
• 140665792 / 4096 ≈ 34342 pages.
• The segment ID (SID) can be used with `svmon -lS`, which gives information about the segment and a listing of the attached processes.
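The byte-to-page conversion for the first segment above, as a quick check (standard 4 KB page size assumed):

```python
import math

# SEGSZ from ipcs is in bytes; svmon reports 4 KB pages. The first DB2
# segment above therefore spans about 34,343 pages when rounded up.
segsz = 140665792
page_size = 4096
pages = math.ceil(segsz / page_size)
print(pages)   # 34343 pages reserved; svmon 'Inuse' shows how many are real
```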
Only one pool is shown here, but look for large disparity in pages (NB_PAGES) or free frames (NUMFRB) across pools.
====================================================|==========|===========
Memory Overview | Pages | Megabytes
----------------------------------------------------|----------|-----------
Total memory in system | 524288 | 2048.00
Total memory in use | 518325 | 2024.70
Free memory | 5963 | 23.29
====================================================|==========|===========
Segment Overview | Pages | Megabytes
----------------------------------------------------|---------|-----------
Total segment id mempgs | 486394 | 1899.97
Total fork tree segment pages | 0 | 0.00
Total kernel segment id mempgs | 197484 | 771.42
jfs segment | 32 | 0.12
kernel heap | 134437 | 525.14
kernel segment | 17037 | 66.55
lfs segment | 656 | 2.56
lock instrumentation | 0 | 0.00
mbuf pool | 28128 | 109.87
mpdata debug | 1024 | 4.00
other kernel segments | 7278 | 28.42
page space disk map | 16 | 0.06
page table area | 1237 | 4.83
process and thread tables | 80 | 0.31
vmm ame segment | 16 | 0.06
vmm data segment | 560 | 2.18
….
vmm vmintervals | 16 | 0.06
miscellaneous kernel segs | 3998 | 15.61
Total kernel mem w/ no segment id (wlm_hw_pages) | 31676 | 123.73
RMALLOC | 9 | 0.03
SW_PFT | 12288 | 48.00
PVT | 1024 | 4.00
PVLIST | 16384 | 64.00
RTAS_HEAP | 2396 | 9.35
----------------------------- | |
Total | 32101 | 125.39
===========================================================================
Detailed Memory Components | Pages | Megabytes
----------------------------------------------------|----------|-----------
Light Weight Trace memory | 4092 | 15.98
LVM Memory | 928 | 3.62
Total Kernel Heap memory | 134439 | 525.15
JFS2 total non-file memory | 542 | 2.11
metadata_cache | 78 | 0.30
inode_cache | 272 | 1.06
fs bufstructs | 140 | 0.54
misc jfs2 | 52 | 0.20
misc kernel heap | 133897 | 523.03
Total file memory | 228385 | 892.12
Total clnt (JFS2, NFS,...) file memory | 0 | 0.00
Total pers (JFS) memory | 228385 | 892.12
Total text memory | 9863 | 38.52
Total clnt text memory | 0 | 0.00
Total pers text memory | 9863 | 38.52
User memory | |
USER: root | |
total process private memory | 16292 | 63.64
total shared memory | 1543 | 6.02
working (shared w/ other users) | 18674 | 72.94
working (exclusive to user) | 29450 | 115.03
shared memory (exclusive to user) | 5 | 0.01
shared memory (shared w/ other users) | 1538 | 6.00
shlib text (shared w/ other users) | 17136 | 66.93
shlib text (exclusive to user) | 928 | 3.62
file pages | 1588 | 6.20
file pages (exclusive to user) | 1588 | 6.20
file pages (shared w/ other users) | 78 | 0
===========================================================================
Memory accounting summary | 4K Pages | Megabytes
----------------------------------------------------|----------|-----------
Total memory in system | 524288 | 2048.00
Total memory in use | 518325 | 2024.70
Kernel identified memory (segids,wlm_hw_pages) | 225162 | 879.53
Kernel un-identified memory | 3998 | 15.61
Fork tree pages | 0 | 0.00
Large Page Pool free pages | 0 | 0.00
Huge Page Pool free pages | 0 | 0.00
User private memory | 18548 | 72.45
User shared memory | 1543 | 6.02
User shared library text memory | 18064 | 70.56
Text memory | 9863 | 38.52
File memory | 228385 | 892.12
User un-identified memory | 12762 | 49.85
---------------------- | |
Total accounted in-use | 518325 | 2024.70
Free memory | 5963 | 23.29
---------------------- | |
Total identified (total ident.+free) | 507528 | 1982.53
Total unidentified (kernel+user w/ segids) | 16760 | 65.46
---------------------- | |
Total accounted | 524288 | 2048.00
Total unaccounted | 255 | 0.99
minfree and maxfree on AIX 5.3 are now applied to each memory pool. With AIX 5.3,
total free list = minfree * # of memory pools.
In earlier releases of AIX (5.1 and 5.2), minfree was divided by the number of memory pools, so that the total free list (determined by adding minfree for *each* memory pool) equaled the vmo/vmtune value of minfree.
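The release difference can be expressed as a small sketch (hypothetical helper; the version comparison is a simplified string check that only covers the releases named above):

```python
def total_minfree(minfree, memory_pools, aix_level="5.3"):
    """Total free-list target across all memory pools.

    AIX 5.3 applies the minfree tunable to each memory pool, so the
    system-wide target grows with the pool count. AIX 5.1/5.2 divided
    minfree across pools, keeping the total at the vmo/vmtune value.
    """
    if aix_level >= "5.3":
        return minfree * memory_pools
    return minfree  # per-pool share is minfree / memory_pools

print(total_minfree(960, 4, "5.3"))  # 3840 frames total on AIX 5.3
print(total_minfree(960, 4, "5.2"))  # 960 frames total on AIX 5.2
```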
Where Max Read Ahead = max(maxpgahead, j2_maxPageReadAhead).
64KB Pages
64K pages are intended to be general purpose.
64K pages will be automatically managed by the kernel.
– Automatically used by the kernel and shared library text regions
– Fully pageable
– Size of the 64K page pool is dynamically adjusted and managed by the kernel.
– The kernel will vary the number of 4K and 64K pages to meet system demand
It is expected that many applications will see performance benefits when using 64K pages
rather than 4K pages.
Performance Monitoring commands have been updated to reflect and report on memory
usage by page size.
64K pages can be used for data, stack, and text regions via the LDR_CNTRL environment variable or by modifying an application's XCOFF binary.
– Data (-bdatapsize / DATAPSIZE)
• Example: ldedit -bdatapsize=64K [binary]
• Example: LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K
– Stack (-bstackpsize / STACKPSIZE)
– Text (-btextpsize / TEXTPSIZE)
(Note: the environment variable overrides the XCOFF setting.)
64K Pages can be used for shared memory regions; however, application code must be
modified.
Reference Guide to Multiple Page Size Support for more detail
– http://www-03.ibm.com/servers/aix/whitepapers/multiple_page.pdf
Tuning Memory
Memory Model Tuning
– %Computational < 80%: large memory model. Goal is to adjust tuning parameters to prevent paging.
• Multiple memory pools
• Page space smaller than memory
• Must tune key VMM parameters (lru_file_repage)
– %Computational > 80%: small memory model. Goal is to make paging as efficient as possible.
• Add multiple page spaces on different spindles
• Make all page spaces the same size to ensure round-robin scheduling
• PS = 1.5 x computational requirements (smaller ratios for systems greater than 16 GB)
– No changes required in AIX 6.1
Check for unbalanced memory pools
– Use KDB "mempool *" to check the number of frames; update to the latest APAR levels
Application memory adjustments
– Consider alternate page sizes
– Reduce SGA, pinned allocations, and other application-specific memory tunings
Implement VMM-related mount options to reduce cache needs
– Use DIO / CIO
– Release-behind on read and/or write
Add additional memory
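The page-space rule of thumb above can be sketched as follows (the reduced ratio for large systems is an illustrative assumption — the slide only says "smaller ratios" above 16 GB):

```python
def page_space_size(computational_gb, total_mem_gb):
    """Rule-of-thumb page space sizing: PS = 1.5 x the computational
    requirement, with a smaller ratio once memory exceeds 16 GB
    (the 1.0 ratio here is an assumed example, not a fixed rule)."""
    ratio = 1.5 if total_mem_gb <= 16 else 1.0
    return computational_gb * ratio

print(page_space_size(8, 16))   # 12.0 GB of page space
print(page_space_size(24, 64))  # 24.0 GB (reduced ratio on a large system)
```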
Generic IO Tuning
• Device driver queues exist for both adapters and disks
• Adapter device drivers use DMA for IO
• Disk subsystems (optional) have read and write cache
• Disks have memory to store commands/data
• Write cache: the ack is sent back to the application once the data is in cache
• i-node locking: decrease file sizes or use the cio mount option if possible
• Disable interrupts
For sequential IO, are IO rates not near the disks' capability?
• For reads, FS or disk subsystem "read ahead" is insufficient or inhibited
• IOs are queuing somewhere in the IO stack due to a bottleneck
• We expect to have a bottleneck somewhere in the IO stack, since we're pushing the data through as fast as possible
Random vs Sequential IO
Know your IO patterns
Remember iostat, filemon, application, and DB metrics
Random IO
• Spread your IOs across the disks to balance the IO load
Sequential IO
• Use the lspv, lslv, fileplace, and filemon commands to determine how localized the IO is
Assume 4 ms seeks, 8 ms latency, and transfer rates of 7 MB/s for older SSA disks (this ignores some internal disk factors; newer disks get better performance).
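Those assumptions translate into a per-IO service time and a random-IOPS ceiling (a back-of-the-envelope sketch, ignoring queuing and caching):

```python
# Rough single-disk random IOPS for the slide's assumptions: 4 ms seek,
# 8 ms rotational latency, 7 MB/s media transfer rate, 4 KB IOs.
seek_ms, latency_ms = 4.0, 8.0
xfer_ms = 4096 / (7 * 1024 * 1024) * 1000   # ~0.56 ms to move 4 KB

service_ms = seek_ms + latency_ms + xfer_ms
print(f"per-IO service time: {service_ms:.2f} ms")  # per-IO service time: 12.56 ms
print(f"max random IOPS: {1000 / service_ms:.0f}")  # max random IOPS: 80
```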
I/O Tuning – iostat -D Service times you could only get from filemon before
------------------------------------------------------------------------
Detailed Physical Volume Stats (512 byte blocks)
------------------------------------------------------------------------
Device tuning
List device attributes with # lsattr -EHl <device>
Attributes with a value of True for user_settable can be changed
Sometimes you can change these via smit
Allowable values can be determined via:
# lsattr -Rl <device> -a attribute
Disks
• Usually a parameter indicating the number of commands that can be queued at the disk
Adapters
• Usually a parameter for the number of commands to queue at the adapter device driver
Tools for LPAR & CEC Historical Performance
• Set up access to partitions not on the local subnet: add the hosts to the topas external subnet search file (Rsi.hosts)
• List hosts in the topas external subnet search file (Rsi.hosts)
• Turn on/off CEC and local recordings
What's new in AIX 5.3
© IBM Corporation 2007
AIX TL06
• Tools updated
• lparstat, mpstat, and sar
• topas and topasout reports
IBM Global Services
Dedicated idle cycles donation – lparstat
• New mode:
# lparstat 1 3
System configuration: type=Dedicated mode=Donating smt=On lcpu=2 mem=800
%user %sys %wait %idle physc vcsw
----- ---- ----- ----- ----- -------
  0.1  0.4   0.0  99.5  0.68  670234
  0.0  0.2   0.0  99.8  0.68  670234
  0.0  0.2   0.0  99.8  0.68  670234
Notes: donation causes hardware context switches (hence the high vcsw). %user/%sys/%wait/%idle stay relative to partition capacity, while physc shows actual physical processor consumption: the number of physical processors minus donated and stolen cycles.
$ lparstat -i
Node Name                       : va01
Partition Name                  : va
Partition Number                : 2
Type                            : Dedicated-SMT
Mode                            : Donating
Entitled Capacity               : 1.00
Partition Group-ID              : 32770
Shared Pool ID                  : -
Online Virtual CPUs             : 1
Maximum Virtual CPUs            : 1
Minimum Virtual CPUs            : 1
Online Memory                   : 800 MB
Maximum Memory                  : 1024 MB
Minimum Memory                  : 128 MB
Variable Capacity Weight        : -
Minimum Capacity                : 1.00
Maximum Capacity                : 1.00
Capacity Increment              : 1.00
Maximum Physical CPUs in system : 4
Active Physical CPUs in system  : 4
Active CPUs in Pool             : -
Unallocated Capacity            : -
Physical CPU Percentage         : 100.00%
Unallocated Weight              : -
Dedicated idle cycles donation – lparstat details
• New -d flag shows more details: %idon and %bdon, the percentages of idle and busy cycles donated.
• sar
• Automatically displays physc when donation is enabled
• mpstat
• Automatically displays pc and lcs if donation is enabled
• New -h option to show more details on hypervisor-related statistics
► donation enabled
System configuration: lcpu=2 mode=Donating
cpu   pc    ilcs       vlcs  idon bdon istol bstol
  0  0.3   50327  687231635  10.2  4.5  0.59  0.32
  1  0.5   61702  684989764  10.2  4.5  0.59  0.32
ALL  0.8  112029 1372221399  20.4  9.0  1.18  0.64
► shared partition
System configuration: lcpu=2 ent=0.5 mode=Uncapped
cpu   pc    ilcs       vlcs
  0  0.6  503727  687231635
  1  0.6   61702  684989764
ALL  0.8  565429 1372221399
Dedicated idle cycles donation – topas -L
Interval: 2 Logical Partition: Fri Sep 22 09:01:46 2006
Donating SMT ON Online Memory: 3200.0
Partition CPU Utilization Online Virtual CPUs: 1 Online Logical CPUs: 2
%user %sys %wait %idle %hypv hcalls %istl %bstl %idon %bdon vcsw
    1    1     0    98     1    200     0   2.1   3.5  10.0  1.0
===============================================================================
LCPU minpf majpf intr csw icsw runq lpa scalls usr sys _wt idl   pc
Cpu0     0     0  190 176   84    0 100   5089   1   2   0  97 0.52
Cpu1     0     0   14   0    0    0   0      0   0   0   0 100 0.48
(Includes the same updates as lparstat and mpstat.)
• topasout report (a new version number marks the new format)
Report: System Detailed --- hostname: ptoolsl1 version: 1.2
Start:12/21/05 10.00.00 Stop:12/21/05 11.00.00 Int: 5 Min Range: 60 Min
Time: 10.00.00 --------------------------------------------------------------
CPU UTIL MEMORY PAGING EVENTS/QUEUES NFS
Kern 12.0 PhyB 0.7 Sz,GB 16.0 Sz,GB 4.0 Cswth 3213 SrvV2 32
User 8.0 Ent 0.0 InU 4.3 InU 2.3 Syscl 43831 CltV2 12
Wait 0.0 EntC 0.0 %Comp 3.1 Flt 221 RunQ 1 SrvV3 44
Idle 78.0 bdon 0.1 %NonC 9.0 Pg-I 87 WtQ 0 CltV3 18
SMT ON idon 1.0 %Clnt 2.0 Pg-O 44 VCSW 1214
LP 4 bstl 0.5
Mode Don istl 0.0
TL07 & AIX 6.1
• iostat
• Tape device support!
► Uses standard dkstat structures, same as disk devices
► Includes support for read/write service times
► No queuing, so no queue-wait metrics
► ATAPE only at this time
• Filesystem and Workload Partition reports (AIX 6)
• svmon
• AIX 6.1 dynamic page sizes
► 4K–64K mixed segments
► Short (segment), detailed (by page), and long (by page size) reports
• Major usage improvements coming in 2008
► Report simplification!
► Better system and process reports
• Similar to current user reports that provide a breakdown by system, shared, and exclusive resources
• Better answers to "what is the footprint of all my processes?"
• Workload Partitions support
• Commands
► ps, ipcs, netstat, proc*, trace, vmstat
• Tools
► topas, tprof, filemon, netpmon, pprof, curt
TL07 & Power6
• Allows definition of virtual pools
• Subset(s) of the shared physical processor pool
• Have their own capacity limits, similar to partitions
► Entitled capacity
• Sum of the partitions' entitled capacity and the pool reserved capacity
► Maximum capacity
• Makes it possible to set uncapped partitions' maximum capacity
► Not necessarily equal to their number of virtual processors
• Can be used to lower cost with virtual-pool-aware licenses
► 4 ABC licenses versus 7
• lparstat
• -i will show pool entitlement and max capacity
• topas
• Updated CEC panel
► New pool section (p option)
• Two roll-ups
► CEC level
► By virtual pool (f option)
New Tunables
• vmo additions
► psm_timeout_interval = 20000
• Determines the timeout interval, in milliseconds, to wait for page size management daemons to make forward progress before LRU page replacement is started. This setting is only valid on the 64-bit kernel. Default: 20 seconds. Possible values: 0 through 60,000 (1 minute). When page size management is working to increase the number of page frames of a particular page size, LRU page replacement is delayed for that page size for up to this amount of time. On a heavily loaded system, increasing this tunable can give the page size management daemons more time to create more page frames before LRU runs.
• Basically, 64 KB page migrations can cause a deadlock between lrud and psmd.
► wlm_mem_limit_nonpg = 1
• Selects whether non-pageable page sizes (16M, 16G) are included in the WLM realmem and virtmem counts. If 1 is selected, non-pageable page sizes are included in the realmem and virtmem limit counts. If 0 is selected, only pageable page sizes (4K, 64K) are included in the realmem and virtmem counts. This value can only be changed when WLM Memory Accounting is off, or the change will fail.
New Tunables
• ioo JFS2 sync tunables
The file system sync operation can be problematic in situations where there is very heavy random I/O activity to a large file. When a sync occurs, all reads and writes from user programs to the file are blocked. With a large number of dirty pages in the file, the time required to complete the writes to disk can be large. New JFS2 tunables are provided to relieve that situation.
► j2_syncPageCount
Limits the number of modified pages that are scheduled to be written by sync in one pass for a file. When this tunable is set, the file system will write the specified number of pages without blocking I/O to the rest of the file. The sync call will iterate on the write operation until all modified pages have been written.
Default: 0 (off), Range: 0-65536, Type: Dynamic, Unit: 4KB pages
► j2_syncPageLimit
Overrides j2_syncPageCount when a threshold is reached. This is to guarantee that sync will eventually complete for a given file. Not applied if j2_syncPageCount is off.
Default: 16, Range: 1-65536, Type: Dynamic, Unit: Numeric
• If application response times are impacted by syncd, try j2_syncPageCount settings from 256 to 1024. Smaller values improve short-term response times, but still result in larger syncs that impact response times over larger intervals.
• These will likely require a lot of experimentation and detailed analysis of IO.
• Does not apply to mmap() or shmat() memory files.
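A simplified model of how the tunable changes sync behavior (a sketch only; the real JFS2 logic also applies the j2_syncPageLimit threshold described above):

```python
import math

# With j2_syncPageCount off, sync makes one blocking pass over all dirty
# pages of the file. With it set, sync writes the pages in chunks,
# unblocking I/O to the rest of the file between passes.
def sync_passes(dirty_pages, j2_sync_page_count):
    if j2_sync_page_count == 0:   # tunable off: one long blocking pass
        return 1
    return math.ceil(dirty_pages / j2_sync_page_count)

print(sync_passes(100_000, 0))    # 1 pass, file blocked for its duration
print(sync_passes(100_000, 512))  # 196 shorter passes
```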
Thanks