
Oracle RAC 11gR2

RAC Joins the Cloud


RAC Grows Up
 RAC 11gR2 (aka 12gR1) adds a number of features that
transform RAC clusters into database clouds
 Many features are geared towards enterprises but there are
new features that any organization can take advantage of
 With today’s high-powered x86 servers and 11gR2 RAC it is
entirely possible to replace many silo clusters with fewer
11gR2 “clouds”
 Many 11gR1 and older RAC commands are deprecated
11gR2 RAC New Features
 Brand New Installer
 Clusterware is now Grid Infrastructure
 ASM is now part of the Grid Infrastructure installation
 Voting and OCR disks can be stored in an ASM diskgroup
 Raw devices are only supported for upgraded clusters
 Future Linux releases may drop support for raw devices
 Plug and Play For Clusters
 Cluster is now a DNS service
 DHCP for server VIPs
 New tools are introduced; old tools are retired
RAC 11gR2 Install
Non Linear Installer
Grid Plug and Play
SSH Keys
OCR/Voting Disks
 OCR/Voting disks can be ASM Diskgroup
 Raw devices are only supported for upgraded clusters
 Other options are NFS and certified clustered filesystems
 3 disks for normal redundancy, 5 disks for high redundancy
IPMI
IPMI (cont)
 Intelligent Platform Management Interface
 Defines a cross vendor standard to monitor and manage
hardware
 Dell, HP, IBM, Intel, AMD and dozens more
 IPMI in relation to RAC allows other nodes to manage the
server regardless of the OS state
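 For example, the generic ipmitool utility can query or power-cycle a node through its BMC even when the OS is down (the address and credentials below are placeholders, not values from this deck):
   ipmitool -I lanplus -H 10.90.51.50 -U admin -P secret chassis power status
   ipmitool -I lanplus -H 10.90.51.50 -U admin -P secret chassis power cycle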
ASM Roles
Fix Common Problems
 Installer can create a script that can fix some problems
 Won’t install missing packages
Finish Install
 After the GUI finishes it will ask to run the orainstRoot.sh
and root.sh scripts as root
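 For example, with typical default locations (your inventory and Grid home paths may differ), run as root on each node in turn:
   # /u01/app/oraInventory/orainstRoot.sh
   # /u01/app/11.2.0/grid/root.sh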
Cluster Startup Sequence
 Init starts the Oracle High Availability Services Daemon
(OHASD)
 OHASD starts
 cssdagent – starts cssd
 orarootagent – manages all root owned ohasd processes
 oraagent – manages all oracle owned ohasd resources
 cssdmonitor – monitors cssd and node health
 OHASD rootagent starts
 crsd – main daemon for managing cluster resources
 ctssd – time synchronization
 Diskmon
 ACFS – ASM cluster file system drivers
Cluster Startup Sequence
 Oraagent starts – level 2
 mdnsd – used for DNS lookups
 gipcd – inter-process and inter-node communication
 gpnpd – Grid Plug & Play daemon
 EVMD – event monitor
 ASM – resource for monitoring ASM instance
 CRSD – level 3
 orarootagent
 oraagent
 CRSD rootagent – level 4
 SCAN VIP(s) - Single Client Access Name
 Node VIPs – one per node
 ACFS Registry – For mounting ASM Cluster File System
 GNS VIP - optional
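 A quick way to see these resources and their current state (typical crsctl usage):
   crsctl stat res -t -init   (ohasd-managed lower-stack resources)
   crsctl stat res -t         (crsd-managed cluster resources: SCAN VIPs, node VIPs, listeners, databases)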
SCAN
 Single Client Access Name
 A single DNS entry that represents the cluster as a whole
 Clients no longer need to know the actual servers that are in a
cluster
 Thin (JDBC) connections or connections not using a VIP are better
protected against a node failure
 If using GNS, the SCAN must be in the domain managed by GNS
 DNS needs to be able to delegate resolution to GNS
 If not using GNS, DNS should be configured to round-robin up
to 3 IPs for the SCAN name. Using the 11.2 client is recommended
 If using GNS 3 IPs will be requested from DHCP
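 For the non-GNS case, a DNS zone fragment of roughly this shape round-robins the SCAN across three addresses (name and addresses are illustrative):
   prod-scan.example.com.  IN A 10.90.51.31
   prod-scan.example.com.  IN A 10.90.51.32
   prod-scan.example.com.  IN A 10.90.51.33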
SCAN Listener
 Listener that is dependent on a SCAN VIP
 If the SCAN VIP moves to a different node in the cluster the
listener will move with the VIP
 Depending on the number of nodes in a cluster a single node
may have more than one SCAN VIP and listener
 remote_listener parameter should be set to the SCAN address
(scan_name:port) for 11gR2 databases
 Simplifies RAC Data Guard connections for clients
 Two new connect string parameters
 CONNECT_TIMEOUT = timeout in seconds
 RETRY_COUNT = number of tries
 In a single TNS entry list both the primary and standby SCAN addresses.
If the connection attempt lands on the standby it will fail over to the primary (see the sketch below).
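 A sketch of such an entry, assuming SCANs prod-scan and dr-scan and a service named sales (all names illustrative):
   SALES =
     (DESCRIPTION =
       (CONNECT_TIMEOUT = 10)(RETRY_COUNT = 3)
       (ADDRESS_LIST =
         (FAILOVER = ON)
         (ADDRESS = (PROTOCOL = TCP)(HOST = prod-scan.example.com)(PORT = 1521))
         (ADDRESS = (PROTOCOL = TCP)(HOST = dr-scan.example.com)(PORT = 1521)))
       (CONNECT_DATA = (SERVICE_NAME = sales)))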
SCAN Load Balancing
 Client receives IP from DNS or GNS that resolves to
SCAN VIP
 SCAN Listener bound to SCAN VIP redirects client to
local listener that has the least load
 Client connects to the local listener on the node VIP
 Client needs to be able to connect to SCAN VIP and to the
local node VIP and be able to resolve all names
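 From the client side this is transparent; an EZConnect string, for example, only references the SCAN (names illustrative):
   sqlplus app_user@//prod-scan.example.com:1521/sales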
SCAN
 SCAN Listeners Overview
Grid - Plug And Play
 Greatly simplifies adding/removing nodes and connecting to
a cluster
 Basically run cluster verify script and then addNode.sh
 Configures cluster listener and ASM
 No longer use static IPs for VIPs
GNS
 Grid Naming Service
 Handles name resolution for the cluster
 Requires DHCP for VIP addresses!
 DNS needs to delegate resolution for the cluster to GNS
 Delegate to the GNS VIP, not a server or SCAN VIP
 Bind Example
 prodcluster.db.oracle.com. NS gns.prodcluster.db.oracle.com.
 gns.prodcluster.db.oracle.com. A 10.90.51.20
 When a client connects the cluster will provide the IP address
 As RAC servers are added DHCP assigns a VIP address, GNS is
updated with the new server’s VIP address and can now route connections
to the new server.
 Client connection strings never need to change
 Removing a node basically reverses the process
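 If GNS was not configured at install time it can be added later with srvctl; a minimal sketch using the GNS VIP and subdomain from the bind example above:
   srvctl add gns -d prodcluster.db.oracle.com -i 10.90.51.20
   srvctl start gns
   srvctl status gns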
Client Connections With GNS
ASM Changes
 ASM part of grid infrastructure – one less HOME to
manage
 New GUI management tool – asmca
 Replaces DBCA for ASM management
 Create diskgroups, volumes, ACFS mounts
 Enhanced controls to manage ASM attributes
 SPFILE is stored in ASM diskgroup
 Cluster reads the diskgroup before ASM starts
 ASMCMD is greatly enhanced and mostly replaces sqlplus
 ASM roles now play a large part in managing ASM
 Diskgroups can be cluster resources
ASM Changes (cont)
 New O/S aware cluster file system (ACFS)
 ASM disks can be used to create filesystem mounted by O/S
 Only supported on Redhat/Oracle Linux, other O/S support coming in
the future
 If a file type is natively supported in ASM then it is not supported on
ACFS!
 No data, control or redo files
 DirectIO is not available
 Take advantage of ASM performance over other filesystems
 ACFS is able to do snapshots
 Still able to access ASM diskgroups with FTP/HTTP via
XMLDB
 The asmcmd cp command can copy files into and out of ASM (since 11.1.0.7)
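 Creating an ACFS file system is a short sequence; a minimal sketch (diskgroup, volume, size and mount point are illustrative, and the /dev/asm device name gets a generated suffix that volinfo reports):
   asmcmd volcreate -G data -s 10G appvol
   asmcmd volinfo -G data appvol
   mkfs -t acfs /dev/asm/appvol-123
   mount -t acfs /dev/asm/appvol-123 /u02/app_files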
ASM Changes (cont)
 Intelligent placement of data
 ASM is able to put hot datafiles on the outer edge of the disk
 Can provide a 50% increase in I/O performance
 Compatible parameters for both ASM & RDBMS must be 11.2
 Works best when the geometry of the disks is known
 JBOD – uses disk sectors for data placement
 LUN – logical unit number; sector numbering may not have any relation to
the physical layout of the LUN on the disks
 Diskgroup should be 50% full for full benefit
 alter diskgroup data add template datafile_hot attributes (hot
mirrorhot);
 asmcmd lstmpl -l -G data
ASM – Access Control Lists
 Set permissions at the ASM file level
 Set permissions for users and groups
 User is the oracle database software owner
 Only available for Linux and Unix
 Not so much for security as for separation of duties
 Files created in ASM are owned by the database user that created them
 Create a separate OSDBA group for each database using a
separate ORACLE_HOME. Need different groups for
OSASM and OSDBA for ASM
 Diskgroup compatible attributes for both ASM and RDBMS must be 11.2
ASM – Access Control Lists (cont)
 Diskgroup attributes
 ACCESS_CONTROL.ENABLED = TRUE
 ACCESS_CONTROL.UMASK has to be set
 Umask values
 6 removes all
 2 removes write
 0 removes nothing
 Umask of 026 results in permissions of 640
 read-write for owner, read for group and nothing for everyone else
 asmcmd> setattr -G data access_control.enabled true
 sql> alter diskgroup data set attribute 'access_control.enabled' = 'true';
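 Setting the umask described above works the same way from either tool (diskgroup name illustrative):
   asmcmd> setattr -G data access_control.umask 026
   sql> alter diskgroup data set attribute 'access_control.umask' = '026';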
ASM Permissions
 ALTER DISKGROUP ADD USERGROUP … WITH MEMBER
 ALTER DISKGROUP DROP USERGROUP
 ALTER DISKGROUP MODIFY USERGROUP ADD MEMBER
 ALTER DISKGROUP MODIFY USERGROUP DROP MEMBER
 ALTER DISKGROUP ADD USER
 ALTER DISKGROUP DROP USER
 ALTER DISKGROUP SET PERMISSION
 ALTER DISKGROUP SET OWNERSHIP
 SELECT * FROM V$ASM_USER
 SELECT * FROM V$ASM_USERGROUP
 ALTER DISKGROUP data ADD USERGROUP 'myfiles' WITH
MEMBER 'bill';
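 Having added 'bill' and 'myfiles' as above, ownership and permissions on a file can then be set; a sketch (the file name is illustrative and the file must already exist in the diskgroup):
   ALTER DISKGROUP data SET OWNERSHIP OWNER = 'bill', GROUP = 'myfiles'
     FOR FILE '+DATA/orcl/datafile/users.259.712345678';
   ALTER DISKGROUP data SET PERMISSION OWNER = read write, GROUP = read only, OTHER = none
     FOR FILE '+DATA/orcl/datafile/users.259.712345678';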
ASM Roles
 OS groups control rights in ASM
 OSDBA – can access ASM files and set ACLs
 ASMOPER – can start/stop ASM
 OSASM – full ASM control
 Allow people to administer ASM without having sysdba
rights to the databases
ASM Commandline Tool
ASM Commandline Tool (cont)
 Create diskgroups from asmcmd
 Same syntax as sqlplus
 Create diskgroup mydata external redundancy
 DISK 'ORCL:DISK1' NAME mydata1, 'ORCL:DISK2' NAME
mydata2;
 Create diskgroup from XML via asmcmd
 asmcmd> mkdg <config.xml> (chkdg checks an existing diskgroup)
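 The XML configuration is roughly of this shape (diskgroup, failgroup and disk names are illustrative):
   <dg name="mydata" redundancy="normal">
     <fg name="fg1"><dsk string="ORCL:DISK1"/></fg>
     <fg name="fg2"><dsk string="ORCL:DISK2"/></fg>
     <a name="compatible.asm" value="11.2"/>
   </dg>
   asmcmd> mkdg mydata_config.xml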
ASM Commandline Tool (cont)
 lsdg – list diskgroup information
 Very similar to select * from v$asm_diskgroup
 lsdsk – list disk information
 Same information found in select * from v$asm_disk
Cluster Registry & Voting Disks
 Raw devices only supported for upgraded clusters
 Can move OCR and Voting disks to ASM diskgroup after
upgrade
 ocrconfig -add +<diskgroup name>
 ocrcheck
 ocrconfig -delete <raw device>
 crsctl query css votedisk
 crsctl replace votedisk +<diskgroup name> – replaces the raw
device voting disks in one step
Cluster Management
 crsctl in the grid infrastructure home manages almost
every cluster command
 crs_* scripts are deprecated
Deprecated Commands
 crs_stat
 crs_register
 crs_unregister
 crs_start
 crs_stop
 crs_getperm
 crs_profile
 crs_relocate
 crs_setperm
 crsctl check crsd
 crsctl check cssd
 crsctl check evmd
 crsctl debug log
 crsctl set css votedisk
 crsctl start resources
 crsctl stop resources
Server Pools
 Create “mini” clusters inside of larger clusters for policy
based databases
 Two default pools Free/Generic
 Free pool contains servers not assigned to another pool
 Generic pool is for running pre-11.2 databases and non-policy
managed databases
 Specify min/max nodes and priority
 Cluster manages the members of the pool based on load, availability,
priority and min/max requirements
 Cluster will move servers from free and lower priority pools to meet
the needs of the server pool
 Can create ACL on server pools for role based management
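 A sketch of creating a pool and placing a policy-managed database in it (pool, database and path names are illustrative):
   srvctl add srvpool -g online_pool -l 2 -u 4 -i 10
   srvctl add database -d sales -o /u01/app/oracle/product/11.2.0/db_1 -g online_pool
   srvctl config srvpool -g online_pool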
Vote/OCR Disk Management
 Raw is supported for upgraded clusters but it is possible to
migrate to ASM disks after 11.2 cluster upgrade
 Move vote disks to ASM
 crsctl query css votedisk
 crsctl replace votedisk +asm_disk_group
 Move OCR to ASM
 ocrconfig -add +new_disk_group
 ocrconfig -delete old_storage_location
Troubleshooting Cluster Problems
 Golden Rule – Ensure time is synced across cluster
 Help with comparing log files
 Ensures nodes aren’t evicted due to time skew
 If NTP is not used Oracle will use the Cluster Time Synchronization
Service (CTSS)
 Recommended to have the cluster down when changing the time
 Logs are stored under GRID_HOME/log/hostname
 Main alert log
 Separate directory for each cluster process
 Diagnostic Script
 GRID_HOME/bin/diagcollection.pl
 Collects cluster and system logs for debugging
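 For example, on a node named rac1 (Grid home path assumed):
   $GRID_HOME/log/rac1/alertrac1.log          – cluster alert log
   $GRID_HOME/log/rac1/crsd, cssd, evmd, ...  – per-process log directories
   # $GRID_HOME/bin/diagcollection.pl --collect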
Troubleshooting (cont)
 Modify time before node is evicted
 Default is 30 seconds
 crsctl set css miscount 45
 On busy systems logs may not flush before node reboots
 crsctl set css diagwait 13 -force
 Not set by default
 Requires cluster outage to set
Adding A Node
 Much easier in 11gR2
 Setup ssh keys for new host
 Run cluster verify tool
 cluvfy stage -pre crsinst -n <new host> [-fixup]
 Fixes any problems
 cluvfy stage -pre nodeadd -n <new host>
 $ORACLE_HOME/oui/bin/addNode.sh -silent
"CLUSTER_NEW_NODES={<new host>}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={<new host vip>}"
Instance Caging
 Puts a cap on the number of CPUs an instance will use
 Set the cpu_count parameter
 Enable resource manager with CPU policies
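 A minimal sketch on one instance (instance name and CPU count are illustrative; any CPU-managing resource plan works, DEFAULT_PLAN is just one option):
   sql> alter system set resource_manager_plan = 'DEFAULT_PLAN' sid = 'sales1';
   sql> alter system set cpu_count = 4 sid = 'sales1';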
Oracle Joins The Cloud
 With 11gR2 it is now possible to consolidate onto a few larger
clusters while maintaining performance and security and
addressing customer concerns
 Increased utilization rates
 Lower administration costs
 Happy Management
