
Copyright 2011, Oracle and/or its affiliates. All rights reserved.

Understanding Oracle RAC Internals Part 2 for the Oracle RAC SIG
Markus Michalewicz (Markus.Michalewicz@oracle.com) Senior Principal Product Manager Oracle RAC and Oracle RAC One Node

Safe Harbor Statement

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


Agenda
Client Connectivity
Node Membership
The Interconnect


Client Connectivity
Direct or indirect connect
  - Connect Time Load Balancing (CTLB)
  - Connect Time Connection Failover (CTCF)
  - Runtime Connection Load Balancing (RTLB)
  - Runtime Connection Failover (RTCF)

[Diagram: BATCH, Production, and Email clients connect through the SCAN, directly or via a connection pool]

Client Connectivity
Connect Time Connection Failover
jdbc:oracle:thin:@MySCAN:1521/Email

PMRAC =
  (DESCRIPTION =
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = MySCAN)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Email)))
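For illustration only - a client that cannot use the SCAN can still get connect-time failover by listing several addresses explicitly. The host names below are placeholders, not part of the original example:

PMRAC_NOSCAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Email)))

With FAILOVER = ON the client simply tries the next address in the list if the first connection attempt fails.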


Client Connectivity
Runtime Connection Failover

PMRAC =
  (DESCRIPTION =
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = MySCAN)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Email)
      ...))

Client Connectivity
Runtime Connection Failover

PMRAC =
  (DESCRIPTION =
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = MySCAN)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Email)
      (FAILOVER_MODE =
        (TYPE = select)(METHOD = basic)(RETRIES = 180)(DELAY = 5))))
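A quick way to verify that TAF is in effect for such connections is to query the standard session views; a minimal sketch, assuming an application user named EMAIL_APP (the user name is an example only):

SELECT inst_id, username, failover_type, failover_method, failed_over
  FROM gv$session
 WHERE username = 'EMAIL_APP';

FAILED_OVER switches from NO to YES for sessions that survived an instance failure via TAF.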


Client Connectivity
More information
If problems occur, see:
  Note 975457.1 - How to Troubleshoot Connectivity Issues with 11gR2 SCAN Name
For more advanced configurations, see:
  Note 1306927.1 - Using the TNS_ADMIN variable and changing the default port number of all Listeners in an 11.2 RAC for an 11.2, 11.1, and 10.2 Database

Client Connectivity
Two ways to protect the client
1. Transparent Application Failover (TAF)
   - Tries to make the client unaware of a failure
   - Provides means of CTCF and RTCF
   - Allows pure selects (reads) to continue
   - Write transactions need to be re-issued
   - The application needs to be TAF aware

2. Fast Application Notification (FAN)
   - FAN wants to inform clients as soon as possible
   - Clients can react to a failure as soon as possible
   - Expects clients to re-connect on failure (FCF)
   - Sends messages about changes in the cluster


Client Connectivity and Service Definition

Define settings on the server:
  - HA (and LB) settings can be defined per service
  - Clients connecting to the service will adhere to these settings, depending on the client used

[GRID]> srvctl config service -d ORCL -s MyService
Service name: MyService
...
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
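The settings shown above can be changed with srvctl; a hedged sketch using the 11.2 command-line options (the service name and the chosen values are examples only):

[GRID]> srvctl modify service -d ORCL -s MyService -e SELECT -m BASIC -z 180 -w 5 -j LONG -B SERVICE_TIME -q TRUE
[GRID]> srvctl config service -d ORCL -s MyService

Here -e/-m set the TAF failover type and method, -z/-w the retries and delay, -j the connection load balancing goal, -B the runtime load balancing goal, and -q the AQ HA notifications.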


Client Connectivity
Use a FAN aware connection pool (1)

If a connection pool is used:
  - The clients (users) get a physical connection to the connection pool
  - The connection pool creates the physical connections to the database; the pool is thus the direct client to the database
  - Internally, the pool maintains logical connections

Client Connectivity
Use a FAN aware connection pool (2)

On a failure, the connection pool:
  - Invalidates the connections to the affected instance
  - Re-establishes new logical connections
  - May create new physical connections
  - Prevents new clients from being misrouted

The application still needs to handle the transaction failure that might have occurred.

Client Connectivity
The Load Balancing (LB) cases
  - Connect Time Load Balancing (CTLB)
  - Runtime Connection Load Balancing (RTLB)
  Each can take place on the client side and on the server side.

Client Connectivity
Connect Time Load Balancing (CTLB) on the client side
PMRAC =
  (DESCRIPTION =
    (FAILOVER = ON)(LOAD_BALANCE = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = MySCAN)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Email)))

Client Connectivity
Connect Time Load Balancing (CTLB) on the server side
Traditionally, PMON dynamically registers the services with the specified listeners, providing:
  - The service names for each running instance of the database, and the instance names for the database
  - The load information for every instance and node, updated as follows:
    - 1-minute OS node load average (every 30 secs.)
    - Number of connections to each instance
    - Number of connections to each dispatcher
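For context, PMON registers with the listeners named by the local_listener and remote_listener parameters; in an 11.2 RAC the remote_listener typically points at the SCAN. A sketch (the SCAN name and port are examples):

SQL> ALTER SYSTEM SET remote_listener = 'MySCAN:1521' SCOPE=BOTH SID='*';
SQL> SHOW PARAMETER listener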



Client Connectivity
Use FAN for the Load Balancing cases
  - Connect Time Load Balancing (CTLB)
  - Connect Time Connection Failover (CTCF)
  - Runtime Connection Load Balancing (RTLB)
  - Runtime Connection Failover (RTCF)

[Diagram: a RAC database in which Instance1 ("I'm busy") receives 30% of the connections, Instance2 ("I'm very busy") 10%, and Instance3 ("I'm idle") 60%]

Client Connectivity
Use FAN for the Load Balancing cases
  - Connect Time Load Balancing (CTLB)
  - Runtime Connection Load Balancing (RTLB)
  - Also delivered via AQ (Advanced Queuing) based notifications
  - The basis is always the Load Balancing Advisory

For more information, see: Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2, Chapter 5 "Introduction to Automatic Workload Management"
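As a small illustration of where these goals surface in the data dictionary (standard view; the service name is an example):

SELECT name, goal, clb_goal
  FROM dba_services
 WHERE name = 'Email';

GOAL reflects the Load Balancing Advisory goal of the service and CLB_GOAL its connection load balancing goal.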

Node Membership


The Oracle RAC Architecture


Oracle Grid Infrastructure 11g Release 2 process overview

[Diagram: per node - ASM Instance, Oracle Grid Infrastructure (HA Framework, Node Membership), OS]

My Oracle Support (MOS):
  Note 1053147.1 - 11gR2 Clusterware and Grid Home - What You Need to Know
  Note 1050908.1 - How to Troubleshoot Grid Infrastructure Startup Issues
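One way to see these Grid Infrastructure components as Clusterware-managed resources on a node (standard 11.2 crsctl commands; output omitted):

[GRID]> crsctl check crs          # checks the local CRS, CSS and EVM daemons
[GRID]> crsctl stat res -t -init  # lists the node-local resources, e.g. ora.cssd, ora.cssdmonitor, ora.asm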

Oracle Clusterware Architecture


Node Membership - Processes and Basics

Main processes involved:
  - CSSD (ora.cssd)
  - CSSDMONITOR (was: oprocd, now: ora.cssdmonitor)

[Diagram: two cluster nodes, each running CSSD, connected via the public LAN, the private LAN / interconnect, and the SAN network to the Voting Disk]

Oracle Clusterware Architecture


What does CSSD do?
Monitors the nodes using 2 communication channels:
  - Private interconnect: the network heartbeat
  - Voting Disk based communication: the disk heartbeat

Evicts nodes (forcibly removes them from the cluster) depending on heartbeat feedback (failures).

[Diagram: CSSD on each node pings its peer over the interconnect and pings the Voting Disk]

Oracle Clusterware Architecture


Interconnect basics - the network heartbeat
  - Each node in the cluster is pinged every second
  - Nodes must respond within the css_misscount time (defaults to 30 secs.; see the crsctl commands below)
  - Reducing the css_misscount time is generally not supported
  - Network heartbeat failures will lead to node evictions

CSSD log:
[date / time] [CSSD][1111902528] clssnmPollingThread: node mynodename (5) at 75% heartbeat fatal, removal in 6.770 seconds
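The configured timeouts can be read with crsctl (standard 11.2 syntax):

[GRID]> crsctl get css misscount    # network heartbeat timeout in seconds (default 30)
[GRID]> crsctl get css disktimeout  # disk heartbeat timeout in seconds, used for the Voting Disks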


Oracle Clusterware Architecture


Voting Disk basics (1) - the disk heartbeat
  - Each node in the cluster pings (reads/writes) the Voting Disk(s) every second
  - Nodes must receive a response within the (long / short) diskTimeout time
  - If I/O errors indicate clear accessibility problems, the timeout is irrelevant
  - Disk heartbeat failures will lead to node evictions

CSSD log:
[CSSD] [1115699552] >TRACE: clssnmReadDskHeartbeat: node(2) is down. rcfg(1) wrtcnt(1) LATS(63436584) Disk lastSeqNo(1)


Oracle Clusterware Architecture


Voting Disk basics (2) - structure

Voting Disks contain dynamic and static data:
  - Dynamic data: disk heartbeat logging
  - Static data: information about the nodes in the cluster

With 11.2.0.1 Voting Disks got an identity, e.g. a Voting Disk serial number:

[GRID]> crsctl query css votedisk
 1.  2  1212f9d6e85c4ff7bf80cc9e3f533cc1 (/dev/sdd5) [DATA]

Voting Disks must therefore not be copied using dd or cp anymore.

Oracle Clusterware Architecture


Voting Disk basics (3) - the Simple Majority rule
  - Oracle supports redundant Voting Disks for disk failure protection
  - The Simple Majority rule applies: each node must see the simple majority of the configured Voting Disks at all times in order not to be evicted (i.e. to remain in the cluster)
  - That is trunc(n/2 + 1) Voting Disks, with n = number of configured Voting Disks and n >= 1; for example, with 3 Voting Disks a node must be able to access at least 2 of them

Oracle Clusterware Architecture


The Simple Majority rule in extended clusters
  - The same principles apply; the Voting Disks are just geographically dispersed
  - See http://www.oracle.com/goto/rac - "Using standard NFS to support a third voting file for extended cluster configurations" (PDF)

Oracle Clusterware Architecture


Storing Voting Disks in Oracle ASM does not change their usage

[GRID]> crsctl query css votedisk
 1.  2  1212f9d6e85c4ff7bf80cc9e3f533cc1 (/dev/sdd5) [DATA]
 2.  2  aafab95f9ef84f03bf6e26adc2a3b0e8 (/dev/sde5) [DATA]
 3.  2  28dd4128f4a74f73bf8653dabd88c737 (/dev/sdd6) [DATA]
Located 3 voting disk(s).

Oracle ASM auto-creates 1/3/5 Voting Files:
  - Voting Disks reside in one disk group only
  - Based on External/Normal/High redundancy and on the Failure Groups in the Disk Group
  - Per default there is one failure group per disk
  - ASM will enforce the required number of disks
  - New failure group type: Quorum Failgroup (see the sketch below)
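For illustration only - a normal-redundancy disk group with a quorum failure group for a third voting file might be created like this (disk paths, names, and the NFS location are placeholders):

SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/sdd5'
       FAILGROUP fg2 DISK '/dev/sde5'
       QUORUM FAILGROUP fg3 DISK '/voting_nfs/vote_3'
       ATTRIBUTE 'compatible.asm' = '11.2';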

Oracle Clusterware Architecture


Oracle Cluster Registry (OCR) placement in Oracle ASM

The OCR is managed like a datafile in ASM (a new file type):
  - It adheres completely to the redundancy settings of the disk group (DG)
  - There can be more than one OCR location in more than one DG (DG:OCR = 1:1)
  - The recommendation is 2 OCR locations, e.g. 1 in DATA and 1 in FRA
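A hedged sketch of how a second OCR location could be added and checked (the +FRA disk group is an example; ocrconfig must be run as root):

[GRID]> ocrcheck            # shows the configured OCR locations and their integrity
[GRID]> ocrconfig -add +FRA # adds an additional OCR location in the FRA disk group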

Oracle Clusterware Architecture


Backup of Clusterware files is fully automatic (11.2+)

Managing the Clusterware files in ASM enables fully automatic backups:
  - The Voting Disks are backed up into the OCR
  - Any configuration change in the cluster (e.g. a node addition) triggers a new backup of the Voting Files
  - A single failed Voting Disk is restored automatically by the Clusterware within a Disk Group, if sufficient disks are used - no action required
  - Note: do not use dd to back up the Voting Disks anymore!
  - The OCR is backed up automatically every 4 hours
  - Manual backups can be taken as required (see the ocrconfig commands below)
  - ONLY IF all Voting Disks are corrupted or failed AND (all copies of) the OCR are also corrupted or unavailable, THEN manual interference would be required - the rest is automatic
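The automatic backups and an on-demand backup can be handled with ocrconfig (standard 11.2 syntax):

[GRID]> ocrconfig -showbackup    # lists the automatic (and manual) OCR backups
[GRID]> ocrconfig -manualbackup  # takes a manual OCR backup; run as root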

Fencing Basics
Why are nodes evicted?
Evicting (fencing) nodes is a preventive measure (it's a good thing)!

Nodes are evicted to prevent the consequences of a split brain:
  - Shared data must not be written by independently operating nodes
  - The easiest way to prevent this is to forcibly remove a node from the cluster

Fencing Basics
How are nodes evicted? STONITH-like

Once it is determined that a node needs to be evicted:
  - A kill request is sent to the respective node(s)
  - Using all (remaining) communication channels
  - Classic STONITH foresees that a remote node kills the node to be evicted
  - Here, instead, a node (CSSD) is requested to kill itself - hence "STONITH-like"

Fencing Basics
EXAMPLE: Network heartbeat failure

The network heartbeat between the nodes has failed:
  - It is determined which nodes can still talk to each other
  - A kill request is sent to the node(s) to be evicted
  - Using all (remaining) communication channels - here the Voting Disk(s)
  - A node is requested to kill itself; the executor is typically CSSD

Fencing Basics
What happens if CSSD is stuck?

A node is requested to kill itself, BUT CSSD is stuck or sick (does not execute), e.g.:
  - CSSD failed for some reason
  - CSSD is not scheduled within a certain margin

In that case CSSDMONITOR (was: oprocd) will take over and execute the kill.

See also: MOS Note 1050693.1 - Troubleshooting 11.2 Clusterware Node Evictions (Reboots)

Fencing Basics
How can nodes be evicted?
  - Oracle Clusterware 11.2.0.1 and later supports IPMI (optional)
  - Intelligent Platform Management Interface (IPMI) drivers are required
  - IPMI allows a remote shutdown of nodes using additional hardware
  - A Baseboard Management Controller (BMC) per cluster node is required
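If IPMI is configured, the BMC address and credentials are stored per node with crsctl; a sketch with placeholder values:

[GRID]> crsctl set css ipmiadmin bmcadmin       # BMC administrator user; prompts for the IPMI password
[GRID]> crsctl set css ipmiaddr 192.168.10.45   # IP address of this node's BMC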

Fencing Basics
EXAMPLE: IPMI based eviction on heartbeat failure
  - The network heartbeat between the nodes has failed
  - It is determined which nodes can still talk to each other
  - IPMI is used to remotely shut down the node to be evicted

Fencing Basics
Which node gets evicted?
  - Voting Disks and heartbeat communication are used to determine the node
  - In a 2-node cluster, the node with the lowest node number should survive
  - In an n-node cluster, the biggest sub-cluster should survive (based on votes)
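The node numbers referred to here can be listed with olsnodes:

[GRID]> olsnodes -n   # prints every cluster node together with its node number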

Fencing Basics
Cluster members can escalate a kill request
  - Cluster members (e.g. Oracle RAC instances) can request Oracle Clusterware to kill a specific member of the cluster
  - Oracle Clusterware will then attempt to kill the requested member

[Diagram: Oracle RAC DB Inst. 1 asks Oracle Clusterware: "kill inst. 2"]

Fencing Basics
Cluster members can escalate a kill request
  - Oracle Clusterware will then attempt to kill the requested member
  - If the requested member kill is unsuccessful, a node eviction escalation can be issued, which leads to the eviction of the node on which the particular member currently resides



Re-Bootless Node Fencing


From 11.2.0.2 onwards, fencing may not mean re-boot
  - Until Oracle Clusterware 11.2.0.2, fencing meant a re-boot
  - With Oracle Clusterware 11.2.0.2, re-boots will be seen less, because:
    - Re-boots affect applications that might run on a node but are not protected by the cluster
    - Customer requirement: prevent a reboot, just stop the cluster - now implemented

[Diagram: two nodes, each running Oracle Clusterware (CSSD), a RAC DB instance, and unprotected applications App X and App Y]

Re-Bootless Node Fencing

How it works
With Oracle Clusterware 11.2.0.2, re-boots will be seen less: instead of fast re-booting the node, a graceful shutdown of the stack is attempted:
  1. It starts with a failure, e.g. a network heartbeat or interconnect failure
  2. Then I/O issuing processes are killed; it is made sure that no I/O process remains
     (for a RAC DB, mainly the log writer and the database writer are of concern)
  3. Once all I/O issuing processes are killed, the remaining processes are stopped
     (if the check for a successful kill of the I/O processes fails -> reboot)
  4. Once all remaining processes are stopped, the stack stops itself with a restart flag
  5. OHASD will finally attempt to restart the stack after the graceful shutdown

Re-Bootless Node Fencing


EXCEPTIONS
With Oracle Clusterware 11.2.0.2, re-boots will be seen less, unless:
  - The check for a successful kill of the I/O processes fails -> reboot
  - CSSD gets killed during the operation -> reboot
  - cssdmonitor (the oprocd replacement) is not scheduled -> reboot
  - The stack cannot be shut down in short_disk_timeout seconds -> reboot

The Interconnect


The Interconnect
Heartbeat and memory channel between instances
[Diagram: clients connect over the public LAN to Node 1 .. Node N; the nodes communicate over the interconnect (with its own switch) and access shared storage via the SAN switch]

The Interconnect
Redundant Interconnect Usage (1)
  - Redundant Interconnect Usage can be used as a bonding alternative
  - It works for private networks only; the node VIPs use a different approach
  - It enables HA and load balancing for up to 4 NICs per server (on Linux / Unix)
  - It can be used by Oracle Databases 11.2.0.2 and Oracle Clusterware 11.2.0.2
  - It uses so-called HAIPs that are assigned to the private networks on the server (see the oifcfg sketch below)
  - The HAIPs will be used by the database and ASM instances and processes

[Diagram: Node 1 and Node 2, each with two private NICs carrying HAIP1/HAIP2 and HAIP3/HAIP4]
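A sketch of how the private interfaces behind Redundant Interconnect Usage are registered with the Clusterware (interface name and subnet are placeholders; oifcfg is the standard tool):

[GRID]> oifcfg getif                                                  # lists the configured public / cluster_interconnect interfaces
[GRID]> oifcfg setif -global eth2/192.168.10.0:cluster_interconnect  # registers an additional private interface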

The Interconnect
Redundant Interconnect Usage
Redundant Interconnect Usage (2) - a multiple listening endpoint approach is used
  - The HAIPs are taken from the link-local (Linux / Unix) IP range (169.254.0.0)
  - To find the communication partners, multicasting on the interconnect is required
  - With 11.2.0.3, broadcast is a fallback alternative (BUG 10411721)
  - Multicasting is still required on the public LAN, for mDNS for example
  - Details in My Oracle Support (MOS) Note 1212703.1 - 11.2.0.2 Grid Infrastructure Install or Upgrade may fail due to Multicasting
  (a query to inspect the HAIPs in use follows below)
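To check which interconnect addresses (HAIPs) the database and ASM instances actually use, the standard interconnect view can be queried (a minimal sketch):

SELECT inst_id, name, ip_address, is_public, source
  FROM gv$cluster_interconnects;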

The Interconnect
Redundant Interconnect Usage and the HAIPs
  - If a network interface fails, the assigned HAIP is failed over to one of the remaining interfaces
  - Redundant Interconnect Usage allows the networks to be in different subnets: you can either have one subnet for all networks or a different one for each
  - You can also use VLANs with the interconnect

For more information, see:
  Note 1210883.1 - 11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip
  Note 220970.1 - RAC: Frequently Asked Questions - "How to use VLANs in Oracle RAC?" and "Are there any issues for the interconnect when sharing the same switch as the public network by using VLAN to separate the network?"

[Diagram: Node 1 and Node 2 after an interface failure - the HAIPs of the failed NIC have been re-assigned to the surviving NIC]
