
A QUANTITATIVE MODEL FOR INFORMATION

SECURITY RISK ASSESSMENT

BITS ZG629T: Dissertation

By

HARIHARAN M

(2007HZ12033)

Dissertation work carried out at

SAMTEL GROUP, New Delhi

BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE

PILANI (RAJASTHAN)

MARCH 2010
A QUANTITATIVE MODEL FOR INFORMATION SECURITY RISK
ASSESSMENT

BITS ZG629T: Dissertation

By

HARIHARAN M

(2007HZ12033)

Dissertation work carried out at

SAMTEL GROUP, New Delhi

Submitted in partial fulfilment of M.S. (Software Systems) degree programme

Under the Supervision of

Mr. Sudhir Mittal, Chief Information Officer,

SAMTEL Group, New Delhi

BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE

PILANI (RAJASTHAN)

MARCH 2010
CERTIFICATE

This is to certify that the Dissertation entitled A QUANTITATIVE MODEL FOR INFORMATION

SECURITY RISK ASSESSMENT and submitted by HARIHARAN M, having ID-No. 2007HZ12033 for

the partial fulfillment of the requirements of M.S. (Software Systems) degree of BITS, embodies the

bonafide work done by him under my supervision.

______________________

Signature of the Supervisor

Place : New Delhi

Date : ____________________ Sudhir K Mittal, CIO

SAMTEL Group, New Delhi

Birla Institute of Technology & Science, Pilani
Work-Integrated Learning Programmes Division
Second Semester 2009-2010
BITS ZG629T : Dissertation

ID No. : 2007HZ12033
NAME OF THE STUDENT : HARIHARAN M
EMAIL ADDRESS : mhharan@yahoo.com
STUDENT’S EMPLOYING ORGANISATION & LOCATION : SAMTEL GROUP (NEW DELHI)
SUPERVISOR’S NAME : MR. SUDHIR K MITTAL
SUPERVISOR’S EMPLOYING ORGANISATION & LOCATION : SAMTEL GROUP (NEW DELHI)
SUPERVISOR’S EMAIL ADDRESS : skmittal@samtelgroup.com
DISSERTATION TITLE : A QUANTITATIVE MODEL FOR INFORMATION SECURITY RISK ASSESSMENT

ABSTRACT

Information security is of paramount importance in today’s digital world, especially with statutory and regulatory pressure building on corporations to introduce an Enterprise Risk Assessment Framework. There has been an increased focus on performing information security risk assessment by independently handling the “Confidentiality”, “Integrity” and “Availability” aspects of information security risk. Irrespective of the type of information asset, its “Availability” is of utmost importance, and availability has therefore been taken as the theme of this thesis.

The currently available methodologies and approaches for performing Availability Risk Assessment either provide a technology-centric view of risk assessment or stray into financial valuation for quantifying risk.

This thesis departs from the conventional approach and puts forth a new model, defining a service-oriented approach to availability risk assessment on the one hand and quantifying risk in non-monetary terms on the other. In order to quantify risk, established software architecture theory is used to derive the availability percentage.

A case study is also presented to substantiate the application of the proposed model in management reporting of IT performance.

Broad Academic Area of Work : Information Systems Audit


Key words : Information Security; Risk Assessment; Quantitative Approach

____________________ ______________________

Signature of the Student Signature of the Supervisor


Name: HARIHARAN M Name: SUDHIR K MITTAL

Date: Date:
Place: Place:

Acknowledgements

I would like to thank my supervisor, Mr. Sudhir K Mittal, Chief Information Officer, SAMTEL Group, for providing me with the opportunity to undertake this dissertation work under his guidance and for extending a positive environment for research.

I would also like to thank Dr. H Sathyanarayana Sai, General Manager (IS), Manav Rachna International University, for sparing his valuable time to provide research references and for his support as the additional examiner for this dissertation work.

_____________________
Hariharan M
Divisional Manager - IT
SAMTEL Group

List of Figures

Figure 1 – ISMS Road Map
Figure 2 – High Level Use Case Diagram
Figure 3 – e-mail Infrastructure
Figure 4 – Software Deployment Architecture of CRM
Figure 5 – Five Levels of Process Maturity
Figure 6 – Architectural Maturity Level of System
Figure 7 – Maturity Level for Support Architecture
Figure 8 – Software Deployment Architecture with MTBF
Figure A-1 – Service Availability MIS – Internet Browsing
Figure A-2 – Service Availability MIS – Document Management Service
List of Tables

Table 1 – Risk Assessment Matrix
Table 2 – Inventory of Information Assets (IT Systems)
Table 3 – Risk Assessment Verdict of ERP & e-mail System
Table 4 – Service Catalogue
Table 5 – Service Capability Maturity Assessment
Table 6 – MTBF and MTTR Matrix
Table 7 – Availability Risk Assessment Sheet
Table A-1 – Service-wise IT Performance Sheet
Table of Contents

CERTIFICATE

ABSTRACT

ACKNOWLEDGEMENTS

LIST OF FIGURES

LIST OF TABLES

1. INTRODUCTION
1.1 CONTEXT OF RISK ASSESSMENT
1.2 REPORT STRUCTURE

2. BACKGROUND
2.1 RISK ASSESSMENT APPROACH
2.2 RISK ASSESSMENT TOOLS
2.3 INFORMATION SECURITY RISK
2.4 AVAILABILITY RISK ASSESSMENT

3. SERVICE-ORIENTED APPROACH
3.1 INVENTORY OF INFORMATION ASSETS
3.2 SERVICE CATALOGUE
3.3 SERVICE AVAILABILITY

4. CHARACTERISATION OF SERVICE AVAILABILITY
4.1 SERVICE AVAILABILITY COMPONENTS
4.2 SERVICE AVAILABILITY PARAMETERS

5. SERVICE CAPABILITY MATURITY ASSESSMENT
5.1 SOFTWARE SYSTEM’S DEPLOYMENT ARCHITECTURE
5.2 SYSTEM AND SUPPORT CAPABILITY MATURITY LEVELS
5.3 SERVICE CAPABILITY MATURITY ASSESSMENT

6. SERVICE CAPABILITY MEASUREMENT MATRIX
6.1 SYSTEM MTBF AND SUPPORT MTTR
6.2 MTBF AND MTTR MATRIX

7. AVAILABILITY RISK ASSESSMENT MATRIX

CONCLUSION

BIBLIOGRAPHY

APPENDIX: IT PERFORMANCE REPORTING CASE STUDY
1. Introduction
“The only truly secure system is one
that is powered off, cast in a block
of concrete and sealed in a lead-
lined room with armed guards – and
even then I have my doubts.”

Eugene H. Spafford

1.1 Context of Risk Assessment


Risk Assessment is the cornerstone of any Enterprise Risk Management (ERM)
framework. Lately, regulatory pressures such as compliance with the US Sarbanes-Oxley
Act [1] and Basel II [2] (International Convergence of Capital Measurement and Capital
Standards) have forced organizations to implement ERM frameworks. Similar regulations
exist in many countries (India’s SEBI Clause 49, Japan’s J-SOX, Canada’s Bill 198,
Australia’s CLERP 9), and many others are likely to follow, making risk assessment an
integral and important part of an organization’s management system.

As most organizations rely heavily on IT systems for their business operations, the risk
assessment exercise focuses substantially on IT systems, so as to ensure that the
security and control infrastructure is in place and operating effectively [3]. Organisations
have started implementing internal control frameworks such as COSO [4] and COBIT [5]
to address these emerging requirements.

Apart from ERM, organisations require an Information Security Management System
(ISMS) to be in place to secure vital corporate and customer information; in some
sectors, ISO 27001 certification is mandatory. An ISMS helps maintain:

a) competitive edge,
b) cash flow,
c) profitability,
d) legal compliance, and
e) commercial image.

Irrespective of whether an ERM framework or an ISMS is being implemented, the
Information Security Risk Assessment of IT systems is an important phase. The relative
position of Information Security Risk Assessment in the implementation of an ISMS is
shown in Figure 1.

Figure 1 – ISMS Road Map

1.2 Report Structure


This report is divided into seven chapters, followed by a conclusion, a bibliography and an appendix, as specified below:

• Chapter 1: Introduction

Provides the relevance of Information Security Risk Assessment in the domain of Enterprise Risk Assessment and Information Security Management Systems.

• Chapter 2: Background

Provides a brief survey of the literature, explaining various approaches and tools already established, and justifies the motivation behind this thesis.

• Chapter 3: Service-Oriented Approach

Introduces the need for a new approach with a reference to attempts made by
other researchers.

• Chapter 4: Characterisation of Service Availability

Decomposes Service Availability to facilitate the measurement of the availability percentage.

• Chapter 5: Service Capability Maturity Assessment

Proposes a new capability maturity model for assessing service capability maturity, based on the principles of Software Architecture.

• Chapter 6: Service Capability Measurement Matrix

Provides a template for measuring the parameters of service availability

• Chapter 7: Availability Risk Assessment Matrix

Shows the application of the various components of the proposed model in creating the Availability Risk Assessment Matrix.

• Conclusion

• Bibliography

• Appendix:

Provides a case study of the application of this model in reporting IT performance to management.

2. Background

2.1 Risk Assessment Approach


Stilianos Vidalis, in his work “A Critical Discussion of Risk and Threat Analysis
Methods and Methodologies” [6], has classified risk assessment approaches under
four categories:

• Quantitative Approach

• Qualitative Approach

• Knowledge-Based Approach

• Model-Based Approach

There are many model-based approaches [7], [8] that attempt to map the
infrastructure using UML and other modelling techniques, but the risk analysis itself
is done using either qualitative or quantitative techniques.

As the “moral hazard of the analyst has influence on the results because human
nature is subjective” [25], researchers have pointed out that both quantitative and
qualitative approaches are flawed, as the assessment is subjective.

Irrespective of the approach, the final risk analysis results either in qualitatively
measuring risk as “High”, “Medium” or “Low”, or in quantitatively measuring risk as
Annual Loss Expectancy (ALE), which is derived from the Annualized Rate of
Occurrence (ARO) of the risk incident (ALE = Single Loss Expectancy × ARO).

Sanjay Goel et al. [9] have proposed a matrix-based approach, comprising a
“Vulnerability Matrix”, a “Threat Matrix” and a “Control Matrix”, to perform
Information Security Risk Assessment, but it relies upon scales and weights which
are fundamentally intuitive and thus indirectly provides a qualitative assessment.

2.2 Risk Assessment Tools


In order to standardise the process and practice of risk assessment, many tools
have been adapted and developed by organisations to meet their specific
requirements.

The following are the most commonly used tools, which have found reference
across the various approaches [26]:

• Risk Assessment Matrix

• Questionnaire

Due to the inherent complexity of quantitative techniques in appropriately valuing
assets [10], the most widely used method is to create a Risk Assessment Matrix
using qualitative measurement.

2.3 Information Security Risk


ISO 27001 [11] classifies “Information Security” risk into “confidentiality”,
“integrity” and “availability” (CIA) risks. Here, “confidentiality” means “that the
assets of the system are only accessed by authorized parties”, “integrity” means
“that the assets of the system can be modified by authorized parties only, and in
authorized ways”, and “availability” means “that the assets of a system are
always available to the authorized parties” [12]. To assess Information Security
risk, the most widely followed methodology is to create a risk assessment matrix,
tabulated as in Table 1, recording an “Asset Value” and an overall “Risk Value” for
each asset. The Asset Value represents the relative importance of the asset with
respect to CIA; an asset can be assigned a cardinal value between 1 and 3,
indicating importance in the grade of Low, Medium or High. The overall Risk Value
of an asset, for the purposes of Security Risk Assessment, aims to quantify the
value of the asset at risk and is taken as:

Risk Value = Confidentiality Value + Integrity Value + Availability Value

Table 1 – Risk Assessment Matrix

Risk is a function of likelihood and impact, where “likelihood” is the frequency or
probability of occurrence of the incident and “impact” is the effect it has on the
business. By performing a Failure Mode and Effects Analysis (FMEA) exercise,
cardinal values for “likelihood” and “impact” are assigned, and the risk is then
determined using the equation risk = likelihood × impact.

Further, to relatively rank the Information Security risk of Information Assets, a
Risk Priority Number (RPN) is determined for every item listed in the inventory.
RPN is an indicator for prioritising/ranking risks that takes into account the risk
(probability of occurrence and potential impact) and the chance of non-detection:

RPN = Risk × Chance of non-detection

The RPN can range from 0 to 150. To illustrate the approach: if the “likelihood”
that a system will be unavailable is scored at 3 (medium chance) on a scale of
0-6, and the corresponding “impact” is scored at 2 (significant impact) on a scale
of 0-5, then the risk is valued at 6, which may be interpreted as medium risk. As
unavailability can be easily detected, suppose the value for “chance of
non-detection” is taken as 1; the RPN in this case is 6. Had the “chance of
non-detection” been extremely high, say 5, the RPN would be 30, indicating
increased emphasis.
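To make the arithmetic concrete, here is a minimal sketch in Python; the scores are the illustrative ones used above, and the function names are hypothetical, not part of any standard:

```python
def risk_value(confidentiality: int, integrity: int, availability: int) -> int:
    """Overall Risk Value of an asset: the sum of its CIA values (each graded 1-3)."""
    return confidentiality + integrity + availability

def rpn(likelihood: int, impact: int, non_detection: int) -> int:
    """Risk Priority Number: (likelihood x impact) x chance of non-detection."""
    risk = likelihood * impact   # likelihood on a 0-6 scale, impact on a 0-5 scale
    return risk * non_detection  # non-detection on a 0-5 scale; max RPN = 6 * 5 * 5 = 150

print(rpn(3, 2, 1))  # 6  -- the easily detected, medium-risk case above
print(rpn(3, 2, 5))  # 30 -- same risk, but a high chance of non-detection
```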

It is worth noting that the values for “CIA asset value”, “likelihood” and “impact”
are assigned intuitively, and hence the risk of the same asset is likely to vary
across assessments when the scores are assigned by different persons. It is also
evident that the Risk Value is not factored into the calculation of RPN, leaving out
the Information Security aspect of the risk.

In order to perform an in-depth Information Security Risk Assessment [13], the
risk assessment can be done for each individual criterion, viz. confidentiality,
integrity and availability. E. Zambon et al. [15] and A. Morali et al. [14] have
taken Information Security Risk Assessment a step further by proposing,
respectively, a model-based approach to availability risk assessment and an
architecture-based approach to confidentiality risk assessment.

The current trend towards cloud computing and Software as a Service (SaaS) has
brought service availability to the forefront in security risk assessment. E. Zambon
et al. [27] have rightly highlighted the emergence and importance of Service Level
Agreements (SLAs) between the service provider (the IT service) and the service
receiver. A special focus on Availability Risk Assessment as part of Information
Security Risk Assessment appears all the more relevant at this juncture, and
hence this thesis focuses on availability risk.

2.4 Availability Risk Assessment


In this thesis, a quantitative model is proposed for performing Availability Risk
Assessment as part of Information Security Risk Assessment.

The Risk Assessment Matrix has been chosen as the tool of choice, keeping in view
its simplicity and comprehensiveness when presenting to stakeholders. The
fundamental step of creating an inventory of Information Assets, as performed in
ISMS, is retained in the proposed methodology with a slight variation: a Service
Catalogue is created instead of an inventory of Information Assets.

A review of the risk assessment literature shows the following process of risk
management:

• Risk identification: The process of determining what can happen, why and how.

• Risk assessment: The overall process of risk analysis and risk evaluation.

• Risk analysis: A systematic use of available information to determine how often specified events may occur and the magnitude of their consequences.

• Risk evaluation: The process used to determine risk management priorities by comparing the level of risk against predetermined standards, target risk levels or other criteria.

• Risk treatment: Selection and implementation of appropriate options for dealing with risk.

As per the conventional approach suggested in the literature, Information Security
Risk Assessment is an exercise in which an attempt is made to identify and list the
known vulnerabilities which can be exploited. Subsequently, risk (quantitative or
qualitative) is calculated by taking into account the “Asset Value”, the “Probability
of the Vulnerability being exploited” and the “Business Impact”. Based on the risk
assessment data, a “Risk Treatment” is suggested which would reduce the overall
risk by reducing the vulnerability, the possibility of exploitation or the business
impact.

When these exercises are done repeatedly year after year, especially once most of
the recommended “Risk Treatment” has already been implemented, the
conventional approach fails to provide prescriptive inputs. The following
illustration substantiates this point. Say a core banking application is hosted on an
Application Server which is vulnerable to virus attack due to OS-level
vulnerabilities. In the Availability Risk Assessment, the probability of a successful
attack is estimated and the resultant business loss is computed. The Risk
Treatment plan gives alternative options, such as installation of antivirus software
or implementation of an automatic OS patching solution, to mitigate the risk,
along with their estimated cost so that a cost-benefit analysis can be done. Still,
many questions remain unanswered:

• What if these are already in place (which in any case will happen over the
years)?

• What is the “Residual Risk”?

• What is the overall road map in securing the Application Server?

It can be seen that the above approach focuses more on security vulnerabilities,
which ideally should be part of a “Vulnerability Assessment and Penetration
Testing” (VAPT) exercise. VAPT is a technically specialised domain conducted
specifically to unearth exploitable vulnerabilities, using technical tools to scan the
environment. Basing Availability Risk Assessment on technical vulnerabilities
dilutes its essence. The objectives of Availability Risk Assessment and Vulnerability
Assessment are different, and hence the basic approach should also be different.
The proposed model therefore deviates from the traditional approach and
attempts to provide a comprehensive view of availability risk, including residual
risk (the risk left over after prescribed controls are already in place).

Another aspect to be considered in the conventional quantitative measurement of
risk is the methodology used for assigning a financial value to assets and to the
business loss resulting from unavailability. Financial valuation should be based on
research in the field of finance, whereas IT and security professionals have taken
the liberty of assigning financial values to assets and business losses, which may
not be acceptable to the finance community.

The need for a simple and practical quantitative approach to risk assessment can
hardly be overemphasised. It can be empirically argued that the availability
percentage of a system or service is a good measure to quantify availability risk
and need not be substantiated with a monetary value. For example, if a system or
service is rated at 99.5 percent availability, the risk is clearly reflected. Hence, the
proposed methodology attempts to derive an availability percentage for
quantifying risk, rather than an ALE, for every service listed in the Service
Catalogue.

3. Service-Oriented Approach

3.1 Inventory of Information Assets


As per ISO 27002, the Code of Practice standard, the foremost step in performing
risk assessment is creating an inventory of Information Assets. In order to create
the inventory of Information Assets of an IT system, the assets are categorised
into:

• IT Hardware

• Software

• Data

• Networking and Communication

A comprehensive exercise is done to collect details of all the assets within the
scope of the risk assessment, and the details are tabulated as shown in Table 2.

Table 2 – Inventory of Information Assets (IT Systems)

Subsequent to the creation of the inventory, an item-by-item risk assessment is
done using FMEA, and a final verdict on risk is then arrived at for every asset
listed. To illustrate, the Risk Assessment Verdict of an ERP and e-mail system is
shown in Table 3.

Table 3 – Risk Assessment Verdict of ERP & e-mail System

As can be seen, the conventional approach provides a technology view, not a
business view, of risk. As per the verdict shown in Table 3, the availability of the
ERP Database Server is at lower risk compared to the ERP Application Server,
while the ERP Web Server and the Mail Gateway are at the highest risk level. The
approach fails to address questions like:

• What is the overall availability risk of the ERP system?

• What is the impact of the SAN Storage on other applications, such as the e-mail
system, which might also be using it?

• Which IT service is at higher availability risk, the ERP or the e-mail system?

A challenge in presenting the Risk Assessment Matrix to business management is
that a bird’s-eye view of the overall risk is not clearly reflected when the inventory
of Information Assets is used as the basis for conducting the risk assessment.

3.2 Service Catalogue


The business views IT as a service provider. Taking a service-oriented approach
to risk assessment enables the business process owner to directly relate the IT
systems to the business areas for which they operate. The proposed model uses a
service catalogue instead of an information asset inventory. The service catalogue
is prepared by listing the services offered to the users by the various IT systems
[16]. For example, the e-mail system might offer e-mail access using the Outlook
client, a web client or Blackberry. Similarly, all the IT systems are scrutinised to
create a comprehensive service catalogue.

The Open Group Architecture Framework suggests the use of an Application
Use-Case diagram for mapping application services. According to TOGAF [17],
“Application services are consumed by actors or other application services and
the Application Use-Case diagram provides added richness in describing
application functionality.”

In order to create the Service Catalogue, existing use case documentation can be
used, as it enables identification of the services offered to users/actors. Wherever
such documentation is not available, a high-level use case diagram for the IT
system under review needs to be created. Figure 2 shows a top-level use case of
an ERP system.

Figure 2 – High Level Use Case Diagram

The use case clearly helps in identifying the various services available to each
actor; here it can be seen that the actor “account manager” uses the “accounting”
service, the actor “payroll manager” uses the “HR and Payroll” service, and so on.
In this way, the use case diagrams of the respective IT systems enable the
creation of a comprehensive Service Catalogue.
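As a minimal sketch, the catalogue itself can be represented as a simple mapping; the system and service names below are taken from the examples above, and the structure is just one possible representation, not prescribed by the model:

```python
# One possible representation of a Service Catalogue: IT system -> services offered
service_catalogue = {
    "ERP": ["Accounting", "HR and Payroll"],
    "e-mail": ["Outlook client access", "Web client access", "Blackberry access"],
}

for system, services in service_catalogue.items():
    for service in services:
        print(f"{system}: {service}")
```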

A comparison of the Service Catalogue with the Inventory of Information Assets
clearly highlights the relevance a Service Catalogue brings to business managers,
as they see the connection to the business processes they handle. The Service
Catalogue (Table 4) clearly lists the user’s view of the IT systems.

Table 4 – Service Catalogue

3.3 Service Availability


The service-oriented approach to availability risk assessment also underpins the
need for measuring service availability, rather than asset availability, for
quantifying availability risk.

The user community can understand the impact of the non-availability of, say, the
“Accounting Service” rather than that of the “SAN Storage” (even though
non-availability of the SAN Storage would also make the accounting functionality
of the ERP system unavailable).

This thesis puts forth a model based on the service-oriented approach to create a
Risk Assessment Matrix for quantitatively measuring service availability risk.

4. Characterisation of Service Availability

4.1 Service Availability Components


An IT service is considered available when it is accessible to the end user [18].
End-to-end service availability requires that all connecting components are
functioning properly. A typical e-mail infrastructure may look as shown in Figure 3.

Figure 3 – e-mail Infrastructure

[Figure: the Internet feeds incoming and outgoing message delivery through a Mail Gateway Server on the Data Centre LAN, which also hosts a Directory Service (user management), a Mail Transport Server, and a Mail Storage Server deployed in High Availability mode.]

A user is able to use the e-mail service only when all the components are
functioning. If, say, the mail gateway is not functioning, the user will not be able
to exchange mail with external domains; if the mail store is not functioning, the
user will not be able to access even a single e-mail.

In order to decide whether the e-mail service is available, the availability of all the
components affecting the service needs to be assessed.
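As a minimal sketch of this all-or-nothing dependency (the component names are those of Figure 3; the boolean states are illustrative):

```python
# An end-to-end service is available only when every component in its chain is up
components_up = {
    "Mail Gateway Server": True,
    "Directory Service": True,
    "Mail Transport Server": True,
    "Mail Storage Server": False,  # e.g. the mail store is down
}

email_service_available = all(components_up.values())
print(email_service_available)  # False -- one failed component takes the service down
```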

4.2 Service Availability Parameters


There are two factors affecting service availability [19]: one relates to faults
resulting in failure of the service (degradation of service is not considered here),
and the other relates to restoration of the service. Whereas the conventional
approach takes only failure into consideration, the proposed model also factors in
the restoration of service, to assess overall risk including residual risk.

The availability of a service depends on how often the service fails and how much
time it takes to restore the service. Mean Time Between Failures (MTBF) measures
the average failure rate, and Mean Time To Repair (MTTR) measures the average
restoration time. Using MTBF and MTTR, the availability percentage can be
calculated as follows [20]:

Availability (%) = MTBF / (MTBF + MTTR) × 100

The proposed model puts forth a methodology for deriving MTBF and MTTR by
assessing the system and support capabilities, and then using them to calculate
the availability percentage.
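As a minimal illustration (a sketch, not part of the thesis's toolset), the formula above translates directly into code; the sample figures are arbitrary:

```python
def availability_percentage(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability (%) = MTBF / (MTBF + MTTR) * 100."""
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100

# A service failing on average every 4380 hours and taking 22 hours to restore
# is available roughly 99.5 percent of the time.
print(round(availability_percentage(4380, 22), 2))  # 99.5
```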

5. Service Capability Maturity Assessment

5.1 Software System’s Deployment Architecture


IT systems are essentially an outcome of the software engineering process.
Research in the field of software engineering has established that software
architecture has a decisive role in meeting various quality attributes, system
availability being one of them. Research also prescribes the use of software
architecture in evaluating quality attributes [21], such as availability,
performance and modifiability.

Availability of a software system or service is a Non-Functional Requirement
(NFR). Research in the field of software architecture provides methods and
notations to map the components and how they communicate with each other,
and it clearly brings out the various points of failure.

The use of software architecture in modern application environments is
necessitated by the partitioning of overall service delivery across multiple
software sub-systems. The notations and architectural primitives help describe
this complex scenario in a formal way that can be shared with all stakeholders.

Basing risk assessment on an architectural framework provides a platform for
deriving risk indicators not only for existing systems, but also for new systems.

As the proposed model uses the restoration time of a service in the computation
of service availability, it brings into focus the support available for the various
hardware and software responsible for delivering the service. Hence, the
architecture model proposed to be used is the software system’s deployment
architecture. A software system’s deployment architecture has a significant effect
on the system’s non-functional properties, availability being one of them [22].
Figure 4 illustrates the Software Deployment Architecture of a CRM system which
provides a “Customer Subscription Registration Service”.

The network and connectivity components which interconnect the various
hardware and users are consciously kept outside the scope of the Availability Risk
Assessment: the failure of the network is considered a “Network Outage”, which
affects all services, and its inclusion would affect the availability percentage of
every service by a constant factor. Instead, “Network Services” should appear in
the Service Catalogue as a separate line item requiring its own Availability Risk
Assessment.

Figure 4 – Software Deployment Architecture of CRM

In the proposed model, a Software Deployment Architecture is to be prepared for
every Service Catalogue item, uniquely identifying each independent component
(hardware and software system), which can be physically mapped onto a single
server or a cluster of servers.

5.2 System and Support Capability Maturity Levels


Watts Humphrey of the Software Engineering Institute (SEI), Carnegie Mellon
University, introduced the concept of five levels of software development process
maturity [23], which formed the basis of the Capability Maturity Model (CMM) for
Software [24]. The five levels of process maturity are depicted in Figure 5:

Initial (Level 1): Actions are ad hoc and involve firefighting

Repeatable (Level 2): Actions are repeatable but intuitive

Defined (Level 3): Actions are governed by written/documented methods

Managed (Level 4): Actions can be measured for efficiency

Optimised (Level 5): Measurements are reviewed and actions are optimised

Figure 5 – Five Levels of Process Maturity

The maturity levels used by Humphrey find universal applicability and have been
used in this thesis as a reference to evaluate system and support capability by
assessing their architectural maturity levels. The architectural maturity levels
(Figure 6) show the different styles in which an IT system can be architecturally
deployed. Here, a system means hardware and software.

Figure 6 – Architectural Maturity Level of System

[Figure: Level 1 – Single System; Level 2 – Standby System can be arranged; Level 3 – Standby System part of deployment architecture; Level 4 – System deployed in HA Mode; Level 5 – Optimised HA System]

An IT system whose components run on a set of individual hardware is considered
to be at maturity Level 1. If the environment is such that, when one or more
pieces of hardware out of the total set fail, a standby can be arranged by
redeploying available resources, then the system architectural maturity is at
Level 2. Further, if standby hardware has already been provisioned as a dedicated
replacement (defined in advance), that environment is at maturity Level 3. If the
IT system by design recognises the existence of the alternate resource and the
failover time is predictable and measurable, the architectural maturity of the IT
system is at Level 4; such IT systems are also referred to as High-Availability (HA)
systems. Finally, if a process is followed to review and reduce the failover time of
a High-Availability system, the maturity is at Level 5.

Similarly, the maturity of the support architecture for IT systems can also be
graded into five levels, as shown in Figure 7.

Figure 7 – Maturity Level for Support Architecture

Wherever an IT system’s restoration is done in a firefighting or chaotic manner,
the support maturity for that system is at Level 1. If the skill sets are already
available and identifiable and can be deployed for restoration, the support
maturity is at Level 2. If the support architecture for the system is mature enough
to make available a documented restoration process, the maturity is at Level 3.
Further, if there is a mechanism in the environment to measure the time taken to
restore and a corresponding SLA to benchmark against, the support maturity is at
Level 4. Finally, if a process is followed to review the restoration process and
optimise the SLA to reduce restoration time, the maturity is at Level 5.

5.3 Service Capability Maturity Assessment
Using the maturity models defined above for system and support architecture,
each service is assessed through its deployment architecture, and a service-wise
system maturity level and support maturity level are established. This is done by
understanding the system landscape and the support services available for the
respective services. Say, for example, that in an assessment of an e-mail system
it is found that the Blackberry service runs on a single-system architecture, the
administrator demonstrates that in case of failure a standby can be made
available to install the Blackberry application, and it is further noted that the
administrator has the skills to restore the application. In such a scenario, the
Blackberry service can be taken to be at Level 2 (“standby can be arranged”) in
system architecture maturity and Level 2 (“skill set available”) in support
architecture maturity.

A similar exercise is to be done for every Service Catalogue item to create the
Service Capability Maturity Assessment sheet shown in Table 5.

Table 5 – Service Capability Maturity Assessment

This assessment sheet records the service-wise system and support capability
maturity levels and is later extended to create the Availability Risk Assessment
Matrix. The major advantage of using maturity levels is that there is no
subjectivity, and a clear road map for improvement gets recorded for all the
identified services.
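As a minimal sketch of how a row of this sheet might be represented in code (the Blackberry entry reflects the example above; the second entry and its levels are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ServiceMaturity:
    service: str        # Service Catalogue item
    system_level: int   # system architectural maturity, 1-5 (Figure 6)
    support_level: int  # support architectural maturity, 1-5 (Figure 7)

sheet = [
    ServiceMaturity("Blackberry e-mail access", system_level=2, support_level=2),
    ServiceMaturity("Accounting (ERP)", system_level=4, support_level=3),  # hypothetical
]
```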

6. Service Capability Measurement Matrix

6.1 System MTBF and Support MTTR


MTBF data is commonly available as part of hardware data sheets. In our model,
the requirement is to obtain the MTBF for the entire system, that is, the hardware
together with all the other software that makes the service available. The Software
Deployment Architecture assists in identifying all the components of the system.
Based on empirical observation, each component is assigned an MTBF value. In
case historic data is available, the following formula can be used to derive the
component-level MTBF:

MTBF = Total operating time / Number of failures

If S denotes the set of MTBF values of all the components of the system, then:

System MTBF = min(S)

To illustrate, the Software Deployment Architecture of an e-mail system with
component-level MTBF values is shown in Figure 8.

Figure 8 – Software Deployment Architecture with MTBF

[Figure: the e-mail system’s components with their MTBF values – the Mail Gateway Server running an SMTP server (MTBF 8760 hrs), the Mail HUB Server running the mail transport server (MTBF 8760 hrs), the Mail Database Server in an HA cluster running the mail store server (MTBF 26280 hrs), the Directory Server running an LDAP server (MTBF 4380 hrs), and the client computer running the e-mail client.]

In this case, the System MTBF is 4380 hrs, which accords with the saying that
“security is only as strong as its weakest link”.
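A small sketch of the min rule, using the component-level MTBF values recoverable from Figure 8:

```python
# Component-level MTBF values in hours (from Figure 8)
component_mtbf = {
    "Mail Gateway Server": 8760,
    "Mail HUB Server": 8760,
    "Mail Database Server (HA cluster)": 26280,
    "Directory Server": 4380,
}

# System MTBF = min(S): the service fails as often as its weakest component
system_mtbf = min(component_mtbf.values())
print(system_mtbf)  # 4380
```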

Support MTTR in the proposed model is intended to help organisations reflect
upon what they consider “enterprise grade”. The MTTR value should not be biased
by the existing system and vendor-specific experiences; rather, it should be an
indicator of what the organisation considers an acceptable resolution time from its
support service.

6.2 MTBF and MTTR Matrix


The proposed model prescribes the creation of a universal MTBF and MTTR matrix
based on the techniques and principles established above. The universal
applicability of the matrix is key, and this requires rigorous discussion with the
various stakeholders to arrive at a consensus.

The matrix is created by assigning an MTBF in hours against each of the system
architecture maturity levels under the MTBF column. A corresponding MTTR in
hours is assigned for every support capability maturity level. The MTBF value for
the first three levels of system architecture maturity will be the same, as
effectively the service is operating on a single system; the difference in maturity
level indicates the capability that exists in the environment to arrange standby or
alternate systems, in other words, the maturity of the environment to
repair/restore the service. Table 6 shows the template to be used for creating the
MTBF and MTTR matrix.

Table 6 – MTBF and MTTR Matrix

As can be seen from the values, the repair time (MTTR) depends not only on the
support architectural maturity but also on the system architectural maturity. This
implies that, for a given level of support maturity, the time taken to restore a
service decreases as the system maturity level increases. Another point to note is
that the matrix will evolve as the organisation matures in its system and support
capability levels; in the initial stage, MTBF and MTTR values for Level 5 of system
architecture maturity may not be available.
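For illustration only, the matrix can be held as a simple lookup structure. In the sketch below, the Level 2 MTBF (4380 hours) and the Level 2 / Level 2 MTTR cell (16 hours) are taken from the worked example in Chapter 7; every other value is a placeholder that an organisation would have to fix by consensus:

```python
# MTBF in hours per system architecture maturity level (Levels 1-3 share a value,
# as the service effectively runs on a single system).
MTBF = {1: 4380, 2: 4380, 3: 4380, 4: 8760, 5: 17520}  # Levels 4-5 are placeholders

# MTTR in hours keyed by (system maturity, support maturity): repair time falls
# as either maturity rises. Only (2, 2) -> 16 comes from the thesis example.
MTTR = {
    (1, 1): 72, (1, 2): 24,
    (2, 1): 48, (2, 2): 16,
    (3, 3): 8, (4, 4): 2, (5, 5): 1,
}
```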

7. Availability Risk Assessment Matrix

Using the MTBF and MTTR matrix, the respective MTBF and MTTR values for each
Service Catalogue item are derived. Continuing with the earlier example, the
Blackberry service has System Architecture Maturity Level 2, hence the
corresponding MTBF value of 4380 hours is taken; as the Support Architecture
Maturity level is 2, the MTTR value is taken from the intersection of the two
maturity levels, which in this case is 16 hours. Using the availability percentage
formula, i.e. MTBF / (MTBF + MTTR) × 100, the availability of the Blackberry
service is rated at 99.636 percent.
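The same calculation in miniature (a sketch; the figures are those of the Blackberry example):

```python
mtbf, mttr = 4380, 16                      # matrix values for maturity levels (2, 2)
availability = mtbf / (mtbf + mttr) * 100
print(round(availability, 3))              # 99.636
```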

Applying the aforementioned approach to the entire Service Catalogue, an
Availability Risk Assessment Sheet is prepared, quantifying the availability
percentage against each service as shown in Table 7.

Table 7 – Availability Risk Assessment Sheet

The preparation of the Availability Risk Assessment Sheet concludes the scope of
the Availability Risk Assessment exercise. The sheet provides a comprehensive
view of the overall availability of each service offering. This not only helps
stakeholders understand the severity of the risk to the business (if any) but also
gives a clear direction for improvement. Taking the case of the Blackberry service
further, if the stakeholders feel that the availability of this service must be
increased, they can clearly choose between increasing the system maturity by
deploying an additional Blackberry server as standby (Level 3) and/or insisting on
the definition and documentation of a service restoration process.

Conclusion
The model proposed in this thesis provides a quantitative approach for conducting
Availability Risk Assessment of IT services. It provides the necessary tools and
methodologies to help engage with management and arrive at an acceptable level
of service.

An Availability Risk Assessment based on the proposed model also provides
prescriptive input for achieving the desired service levels. The desired availability
percentage can be achieved by appropriately focusing on improving system or
support maturity.

The baseline provided by the Availability Risk Assessment exercise can also be
used for benchmarking and reporting the performance of IT operations. In
addition, this methodology can assist in performing Availability Risk Assessment
of new systems that are in the design stage, thereby providing valuable input to
management at an early stage of system development.

Bibliography
[1] Sarbanes-Oxley Act of 2002. Assessment of Internal Control, 2002.

[2] Basel II. Revised international capital framework, 2005, from: http://www.bis.org/publ/bcbsca.htm.

[3] Tipton, Harold F., and Micki Krause, Information Security Management Handbook, Sixth Edition. Books24x7: Auerbach Publications, 2007.

[4] COSO. Enterprise Risk Management – Integrated Framework Executive Summary, September 2004, from: www.coso.org/Publications/ERM/COSO_ERM_ExecutiveSummary.pdf.

[5] COBIT. IT Governance Institute, 1996-2007, from: www.isaca.org/cobit.

[6] Stilianos Vidalis. A Critical Discussion of Risk and Threat Analysis Methods and Methodologies [Technical Report]. Wales, UK: University of Glamorgan, 2004.

[7] F. Innerhofer-Oberperfler, and R. Breu, Using an enterprise architecture for IT risk management. In Proceedings of the ISSA 2006 From Insight to Foresight Conference. 5-7 July, Sandton, South Africa, 2006.

[8] R. Breu, F. Innerhofer-Oberperfler, and A. Yautsiukhin, Quantitative assessment of enterprise security system. In Int. Workshop on Privacy and Assurance. IEEE Computer Society, 2008.

[9] Goel, S. and Chen, V., Information security risk analysis-A matrix-based
approach. In Proceedings of the 2005 Annual International Conference of the
Information Resources Management Association (IRMA). May 15-18, San Diego,
CA, 2005.

[10] Reilly F. R., Schweihs P. R., Valuing Intangible Assets. New York: McGraw-
Hill, 1999.

[11] ISO/IEC 27001:2005. Information security management systems – Requirements, 2005, from: www.iso.org.

[12] Pfleeger, C. P., Security in Computing, Third Edition. New Jersey: Pearson
Education Inc, 2003.

[13] Artur Rot. Enterprise Information Technology Security: Risk Management Perspective. In Proceedings of the World Congress on Engineering and Computer Science 2009. October 20-22, San Francisco, USA, 2009.

[14] A. Morali et al., IT Confidentiality Risk Assessment for an Architecture-Based Approach. In Proceedings of BDIM 2008, 3rd IEEE/IFIP International Workshop on Business-Driven IT Management. April 7, Salvador, Brazil, 2008.

[15] E. Zambon, D. Bolzoni, S. Etalle, and M. Salvato, Model-Based Mitigation of Availability Risks [Technical Report]. University of Twente, 2007.

[16] Miler J., A service-oriented approach to the identification of IT Risk, In
Proceedings of IEEETEHOSS 2005 conference. September 28-30, Gdańsk, Poland,
2005.

[17] The Open Group, TOGAF™ Version 9. Van Haren Publishing, 2009, from: http://www.opengroup.org/togaf.

[18] M. Dahlin, B. Chandra, L. Gao, and A. Nayate, End-to-end WAN service availability. IEEE/ACM Transactions on Networking, vol. 11, no. 2, Apr. 2003.

[19] J. Tapolcai, P. Chołda, T. Cinkler, K. Wajda, A. Jajszczyk, A. Autenrieth, S. Bodamer, D. Colle, G. Ferraris, H. Lønsethagen, I.-E. Svinnset, and D. Verchere, Quality of Resilience (QoR): NOBEL Approach to the Multi-Service Resilience Characterization. In Proceedings of the 1st IEEE/CreateNet International Workshop on Guaranteed Optical Service Provisioning (GOSP 2005). October 7, Boston, MA, 2005.

[20] Z. Xu, Z. Kalbarczyk, and R. K. Iyer. Networked Windows NT System Field Failure Data Analysis. In Proceedings of the Pacific Rim International Symposium on Dependable Computing. December 16-17, Hong Kong, China, 1999.

[21] Bass, Len & others. Software Architecture in Practice, Second Edition.
Boston: Pearson Education Inc, 2003.

[22] N. Medvidovic and S. Malek. Software deployment architecture and quality-of-service in pervasive environments. In Proceedings of the International Workshop on the Engineering of Software Services for Pervasive Environments (ESSPE 2007). September, Dubrovnik, Croatia, 2007.

[23] W.S. Humphrey, Characterizing the Software Process: A Maturity Framework [Technical Report]. Software Engineering Institute, 1987.

[24] Paulk, M., Curtis, B., Chrissis, M., and Weber, C. Capability Maturity Model for Software (Version 1.1) [Technical Report]. Software Engineering Institute, 1993.

[25] Adrian Munteanu, Information Security Risk Assessment: The Qualitative Versus Quantitative Dilemma. In Khalid S. Soliman (Ed.), Managing Information in the Digital Economy: Issues & Solutions – Proceedings of the 6th International Business Information Management Association (IBIMA) Conference, pp. 227, 19-21 June, Bonn, Germany, 2006.

[26] Jaisingh, J., and Rees, J., Value at risk: A methodology for information security risk assessment. In Proceedings of the INFORMS Conference on Information Systems and Technology 2001. November 4-7, Miami, Florida, 2001.

[27] E. Zambon, D. Bolzoni, S. Etalle, and M. Salvato. Model-Based Mitigation of Availability Risks. In Proceedings of BDIM 2007, 2nd IEEE/IFIP International Workshop on Business-Driven IT Management. May 21, Munich, Germany, 2007.

Appendix: IT Performance Reporting Case
Study
The model proposed in this thesis was put to practical use in the monthly MIS
report submitted to management and for creating a business case for improvement.

Figure A-1 – Service Availability MIS – Internet Browsing

The risk assessment of the Internet Browsing service gave a service capability
index (availability percentage) of 98.92%, derived from the historic data available
for MTBF and MTTR. Based on the assessment, a commitment was made to the
user community and the service was monitored. In the month of October and in
subsequent months, a high rate of system failure was noted, which resulted in
SLA violations. Due to the clear reduction in MTBF with respect to what was
considered “enterprise grade”, an immediate decision could be taken to change
the service provider.

In another IT service, related to Document Management (DMS), an SLA of 99.91%
was agreed with the end-user department based on the historic data of MTBF and
MTTR. Though there were a few hardware failures in the month of August, IT
management remained confident, as the overall infrastructure and support
capability maturity was good enough to meet the agreed SLA.

Figure A-2 – Service Availability MIS – Document Management Service

The baseline data which emerged from the availability risk assessment for this
service also gave a clear path for improvement: creating a well-documented
process for shifting the service to the standby system.

Using the availability risk assessment, IT could engage with users on SLA-based
delivery of the various services offered and report to management (Table A-1) on
its ability to meet the committed SLAs.

Table A-1 – Service-wise IT Performance Sheet

Checklist of items for the Final Dissertation Report
1. Is the final report properly hard bound? (Spiral-bound, soft-bound or perfect-bound reports are not acceptable.) Yes / No
2. Is the Cover page in proper format as given in Annexure A? Yes / No
3. Is the Title page (inner cover page) in proper format? Yes / No
4. (a) Is the Certificate from the Supervisor in proper format? Yes / No
   (b) Has it been signed by the Supervisor? Yes / No
5. Is the Abstract included in the report properly written within one page? Yes / No
   Have the technical keywords been specified properly? Yes / No
6. Is the title of your report appropriate? The title should be adequately descriptive, precise and must reflect the scope of the actual work done. Yes / No
7. Have you included the List of Abbreviations/Acronyms? (Uncommon abbreviations/acronyms should not be used in the title.) Yes / No
8. Does the Report contain a summary of the literature survey? Yes / No
9. Does the Table of Contents include page numbers?
   (i) Are the pages numbered properly? (Chapter 1 should start on page 1.) Yes / No
   (ii) Are the figures numbered properly? (Figure numbers and figure titles should be at the bottom of the figures.) Yes / No
   (iii) Are the tables numbered properly? (Table numbers and table titles should be at the top of the tables.) Yes / No
   (iv) Are the captions for the figures and tables proper? Yes / No
   (v) Are the appendices numbered properly? Are their titles appropriate? Yes / No
10. Is the conclusion of the Report based on discussion of the work? Yes / No
11. Are References or Bibliography given at the end of the Report? Yes / No
    Have the References been cited properly inside the text of the Report? Yes / No
    Is the citation of References in proper format? Yes / No
12. Is the report format and content according to the guidelines? The report should not be a mere printout of a PowerPoint presentation, or a user manual. Source code of software need not be included in the report. Yes / No

Declaration by Student:

I certify that I have properly verified all the items in this checklist and ensure that
the report is in proper format as specified in the course handout.

______________________________
Place: _____________________ Signature of the Student

Date:______________________ Name: HARIHARAN M__________


ID No.: 2007HZ12033__________
