A QUANTITATIVE MODEL FOR INFORMATION SECURITY RISK
ASSESSMENT
By
HARIHARAN M
(2007HZ12033)
PILANI (RAJASTHAN)
MARCH 2010
CERTIFICATE
This is to certify that the Dissertation entitled A QUANTITATIVE MODEL FOR INFORMATION
SECURITY RISK ASSESSMENT and submitted by HARIHARAN M, having ID-No. 2007HZ12033 for
the partial fulfillment of the requirements of M.S. (Software Systems) degree of BITS, embodies the
______________________
Birla Institute of Technology & Science, Pilani
Work-Integrated Learning Programmes Division
Second Semester 2009-2010
BITS ZG629T : Dissertation
ID No. : 2007HZ12033
NAME OF THE STUDENT : HARIHARAN M
EMAIL ADDRESS : mhharan@yahoo.com
STUDENT’S EMPLOYING ORGANISATION & LOCATION : SAMTEL GROUP (NEW DELHI)
SUPERVISOR’S NAME : MR. SUDHIR K MITTAL
SUPERVISOR’S EMPLOYING ORGANISATION & LOCATION : SAMTEL GROUP (NEW DELHI)
SUPERVISOR’S EMAIL ADDRESS : skmittal@samtelgroup.com
DISSERTATION TITLE : A QUANTITATIVE MODEL FOR INFORMATION
SECURITY RISK ASSESSMENT
ABSTRACT
Information Security is of paramount importance in today’s digital world, especially with statutory
and regulatory pressure building on corporations to introduce an Enterprise Risk Assessment
Framework. There has been an increased focus on performing information security risk
assessment by independently handling the “Confidentiality”, “Integrity” and “Availability” aspects
of Information Security risk. Irrespective of the type of information asset, its “Availability” is of
utmost importance, and the same has been taken as the theme of this thesis.
The currently available methodologies and approaches for performing Availability Risk
Assessment either provide a technology view of risk assessment or trespass into financial
valuation for quantifying risk.
This thesis departs from the conventional approach and puts forth a new model, defining a
service-oriented approach to availability risk assessment on one side and quantifying risk in
non-monetary terms on the other. In order to quantify risk, the established theory of software
architecture is used to derive the availability percentage.
A case study has also been presented to substantiate the application of the proposed model in
management reporting of IT Performance.
____________________ ______________________
Date: Date:
Place: Place:
Acknowledgements
I would also like to thank Dr. H Sathyanarayana Sai, General Manager (IS),
Manav Rachna International University, for sparing valuable time in providing
research references and for his support as the additional examiner for this
dissertation work.
_____________________
Hariharan M
Divisional Manager - IT
SAMTEL Group
List of Figures
List of Tables
Table of Contents
CERTIFICATE ..............................................................................................................................................................I
ABSTRACT .................................................................................................................................................................II
ACKNOWLEDGEMENTS ..........................................................................................................................................III
1. INTRODUCTION .................................................................................................................................................. 1
2. BACKGROUND ................................................................................................................................................... 4
CONCLUSION ........................................................................................................................................................... 22
BIBLIOGRAPHY........................................................................................................................................................ 23
1. Introduction
“The only truly secure system is one
that is powered off, cast in a block
of concrete and sealed in a lead-
lined room with armed guards – and
even then I have my doubts.”
Eugene H. Spafford
a) competitive edge,
b) cash-flow,
c) profitability,
d) legal compliance and
e) commercial image.
Figure 1 – ISMS Road Map
• Chapter 1: Introduction
• Chapter 2: Background
Introduces the need for a new approach with a reference to attempts made by
other researchers.
• Chapter 7: Availability Risk Assessment Matrix
• Conclusion
• References:
• Appendix:
2. Background
• Quantitative Approach
• Qualitative Approach
• Knowledge-Based Approach
• Model-Based Approach
There are many model-based approaches [7],[8], which attempt to map the
infrastructure using UML and other modelling techniques, but the risk analysis is
done using either qualitative or quantitative techniques.
As the “moral hazard of the analyst has influence on the results because human
nature is subjective” [25], researchers have pointed out that both the quantitative
and qualitative approaches have flaws, as the assessment is subjective.
Irrespective of the approach, the final risk analysis results in either qualitatively
measuring risk as “High”, “Medium” or “Low”, or quantitatively measuring risk as
annual loss expectancy (ALE), which is derived from the annualized rate of
occurrence (ARO) of risk incidents.
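The ALE mentioned above follows the standard quantitative formula, ALE = SLE × ARO, where the Single Loss Expectancy (SLE) is the monetary loss from one incident. A minimal sketch of the arithmetic; the asset value, exposure factor and occurrence rate below are hypothetical figures, not values from this thesis:

```python
def annual_loss_expectancy(asset_value: float, exposure_factor: float, aro: float) -> float:
    """ALE = SLE x ARO, where SLE = asset value x exposure factor."""
    sle = asset_value * exposure_factor
    return sle * aro

# Hypothetical figures: a 200,000 asset losing 25% per incident,
# with an incident expected once every five years (ARO = 0.2).
print(annual_loss_expectancy(200_000, 0.25, 0.2))  # 10000.0
```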
Sanjay Goel et al. [9] have proposed a matrix-based approach comprising a
“Vulnerability Matrix”, “Threat Matrix” and “Control Matrix” to perform
Information Security Risk Assessment, but it relies upon scales and weights
which are fundamentally intuitive and indirectly provide a qualitative assessment.
The following are the most commonly used tools, which have found reference
across the various approaches [26].
• Questionnaire
Due to the inherent complexity of the quantitative technique in appropriately valuing
assets [10], the most widely used method is creating a Risk Assessment Matrix
using qualitative measurement.
Further, to relatively rank the Information Security risk of each “Information Asset”,
the Risk Priority Number (RPN) is determined for every item listed in the inventory.
The RPN is an indicator for prioritising/ranking risks that takes into account the risk
(probability of occurrence and potential impact) and the chance of non-detection.
The RPN can range from zero to 150. To illustrate the approach, if the
“likelihood” that a system will be unavailable is scored at 3 (medium chance) on a
scale of 0-6, and the corresponding “impact” is scored at 2 (significant impact) on
a scale of 0-5, then the risk is valued at 6, which may be interpreted as medium
risk. As unavailability can be easily detected, suppose the value for “chance of
non-detection” is taken as 1; the RPN in this case is 6. Had the “chance of
non-detection” been extremely high, say 5, the RPN would be 30, indicating
increased emphasis.
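The RPN arithmetic above can be sketched as follows. The likelihood and impact scales are as stated in the text; treating the chance of non-detection as a 0-5 scale is an assumption made so that the maximum RPN works out to 6 × 5 × 5 = 150:

```python
def risk_priority_number(likelihood: int, impact: int, non_detection: int) -> int:
    """RPN = risk value (likelihood x impact) x chance of non-detection."""
    assert 0 <= likelihood <= 6 and 0 <= impact <= 5 and 0 <= non_detection <= 5
    risk_value = likelihood * impact  # e.g. 3 x 2 = 6, a medium risk
    return risk_value * non_detection

print(risk_priority_number(3, 2, 1))  # 6  -- unavailability easily detected
print(risk_priority_number(3, 2, 5))  # 30 -- chance of non-detection very high
```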
It is worth noting that the values for “CIA asset value”, “likelihood” and
“impact” are assigned intuitively, and hence the risk score of the same asset is likely
to vary across assessments if the scores are assigned by different persons. It is
also evident that the Risk Value has not been factored into the calculation of the
RPN, leaving behind the Information Security aspect of risk.
In order to perform an in-depth “Information Security Risk Assessment” [13], the risk
assessment can be done for individual criteria, viz. confidentiality, integrity and
availability. A. Morali et al. [14] and E. Zambon et al. [15] have taken
Information Security Risk Assessment a step further by proposing a model-based
approach to Availability Risk Assessment and an architecture-based approach to
confidentiality risk assessment respectively.
The current trend towards Cloud computing and Software as a Service (SaaS) has
brought Service Availability to the forefront in Security Risk Assessment. E.
Zambon et al. [27] have rightly highlighted the emergence and importance of
Service Level Agreements (SLAs) between the service provider (IT Service) and
the service receiver. Having a special focus on Availability Risk Assessment as
part of Information Security Risk Assessment appears all the more relevant at
this juncture, and hence the thesis is focused on Availability Risk.
The Risk Assessment Matrix has been chosen as the tool of choice in view of the
simplicity and comprehensiveness it offers for presenting to stakeholders. The
fundamental step of creating an inventory of Information Assets, as performed in
ISMS, is maintained in the proposed methodology with a slight variation: a
Service Catalogue is created instead of an inventory of Information Assets.
• Risk identification: The process of determining what can happen, why and how.
• Risk assessment: The overall process of risk analysis and risk evaluation.
The risk value (quantitative or qualitative) is calculated by taking into account the “Asset Value”,
“Probability of Vulnerability being exploited” and “Business Impact”. Based on the
“Risk Assessment” data, a “Risk Treatment” is suggested which would reduce the
overall risk by reducing the vulnerability, the possibility of exploitation or the
business impact.
When these exercises are done repeatedly year after year, especially when most
of the recommended “Risk Treatment” has already been implemented, the
conventional approach fails to provide prescriptive inputs. The following
illustration substantiates this point. Say a core banking application is hosted
on an Application Server which is vulnerable to virus attack due to OS-level
vulnerabilities. In the Availability Risk Assessment, the probability of a successful
attack is estimated and the resultant business loss is computed. The Risk
Treatment plan gives alternative options, such as installation of antivirus
software or implementation of an automatic OS patching solution, to mitigate the
risk, along with the estimated cost so that a cost-benefit analysis can be done.
Still, many questions remain unanswered.
• What if these are already in place (which in any case will happen over the
years)?
It can be seen that the above approach focuses more on security vulnerabilities,
which ideally should be part of a “Vulnerability Assessment and Penetration
Testing” (VAPT) exercise. VAPT is a technically specialised domain conducted
specifically to unearth exploitable vulnerabilities; the exercise uses technical
tools to scan the environment and list them. Basing Availability Risk
Assessment on technical vulnerabilities dilutes the essence of Availability Risk
Assessment. The objectives of Availability Risk Assessment and Vulnerability
Assessment are different, and hence the basic approach should also be different.
The proposed model therefore deviates from the traditional approach and
attempts to provide a comprehensive view of Availability Risk, including
Residual Risk (the risk left over after prescribed controls are already in place).
The need for a simple and practical quantitative approach to risk assessment
can hardly be overemphasised. It can be empirically argued that the availability
percentage of a system or service is a good measure to quantify Availability Risk
and need not be substantiated with a monetary value. For example, if a system or
service is rated at 99.5 percent availability, the risk is clearly reflected. Hence the
proposed methodology attempts to derive the availability percentage for quantifying
risk, rather than the ALE, for every service listed in the Service Catalogue.
3. Service Oriented Approach
• IT Hardware
• Software
• Data
A comprehensive exercise is done to collect details of all the assets within the
scope of the risk assessment, tabulated as shown in Table 2.
Table 3 – Risk Assessment Verdict of ERP & e-mail System
As can be seen, the conventional approach provides a technology view and not a
business view of risk. As per the verdict shown in Table 3, the availability of the ERP
Database Server is at lower risk compared to the ERP Application Server, while the
ERP Web Server and Mail Gateway are at the highest risk level. It fails to address
questions like:
• What is the impact of the SAN Storage on other applications, like the e-mail
system, which might also be using the SAN Storage?
The Open Group Architecture Framework suggests the use of an Application Use-
Case diagram for mapping application services. According to TOGAF [17],
“Application services are consumed by actors or other application services and
the Application Use-Case diagram provides added richness in describing
application functionality”.
The use case clearly helps in identifying the various services available to actors;
here it can be seen that the actor “account manager” uses the “accounting” service,
the actor “payroll manager” uses the “HR and Payroll” service, etc. In this way, the
use-case diagram of the respective IT system will enable the creation of a
comprehensive Service Catalogue.
Table 4 —Service Catalogue
4. Characterisation of Service Availability
Message Delivery
A user would be able to use the service of the e-mail system only when all the
components are functioning. If, say, the mail gateway is not functioning, then the
user will not be able to exchange mails with external domains, and if the mail store
is not functioning, then the user will not be able to access even a single e-mail.
In order to decide whether the e-mail service is available, the availability of all the
components affecting the service needs to be assessed.
Service availability thus has two dimensions: one related to the failure of service and
another related to the restoration of service. Whereas in the conventional approach
only failure is taken into consideration, in the proposed model the restoration of
service is also factored in to assess the overall risk, including residual risk.
The availability of a service depends on how often the service fails and how much
time it takes to restore the service. Mean Time Between Failures (MTBF) measures
the average failure rate and Mean Time To Repair (MTTR) measures the average
restoration time. Using MTBF and MTTR, the availability percentage can be
calculated as follows [20]:

Availability (%) = MTBF / (MTBF + MTTR) × 100
The proposed model puts forth a methodology for deriving MTBF and MTTR by
assessing the system and support capabilities, and then using them to calculate
the availability percentage.
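As a quick sketch of the availability calculation just described:

```python
def availability_pct(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability (%) = MTBF / (MTBF + MTTR) x 100."""
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100

# An MTBF of 4380 hrs with an MTTR of 16 hrs (the figures used later in
# the Blackberry service example) yields roughly 99.636 percent.
print(round(availability_pct(4380, 16), 3))
```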
5. Service Capability Maturity Assessment
As the proposed model uses the restoration time of a service for the computation of
Service Availability, it brings into focus the support available for the various
hardware and software responsible for delivering the service. Hence the
architecture modelling proposed to be used is the software system’s deployment
architecture. A software system’s deployment architecture has a significant effect
on the system’s non-functional properties, availability being one of them [22].
Figure 4 illustrates the “Software Deployment Architecture” of a CRM system
which provides “Customer Subscription Registration Service”.
Figure 4 — Software Deployment Architecture of CRM
Initial (Level 1): There is ad-hocism and fire-fighting in the actions
Repeatable (Level 2): The actions are repeatable but are intuitive
Defined (Level 3): The actions are defined and documented
Managed (Level 4): The actions are measured and controlled
Optimised (Level 5): The measurements are reviewed and actions optimised
Figure 5—Five Levels of Process Maturity
The maturity levels used by Humphrey find universal applicability and have
been used in this thesis as a reference to evaluate the system and support
capability by assessing their architectural maturity levels. The architectural
maturity levels (Figure 6) show the different styles in which an IT system can be
architecturally deployed. Here, a System means hardware and software.
Figure 6 — System Architectural Maturity Levels:
• Level 1: Single system
• Level 2: Standby system can be arranged
• Level 3: Standby system part of the deployment architecture
• Level 4: System deployed in HA mode
An IT system whose components are running on a set of individual hardware is
considered to be at maturity Level 1. If the environment is such that, when one or
more pieces of hardware out of the total set fail, a standby can be arranged by
redeployment of available resources, then the system’s architectural maturity is at
Level 2. Further, if standby hardware has already been provisioned as a dedicated
replacement (defined in advance), then that environment is at maturity Level 3. If
an IT system, by design, recognises the existence of an alternate resource and the
failover time is predictable and measurable, then the architectural maturity of the
IT system is at Level 4; such IT systems are also referred to as High-Availability
(HA) systems. Finally, if a process is followed to review and reduce the failover
time of a High-Availability system, then the maturity is at Level 5.
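The level definitions above amount to a simple decision cascade; it can be sketched as follows, evaluating from the highest level downwards (the boolean flag names are my own shorthand, not terms from the thesis):

```python
def system_architecture_maturity(
    standby_can_be_arranged: bool,
    dedicated_standby_provisioned: bool,
    ha_with_predictable_failover: bool,
    failover_time_reviewed: bool,
) -> int:
    """Map a system's deployment characteristics to the five maturity levels."""
    if ha_with_predictable_failover:
        # Level 5 adds a process to review and reduce the failover time.
        return 5 if failover_time_reviewed else 4
    if dedicated_standby_provisioned:
        return 3
    if standby_can_be_arranged:
        return 2
    return 1  # single system, no standby capability

print(system_architecture_maturity(True, False, False, False))  # 2
```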
Similarly, the maturity of the support architecture for IT systems can also be graded
into five levels, as shown in Figure 7.
5.3 Service Capability Maturity Assessment
Using the maturity model defined above for system and support architecture, each
service is assessed using its deployment architecture, and a service-wise system
maturity level and support maturity level are established. This is done by
understanding the system landscape and support services available for the
respective services. Say, for example, in an assessment of an e-mail system, it is
found that the Blackberry service is running on a single-system architecture; the
administrator demonstrates that, in case of its failure, a standby can be made
available to install the Blackberry application, and it is further noted that the
administrator has the skills to restore the application. In such a scenario, the
Blackberry service can be presumed to be at Level 2 (“standby can be arranged”)
in system architecture maturity and Level 2 (“skillset available”) in support
architecture maturity.
A similar exercise is to be done for every Service Catalogue item to create the
Service Capability Maturity Assessment sheet shown in Table 5.
This assessment sheet records the service-wise system and support capability
maturity levels and is later extended to create the Availability Risk Assessment
Matrix. The major advantage of using the maturity level is that there is no
subjectivity and a clear road map for improvement gets recorded for all the
identified services.
6. Service Capability Measurement Matrix
(The matrix tabulates the MTBF in hours of each component delivering the service
— the Operating System, the E-mail Client and the Client Computer — of which the
Operating System MTBF of 4380 hrs is the lowest.)
In this case, the System MTBF is 4380 hrs, which is commensurate with the saying
“security is only as strong as its weakest link”.
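The “weakest link” reading — taking the service’s System MTBF to be the lowest component MTBF in the chain — can be sketched as follows. Only the Operating System figure of 4380 hrs comes from the text; the other component values are hypothetical:

```python
# MTBF in hours for each component the e-mail client service depends on.
component_mtbf = {
    "Operating System": 4380,   # figure from the text
    "E-mail Client": 8760,      # hypothetical
    "Client Computer": 17520,   # hypothetical
}

# The service is only as strong as its weakest link.
system_mtbf = min(component_mtbf.values())
print(system_mtbf)  # 4380
```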
The Support MTTR in the proposed model intends to help organisations reflect on
what they consider “enterprise grade”. The MTTR value should not be biased
by existing system- and vendor-specific experiences; rather, the value should
be an indicator of what the organisation considers an acceptable resolution time
from its support service.
This matrix is therefore created by assigning MTBF in hours against each of the
system architecture maturity levels under the MTBF column. A corresponding
MTTR in hours is assigned for every support capability maturity level. The MTBF
value for the first three levels of system architecture maturity will be the same,
as effectively the service is operating on a single system. The difference in
maturity level is an indicator of the capability that exists in the environment to
arrange standby or alternate systems, in other words it reflects the maturity of
the environment to repair/restore the service. Table 6 shows the template that is
to be used for creating the MTBF and MTTR matrix.
As can be seen from the values, the repair time (MTTR) is dependent not only on
the support architectural maturity but also on the system architectural maturity. This
implies that, given a particular level of support maturity, the time taken to restore
a service would decrease with an increase in the system maturity level. Another
point to be noted is that the matrix would evolve as the organisation matures in
its system and support capability levels; in the initial stages, MTBF and MTTR
values for Level 5 of system architecture maturity may not be available.
7. Availability Risk Assessment Matrix
Using the MTBF and MTTR matrix, the respective MTBF and MTTR values for each
Service Catalogue item are derived. Continuing with the earlier example, the
Blackberry service has System Architecture Maturity Level 2, hence the
corresponding MTBF value of 4380 hours is taken; as the Support
Architecture Maturity level is 2, the MTTR value is taken from the intersection of
the maturity levels, which in this case is 16 hours. Using the availability
percentage formula, i.e., MTBF / (MTBF + MTTR) × 100, the availability of the
Blackberry service is rated at 99.636 percent.
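Tying the steps together, the lookup from the MTBF/MTTR matrix and the availability computation for the Blackberry example can be sketched as follows. Only the Level-2 MTBF of 4380 hrs and the (2, 2) MTTR of 16 hrs come from the text; the other matrix cells are hypothetical placeholders:

```python
# MTBF (hrs) per system architecture maturity level. Levels 1-3 share a
# value, as the service effectively runs on a single system.
MTBF = {1: 4380, 2: 4380, 3: 4380, 4: 8760, 5: 17520}

# MTTR (hrs) indexed by (system maturity, support maturity); restoration
# gets faster as either maturity level rises.
MTTR = {(2, 1): 48, (2, 2): 16, (2, 3): 8}

def service_availability(system_level: int, support_level: int) -> float:
    """Derive the availability percentage for a Service Catalogue item."""
    mtbf = MTBF[system_level]
    mttr = MTTR[(system_level, support_level)]
    return mtbf / (mtbf + mttr) * 100

# Blackberry service: system maturity 2, support maturity 2.
print(round(service_availability(2, 2), 3))  # 99.636
```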
The preparation of the Availability Risk Assessment Sheet concludes the scope of the
Availability Risk Assessment exercise. This sheet provides a comprehensive view
of the overall availability of each service offering. It not only helps
stakeholders understand the severity of the risk to the business (if any) but also
gives a clear direction for improvement. Taking the case of the Blackberry service
further, if the stakeholders feel that the availability of this service is to be increased,
they can clearly choose between increasing the system maturity by deploying an
additional Blackberry server as standby (Level 3) and/or insisting on the definition
and documentation of a service restoration process.
Conclusion
The model proposed in this thesis provides a quantitative approach for conducting
Availability Risk Assessment of IT services. It provides the necessary tools
and methodologies to help in engaging with management to arrive at an
acceptable level of service.
The baseline provided by the Availability Risk Assessment exercise can also be
used for benchmarking and reporting the performance of IT operations. In
addition, this methodology can assist in performing Availability Risk Assessment
of new systems that are in the design stage, thereby providing valuable input to
management at an early stage of system development.
Bibliography
[1] Sarbanes-Oxley Act of 2002. Assessment of Internal Control, 2002.
[3] Tipton, Harold F., and Micki Krause, Information Security Management
Handbook , Sixth Edition. Books24x7: Auerbach Publications, 2007.
[6] Stilianos Vidalis. A Critical Discussion of Risk and Threat Analysis Methods and
Methodologies[Technical Report]. Wales, UK: University of Glamorgan; 2004.
[9] Goel, S. and Chen, V., Information security risk analysis-A matrix-based
approach. In Proceedings of the 2005 Annual International Conference of the
Information Resources Management Association (IRMA). May 15-18, San Diego,
CA, 2005.
[10] Reilly F. R., Schweihs P. R., Valuing Intangible Assets. New York: McGraw-
Hill, 1999.
[12] Pfleeger, C. P., Security in Computing, Third Edition. New Jersey: Pearson
Education Inc, 2003.
[16] Miler J., A service-oriented approach to the identification of IT Risk, In
Proceedings of IEEETEHOSS 2005 conference. September 28-30, Gdańsk, Poland,
2005.
[17] The Open Group, TOGAF™ Version 9. Van Haren Publishing, 2009, from:
http://www.opengroup.org/togaf
[20] Z. Xu, Z. Kalbarczyk, and R. K. Iyer. Networked Windows NT System Field
Failure Data Analysis. In Proceedings of the Pacific Rim International Symposium
on Dependable Computing. December 16-17, Hong Kong, China, 1999.
[21] Bass, L., Clements, P., and Kazman, R. Software Architecture in Practice,
Second Edition. Boston: Pearson Education Inc, 2003.
[24] Paulk, M., Curtis, B., Chrissis, M., and Weber, C. Capability Maturity Model
for Software (Version 1.1)[Technical Report]. Software Engineering Institute,
1993.
[26] Jaisingh, J., Rees, J, Value at risk: A methodology for information security
risk assessment. In Proceedings of the INFORMS Conference on Information
Systems and Technology 2001. November 4-7, Miami, Florida, 2001
Appendix: IT Performance Reporting Case
Study
The model proposed in this thesis was practically used in the Monthly MIS Report
submitted to the management and for creating a business case for improvement.
Figure A-2— Service Availability MIS – Document Management Service
The baseline data which emerged from the availability risk assessment for this
service also gave a clear path for improvement, which was to create a well-
documented process for shifting the service to the standby system.
Using the availability risk assessment, IT could engage with users on an SLA-based
offering for the various services provided and report to the management (Table 8)
on its ability to meet the committed SLAs.
Checklist of items for the Final Dissertation Report
1. Is the final report properly hard bound? (Spiral bound or Soft bound or Perfect bound reports are not acceptable.) Yes / No
2. Is the Cover page in proper format as given in Annexure A? Yes / No
3. Is the Title page (Inner cover page) in proper format? Yes / No
4. (a) Is the Certificate from the Supervisor in proper format? Yes / No
   (b) Has it been signed by the Supervisor? Yes / No
5. Is the Abstract included in the report properly written within one page? Yes / No
   Have the technical keywords been specified properly? Yes / No
6. Is the title of your report appropriate? The title should be adequately descriptive, precise and must reflect the scope of the actual work done. Yes / No
7. Have you included the List of Abbreviations / Acronyms? Yes / No
   (Uncommon abbreviations / acronyms should not be used in the title.)
8. Does the Report contain a summary of the literature survey? Yes / No
9. Does the Table of Contents include page numbers?
   (i) Are the pages numbered properly? (Ch. 1 should start on Page # 1) Yes / No
   (ii) Are the Figures numbered properly? (Figure numbers and figure titles should be at the bottom of the figures) Yes / No
   (iii) Are the Tables numbered properly? (Table numbers and table titles should be at the top of the tables) Yes / No
   (iv) Are the captions for the Figures and Tables proper? Yes / No
   (v) Are the Appendices numbered properly? Are their titles appropriate? Yes / No
10. Is the conclusion of the Report based on a discussion of the work? Yes / No
11. Are References or Bibliography given at the end of the Report? Yes / No
    Have the References been cited properly inside the text of the Report? Yes / No
    Is the citation of References in proper format? Yes / No
12. Is the report format and content according to the guidelines? (The report should not be a mere printout of a PowerPoint presentation, or a user manual. Source code of software need not be included in the report.) Yes / No
Declaration by Student:
I certify that I have properly verified all the items in this checklist and ensure that
the report is in proper format as specified in the course handout.
______________________________
Place: _____________________ Signature of the Student