IT systems have become highly complex due to the prevalence of interconnection. As systems have evolved, most enterprise environments have leveraged the many advantages interconnected systems offer, leaving most businesses heavily dependent on IT systems to remain operational. The majority of businesses now rely on their IT systems to provide a competitive advantage. This reliance means that a failure of confidentiality, integrity or availability can cause great disruption, with the resulting fallout leading to loss of revenue and reputational damage. Risks are therefore actively managed to minimise such events while enabling enterprise networks to support vital business processes. To manage these risks, businesses have adopted standards such as ISO 27001 and NIST SP800-30, which recommend a risk assessment as a necessary activity within a security risk management framework .
The risk to IT systems cannot be overstated, as shown by widely publicised incidents. In 2017 the National Health Service in the United Kingdom suffered widespread disruption due to the WannaCry ransomware, which locked users out . The attack resulted in a number of scheduled operations being cancelled, causing distress to the patients involved. In the same year, Equifax suffered a breach in which 145 million customer records were compromised, resulting in significant reputational damage, with fines likely to follow in due course .
Information security risk management is now a pivotal part of most organisations, and the identification and rating of risk, to enable the correct prioritisation of mitigation activities, is a key part of it. This function is typically performed by risk assessments. Risk assessments allow decision makers to target resources where they are most effective in reducing risk; it is therefore vital that risk assessment practice is rooted in sound scientific methods. If sound methods are not followed, the resulting decisions could prioritise the wrong risks, leading to less effective risk management and, by extension, an inefficient use of resources. Gartner estimated worldwide spending on cybersecurity in 2018 at about 96.8 billion dollars, an increase of 8% on the previous year . It is vital to ensure that the methodology underpinning some of that spending is scientifically valid and correct.
Risk can be defined as a state which results in a system not operating as intended . Risk can be calculated as the product of the likelihood of a system not operating as intended and the impact of that event. Risk can also be seen as the probability that a threat is realised, combined with the consequences of its realisation, a threat being a possible event that exploits a vulnerability, and a vulnerability being a flaw or weakness that can be exploited . Risk assessments have been adopted as the best way to identify the areas of highest potential risk and hence prioritise remediation. While risk assessments have been widely adopted as part of information security risk management, other researchers have questioned the validity of some approaches . It is therefore vital that a systematic review of the literature be conducted to gain a good understanding of the state of knowledge on quantitative risk assessment in the IT security domain. Quantitative risk assessments were chosen because they are repeatable and more objective than qualitative risk assessments, which rely on the subjective judgement of the assessor and are hence liable to assessor bias .
Quantitative risk assessment is the use of objective numerical values to assess the probability of an undesired state occurring. Quantitative risk assessment matters because it keeps risk assessments as precise as the available information allows. The alternative, qualitative risk assessment, relies on the expertise and judgement of the assessor and can hence yield inconsistent results, as noted by Karabacak et al . It is also susceptible to the biases and heuristics of those involved in the assessment.
Security is a major challenge, and there has been a clamour for quantitative methods that allow repeatable risk assessments with little to no subjectivity. It is useful to migrate away from the know-how methods currently adopted by most standard approaches ; these methods are time consuming and require experts who are not always available. A number of risk assessment approaches have been suggested; the challenge is being able to definitively accept or reject the hypotheses postulated by the various researchers in this space. Kondakci conceded the lack of sound scientific risk assessment methods . The limited standardised statistical information available in this area only compounds the challenge of validating quantitative methods.
In this paper, a number of research papers are evaluated; the novelty of this paper is that it surveys literature from 2008 to the present day. The paper also examines the methods used and the validation performed, reviewing each for robustness to determine whether the scientific methods followed stand up to scrutiny. The contributions made by this paper include:
Categorisation of risk assessment approaches into groups of related approaches.
The validation of methods used in the approaches followed.
The limitations, if any, of the approaches followed.
The rest of this paper is organised as follows. Section II discusses related works and section III details the methodology followed in conducting this survey. In section IV, categories of quantitative risk assessment papers are noted. In Section V, the analysis of the literature is documented followed by section VI that highlights the research direction for future works. Section VII concludes this survey.
A number of systematic reviews have been conducted across many disciplines to better understand the current body of knowledge in a given field. This review follows a systematic approach similar to Rudolp et al and Fernandez-Aleman et al. . The approach sets an inclusion criterion, defines the data sources used, and then explores and discusses the information in the current body of knowledge. A different approach to a literature survey was taken by Yang et al, who focussed on a descriptive approach, using works to classify and discuss research into cloud computing . This survey also differs in approach from that followed by Henshel et al, who took a narrative approach to surveying the body of work in their field .
The state of the literature on quantitative risk assessment in IT security has been evaluated before: Verendel conducted a critical review of literature between 1981 and 2008 and concluded that most of the methods being proposed were weak and that no mechanisms had been suggested to validate their effectiveness . It is beneficial to review the literature published since then to understand the current body of knowledge. Shameli-Sendi et al examined the taxonomy of information security risk assessment in papers published between 1995 and 2014; however, that review did not evaluate the effectiveness of the methods used .
Accurate risk estimation is challenging, as there are several ways in which errors can be introduced into risk assessment practice. In qualitative risk assessments, and in semi-quantitative methods which use a relative risk scale where verbal labels describe risk, the various ways people interpret labels make the labels themselves a source of error . Some risk assessment methods use ordinal scoring, and as noted by Hubbard et al , this can introduce errors. Major standards bodies such as NIST use ordinal scoring mechanisms, for example in NIST SP800-30, which converts an ordinal value into another quantity for calculation, and these methods have been widely accepted in industry . Hubbard et al set out the limitations of such scoring methods, finding that they fail to take into account psychological research identifying cognitive biases which impair risk assessment . Hubbard et al also noted that ordinal scales are routinely interpreted inconsistently and are often treated as ratio scales, leaving users prone to invalid inferences. The values on ordinal scales are assigned by the assessor, and research has concluded that human beings deviate from expected behaviours in a systematic fashion due to cognitive bias, which means risks are often incorrectly estimated, as noted by ; this in turn produces inaccuracies in risk assessments.
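The ratio-scale fallacy described above can be illustrated with a small sketch; the probabilities, dollar impacts and 1-5 scale anchors below are hypothetical, chosen only to show how multiplying ordinal ranks can invert a ranking that the underlying quantities make clear.

```python
# Illustration (hypothetical numbers): why treating ordinal scores as ratio
# quantities can invert a risk ranking, as Hubbard et al warn.

# Two risks with assumed underlying annual probability and impact (dollars).
risk_a = {"prob": 0.04, "impact": 2_000_000}   # rare but very costly
risk_b = {"prob": 0.30, "impact": 150_000}     # likely, moderate cost

def expected_loss(r):
    return r["prob"] * r["impact"]

# A 1-5 ordinal mapping an assessor might apply ("rare"=1 ... "certain"=5).
def ordinal_prob(p):
    return 1 if p < 0.05 else 2 if p < 0.2 else 3 if p < 0.5 else 4 if p < 0.8 else 5

def ordinal_impact(i):
    return (1 if i < 10_000 else 2 if i < 100_000 else
            3 if i < 1_000_000 else 4 if i < 10_000_000 else 5)

def ordinal_score(r):
    # The (invalid) ratio-style product of two ordinal ranks.
    return ordinal_prob(r["prob"]) * ordinal_impact(r["impact"])

# Quantitatively risk A dominates; the ordinal product says the opposite.
print(round(expected_loss(risk_a)), round(expected_loss(risk_b)))  # 80000 45000
print(ordinal_score(risk_a), ordinal_score(risk_b))                # 4 9
```

The inversion arises because the ordinal bucket boundaries compress the tails: a 2 000 000 dollar impact and a 150 000 dollar impact land only one rank apart, while the probability buckets exaggerate the gap in the other direction.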
The following research questions have been used to shape the survey methodology followed in this paper:
- What work has been done in the IT security domain that can be classified as quantitative risk assessment?
- What are the validation methods used to validate the work and methods used in quantitative risk assessment?
- What methods have been used to measure the effectiveness of risk assessment methods?
In order to answer the above research questions, articles on quantitative risk analysis were searched for in online databases of peer-reviewed published articles, both journal articles and conference papers. The work in this paper is limited to works published between 2008 and 2018, owing to the presence of surveys which evaluated the earlier literature dating back to 1981, such as the survey by Verendel .
Searches were carried out using the following sources: Google Scholar, IEEE Xplore, Scopus, ResearchGate, and the Association for Computing Machinery (ACM) digital library . The keywords used in the searches were: quantitative risk analysis methods in IT security, quantitative risk analysis in cyber security, quantitative risk analysis frameworks in IT security, and quantitative risk analysis models. Articles were identified through exhaustive searching. To be included, articles needed to be in the IT security domain and satisfy one of the following conditions:
Suggested a quantitative risk assessment approach.
Provided a quantitative risk assessment model or framework.
Used quantitative metrics for security assessment.
Used a machine learning approach for quantitative risk assessment.
The methodology above was followed as it allowed the broadest range of articles relevant to the area of study to be captured. One limitation of the methodology is that any works not captured in the databases searched were not included; however, this is likely to be a small set of papers which would not change the overall conclusions.
Categories of Quantitative Risk Assessment Methods
The surveyed papers were classified by the approach taken and by the validation method employed to confirm the hypothesis tested and any assumptions made; this task was carried out manually. The approaches taken are broken down as highlighted below:
Monetary – a risk quantification approach that concentrates on the monetary or economic values.
Framework – works that postulate a quantitative risk assessment framework.
Systemic – works that build a quantitative risk assessment system.
Machine Learning – work which utilises machine learning methods.
Risk Assessment Approach – works that postulate a new approach for risk assessment.
A number of the papers reviewed use monetary values as a cornerstone of their risk assessment approach. In most business environments, the decision to apply security controls has to be evaluated as cost versus benefit. A value that has found favour is the annualised loss expectancy (ALE) . The ALE is the product of the annual rate of occurrence (ARO) and the single loss expectancy (SLE), the ARO being the number of times a risk is expected to occur annually and the SLE being the total cost of an individual incident. This approach can be used for risk prioritisation. Its limitation is the difficulty of calculating the true cost of an incident; it can be challenging to quantify brand damage, for instance , and, compounding that challenge, it is difficult to accurately estimate the annual rate of occurrence.
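The ALE relation above can be sketched directly; the asset value, exposure factor and rate of occurrence below are hypothetical figures for illustration only.

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: cost of one incident = asset value x fraction of the asset lost."""
    return asset_value * exposure_factor

def annualised_loss_expectancy(aro, sle):
    """ALE = annual rate of occurrence x single loss expectancy."""
    return aro * sle

# Hypothetical example: a database asset valued at $500k, a breach that
# destroys 25% of its value, and such a breach expected once every two
# years (ARO = 0.5).
sle = single_loss_expectancy(500_000, 0.25)   # 125000.0
ale = annualised_loss_expectancy(0.5, sle)    # 62500.0
print(sle, ale)
```

The precision of the result is deceptive: as the text notes, both the exposure factor and the ARO are the hard part, and errors in either propagate multiplicatively into the ALE.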
Accurately calculating a number of the monetary metrics used is difficult and can result in poor estimates. A popular metric in many approaches is the annual loss expectancy, which requires quantifying how frequently events are expected to occur. In the IT security domain it is challenging to calculate the frequency of incidents accurately, as noted by Ekelhart et al . Ekelhart et al sought to create a security ontology that required less specialist skill and was supported by user-friendly forms to collect all the required information . Wes Sonnenreich postulated a similar approach; however, his approach makes an allowance for the percentage of risk mitigated, which is a challenge as there is no accurate way to measure the percentage of risk that can be mitigated . Monetary approaches were also considered in .
Asosheh et al suggested calculating the return on security investment to evaluate risk remediation activities. The approach combined the Microsoft and Callio Secura risk assessment methods, with some changes, to postulate a new risk assessment approach . It uses monetary parameters such as the single loss expectancy and annual rate of occurrence, which in the IT security domain are difficult to quantify accurately, limiting the ability to extrapolate the return on security investment.
A number of the papers reviewed suggested new quantitative risk assessment frameworks to improve existing approaches (e.g., ). Pham et al postulated an adaptive quantitative risk assessment framework in response to gaps identified in the analysis of advanced persistent threats, as identified by the Mandiant report in 2010 . Pham et al noted that advanced persistent threats are often not well evaluated, as the threat actors tend to be able to bypass traditional defences, mostly due to investment from state actors.
Pham et al proposed a quantitative risk assessment framework centred around intrusion detection in cloud environments . The framework has limitations, however: it only considers malicious activity that can be detected by known mechanisms, and it assumes that detection mechanisms are independent of each other and can function without interfering with each other's detection capability. These assumptions are not always true in real production environments. The approach is applied to a number of case studies, with no measurement of its effectiveness.
Allodi et al suggested a new framework which determines risk by evaluating the presence of certain vulnerabilities and exploits and hence calculates the likelihood of an exploit . The approach by Allodi et al derives from the critique of risk as the product of impact and likelihood, in which both parameters are treated as unknown variables. On the whole the approach could be useful in assessing opportunistic attacks; however, it would not be useful for targeted attacks, such as those carried out by hacktivists and organised crime groups who target specific victims with means and advanced technical expertise, as noted by Pham et al .
Djemame et al proposed a risk assessment framework which includes a methodology for conducting the assessment . The framework follows a two-step methodology which splits the risk assessment into event-risk calculation and risk aggregation . However, the framework only accounts for denial of service, user spikes and increased failure rates in cloud environments, which limits its effectiveness for evaluating risks in IT systems which do not emanate from those sources.
Teixeira et al postulated a risk assessment framework for evaluating risks in control systems. The framework defines normal operating parameters from the known state of a supervisory control and data acquisition (SCADA) system; the anomalous state is any state which deviates beyond known-good parameters. The framework is applied in case studies in electricity grid control systems . The effectiveness of the approach, however, is not validated in the work; only its application is shown.
Singh et al in  looked at quantifying the risk posed by network-level vulnerabilities, suggesting a new framework that combines the frameworks postulated by Ahmed et al  and Tripathi et al . The resulting framework models risk based on the risk of a successful attack, the risk of the attack being propagated across a network, and risk based on the characteristics of the identified vulnerabilities, including their age .
While the frameworks suggested could be applied in the scenarios outlined, no measurement of their effectiveness was made in the various studies; a number of papers only demonstrated that a framework could be used in a case study . The studies did not go on to evaluate the effectiveness of the frameworks being proposed. The frameworks also make a number of untested assumptions; the assumption that an exploitable vulnerability is more likely to be exploited, while plausible, is untested by any metrics.
A number of systems for conducting quantitative risk assessments have been proposed and developed in the literature surveyed , all introducing systems to aid the risk assessment process. Viduto et al created a tool called the Risk Assessment Optimisation Model (ROAM) which enables the selection of security controls by evaluating their financial cost against the residual risk which remains . The system matches threats to vulnerabilities taken broadly from the NIST vulnerability database. It calculates total initial risk by assigning a number to threats and vulnerabilities and taking the product of these two factors and their impact. The total initial risk is then treated by a suitable control, with a control-and-vulnerability matrix providing parameters for risk reduction. The estimated cost of the countermeasures is then calculated, allowing the risk reduction and cost to be understood. The system closely follows the methodology of the NIST SP800-30 standard. Viduto et al proposed validating the model by applying it to an optimisation routine to assist decision makers; however, the effectiveness of the system is not evaluated .
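The ROAM-style calculation described above can be sketched as follows; the 1-5 rating scale, the 75% risk-reduction figure and the control cost are hypothetical placeholders, not values from Viduto et al.

```python
# A minimal sketch of the product-based initial-risk calculation and the
# control-based risk reduction described above (all figures hypothetical).

def initial_risk(threat, vulnerability, impact):
    """Total initial risk as the product of threat, vulnerability and impact."""
    return threat * vulnerability * impact

def residual_risk(risk, risk_reduction):
    """Risk remaining after a control removes a fraction of the risk."""
    return risk * (1.0 - risk_reduction)

# Hypothetical entry: a threat rated 4/5 matched to a vulnerability rated
# 3/5 on an asset with impact 5/5; a control removing 75% of the risk.
r0 = initial_risk(4, 3, 5)       # 60
r1 = residual_risk(r0, 0.75)     # 15.0
control_cost = 12_000            # assumed annual cost of the control
print(r0, r1, control_cost)
```

Placing residual risk next to control cost is what lets an optimisation routine of the kind Viduto et al describe trade the two off; the sketch shows the inputs such a routine would consume.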
Asosheh et al developed a risk assessment system by combining the risk assessment model developed by Microsoft and the Callio Secura risk assessment method, allowing the new approach to calculate the exposure factor, impact and resulting return on security investment . The system has limitations, however, as no accurate mechanism to calculate the security metrics has been suggested. Asosheh et al validated their work through a case study, with no measure of its effectiveness.
Machine learning has gained greater focus in computing recently, as it has been demonstrated to provide excellent results in prediction. The following papers can be classified as using machine learning approaches as part of risk assessment .
Aussel et al postulated a hard drive failure prediction model using machine learning . The approach built a classifier with 95% prediction accuracy using data provided by a hard drive manufacturer, evaluating support vector machine, random forest and gradient boosted tree algorithms. The best performance was provided by random forest, which was then tested on new data, validating the effectiveness of the approach for predicting hard drive failure. The approach was demonstrated to be effective in the context studied; however, data sets in the broader risk domain are not as complete, and the challenge is expanding the approach to account for other issues.
Sulaman et al identified that risk analysis could be improved if risk analysis techniques used past incidents to estimate the likelihood and possible impact of incidents. The lack of reliable data on past incidents is the limitation Sulaman et al sought to address . The solution suggested was to identify an online source and then use machine learning to classify its articles. In the study, a suitable article source was identified and documents retrieved; a small subset of documents was manually classified and used for training. A classifier was built using the naïve Bayes machine learning algorithm  and then applied to the rest of the documents, with 10-fold validation used to evaluate the classification. While the approach is effective, it misses risks not recorded in the initial data source and carries any limitations that emerge from incomplete data. It is also susceptible to false predictions if the training data contains inaccuracies.
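The classification step Sulaman et al describe can be sketched with a minimal naïve Bayes classifier; the tiny training corpus and labels below are invented for illustration, and a real study would train on the manually classified subset and use far more data.

```python
import math
from collections import Counter

# A minimal naive Bayes text classifier in the spirit of the approach
# described above (hypothetical training corpus).
train = [
    ("ransomware attack hits hospital network", "incident"),
    ("data breach exposes customer records", "incident"),
    ("company announces quarterly earnings", "other"),
    ("new product launch scheduled for spring", "other"),
]

def fit(samples):
    priors = Counter(label for _, label in samples)
    words = {label: Counter() for label in priors}
    vocab = set()
    for text, label in samples:
        for w in text.split():
            words[label][w] += 1
            vocab.add(w)
    return priors, words, vocab

def predict(text, priors, words, vocab):
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)
        n = sum(words[label].values())
        for w in text.split():
            # Laplace smoothing so unseen words do not zero the probability.
            lp += math.log((words[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

priors, words, vocab = fit(train)
print(predict("hospital ransomware outbreak reported", priors, words, vocab))
```

As the text notes, a classifier of this kind can only surface risks present in its source corpus; documents about incident types absent from the training labels are silently forced into the existing classes.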
Feng et al proposed a new risk assessment method that develops a Bayesian network to define risk factors and identify any causal relationships that exist . The approach used the Ant Colony Optimisation (ACO) algorithm to learn from the data; once risk factors and their causal relationships have been identified, probabilities and impact are calculated. To calculate probabilities, the approach relies on expert consensus, which has its limitations, as human dynamics can skew results. The approach was applied to a case study; however, no analysis of its effectiveness was conducted.
Parate et al postulated a methodology to classify data from smart grid applications into vulnerable and non-vulnerable states . The approach uses a Support Vector Machine (SVM) to build a classifier that is trained on labelled data and subsequently used to classify new data . The approach is tested using simulated data; while useful, this has limitations, as it is challenging to simulate cyber-attacks accurately.
Risk Assessment Approaches
A number of different risk assessment approaches were suggested; papers postulating new approaches included . Joh et al proposed a risk assessment approach that utilises the common vulnerability scoring system to aid in evaluating the criticality of unpatched vulnerabilities. There is sometimes a need for prioritisation when patches are being applied, hence this approach, which articulates risk on a logarithmic scale . The logarithmic scale was chosen for its increased resolution. Joh et al used the conditional probability of the nature of the vulnerability and how easy it is to access. The approach has its limitations, however, in that it can only identify issues related to common vulnerabilities and exposures (CVEs), so it cannot account for issues emanating from otherwise secure technologies implemented with flaws.
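The benefit of the logarithmic scale can be sketched as follows; this is not Joh et al's exact model, and the probability figures and the use of a CVSS impact subscore are hypothetical, intended only to show how a log scale keeps scores spanning orders of magnitude comparable.

```python
import math

# A hedged sketch: express risk on a log10 scale so that a remotely
# reachable, easily exploited vulnerability and a local, hard-to-reach one
# remain readable on the same axis (all figures hypothetical).

def risk_score(p_exploit_given_access, p_access, cvss_impact):
    """Log-scale risk: log10 of joint exploitation probability x impact."""
    return math.log10(p_exploit_given_access * p_access * cvss_impact)

remote = risk_score(0.9, 0.5, 6.0)    # raw product 2.7
local = risk_score(0.2, 0.01, 6.0)    # raw product 0.012
print(round(remote, 2), round(local, 2))
```

On the raw scale the two products differ by a factor of over 200, which a linear plot would squash into near-zero bars; on the log scale the gap becomes an easily read difference of about 2.4.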
Potteiger et al postulated a new risk assessment approach involving software-centred threat modelling . Potteiger et al identified the problem that current risk assessment approaches depend on the qualitative judgement of designers, and hence sought to create a model to calculate quantitative risk per component. The methodology leverages the STRIDE categories of system security threats: spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege. The STRIDE categories are used to create attack trees, and the common vulnerability scoring system (CVSS) scoring methodology assigns risk values to component attributes in the software. The effectiveness of the approach is not measured.
Lin et al postulated a risk assessment approach which uses a Bayesian belief network to carry out quantitative risk assessment . The approach uses a priori and reasonable conditional probabilities to determine whether a node is functioning correctly and hence establish trust. The method determines the state of a node based on available information and is centred around internet of things devices. No validation of the method has been done and its effectiveness is not measured.
Samad et al examined the prevailing risk assessment approaches, identified that they were not well suited to cloud environments, and hence postulated a new approach to suit the complex, mobile and ad hoc nature of cloud computing . The approach identifies all possible risks and, for each, seeks to identify a subset of contributing risk factors. The model then utilises a Bayesian conditional probability approach to calculate the probability. This approach has limitations, however, as it relies on calculating the uncertainty of the current situation against a standard event, and for security breaches there is no robust, reliable empirical evidence with which to model when standard events occur. Even if there were reliable data on past events, it is not a good indicator of future occurrence. The approach is applied to a case study of a mobile electrocardiogram (ECG) data analysis app, a mobile cloud system .
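The Bayesian conditional-probability step used by approaches of this kind reduces to Bayes' rule; the sketch below uses hypothetical prior and likelihood figures, and its dependence on those figures is exactly the limitation noted above, since in IT security there is little empirical basis for them.

```python
# A minimal Bayes-rule sketch of the conditional-probability update used by
# the approaches discussed above (all probabilities hypothetical).

def posterior(prior, p_evidence_given_risk, p_evidence_given_no_risk):
    """P(risk | evidence) via Bayes' rule."""
    num = p_evidence_given_risk * prior
    den = num + p_evidence_given_no_risk * (1.0 - prior)
    return num / den

# Prior belief that a mobile cloud session is compromised, updated after an
# anomalous-transfer indicator fires.
p = posterior(0.02, 0.6, 0.05)
print(round(p, 3))
```

A single indicator raises the belief from 2% to roughly 20%: informative, but entirely dependent on likelihood estimates that, as the text notes, lack robust empirical grounding in this domain.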
Fray et al recommended a new model to describe and conduct risk assessment; the work is geared towards non-profit organisations and the model complies with widely adopted standards such as ISO 27001 . The approach models an IT security system and the threats to it in a formalised mathematical manner, allowing risk to be defined and a risk graph calculated; a compliance coefficient is calculated based on the analysis carried out against standards. An ordinal scale is utilised when translating qualitative parameters into quantitative values. The approach was then utilised in a case study of a Polish administrative unit . The approach assumes that adhering to standards removes risk, which is not always borne out in real-world scenarios.
Kondakci et al postulated a causal risk assessment method (CRAM) based on the Bayesian Belief Network (BBN) , chosen for its effectiveness in assisting decision making under uncertainty. The approach relies on conditional probability, similar to the approach postulated by Samad et al , and shares similar limitations, as it does not address how to accurately compute the probability of attacks in the IT security domain. Huang et al also proposed a Bayesian network-based approach to conduct risk assessments for SCADA systems .
Sommestad et al suggested a probabilistic relational model (PRM) for risk assessment . The approach postulates abstract PRM classes that can be used to create PRMs, which in turn can infer security risk from system architecture models. The approach is applied to a case study.
Homer et al suggested a new approach to calculating risk in an enterprise environment, using attack graph structures together with component metrics to calculate the probability that certain network-based privileges can be gained by an attacker .
The approach seeks to improve on work done on Bayesian belief networks by leveraging the key concept of d-separation in Bayesian network inference and developing new algorithms for probabilistic reasoning on attack graphs. Homer et al acknowledge the imprecise nature of security metrics but reasonably conclude that the results, despite being imprecise, provide useful insight . Like most approaches, Homer et al validated the approach with an experimental case study; the limitations are that the conditions do not mirror typical real-world attacks, and the work gives no indication of the effectiveness of the model postulated.
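A much simplified sketch of attack-graph probability aggregation in this spirit is shown below; the per-step success probabilities are hypothetical, and treating paths as independent is itself an assumption, of the kind d-separation arguments are used to justify in the full model.

```python
from math import prod

# Simplified attack-graph aggregation (hypothetical probabilities):
# an attacker succeeds along a path if every step succeeds, and gains a
# privilege if any one of the (assumed independent) paths succeeds.

def path_probability(edge_probs):
    """Probability an attacker succeeds along one path (product of steps)."""
    return prod(edge_probs)

def privilege_probability(paths):
    """Noisy-OR over independent attack paths to the same privilege."""
    p_fail_all = prod(1.0 - path_probability(p) for p in paths)
    return 1.0 - p_fail_all

# Two hypothetical paths to a database host: a web server exploit followed
# by a local privilege escalation, or a single phished workstation.
paths = [[0.8, 0.3], [0.4]]
print(round(privilege_probability(paths), 3))
```

Note how the single-step phishing path (0.4) contributes more than the two-step technical path (0.24), which is the kind of insight Homer et al argue survives the imprecision of the individual component metrics.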
Pak et al identified the challenges of accurate risk assessment due to the dynamic nature of most IT environments . The approach suggested by Pak uses the Hidden Markov Model, which has been used to predict future state. The approach takes a system to fluctuate between mitigated, vulnerable and compromised states, with an object remaining in the same state unless there is interference. The risk assessment is then done using an ordinal scale to calculate the probabilities and impact, which in turn adds subjectivity to the assessment.
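The mitigated/vulnerable/compromised fluctuation described above can be sketched as a small Hidden Markov Model; the transition and observation probabilities below are hypothetical, not Pak et al's, and the forward step shown is the standard belief update over the hidden state.

```python
# A minimal HMM sketch of the state fluctuation described above
# (all transition and emission probabilities hypothetical).

states = ["mitigated", "vulnerable", "compromised"]

# P(next state | current state): systems tend to stay put without interference.
transition = {
    "mitigated":   {"mitigated": 0.90, "vulnerable": 0.09, "compromised": 0.01},
    "vulnerable":  {"mitigated": 0.20, "vulnerable": 0.70, "compromised": 0.10},
    "compromised": {"mitigated": 0.05, "vulnerable": 0.15, "compromised": 0.80},
}
# P(security alert observed | state).
emission = {"mitigated": 0.02, "vulnerable": 0.20, "compromised": 0.90}

def forward_step(belief, alert):
    """One forward-algorithm update: predict, then weight by the observation."""
    predicted = {
        s: sum(belief[q] * transition[q][s] for q in states) for s in states
    }
    weighted = {
        s: predicted[s] * (emission[s] if alert else 1.0 - emission[s])
        for s in states
    }
    z = sum(weighted.values())
    return {s: w / z for s, w in weighted.items()}

belief = {"mitigated": 0.8, "vulnerable": 0.15, "compromised": 0.05}
belief = forward_step(belief, alert=True)   # an alert fires
print({s: round(p, 3) for s, p in belief.items()})
```

A single alert shifts most of the belief mass onto the compromised state, illustrating how the model predicts future state from observations; the subjectivity Pak et al's ordinal scale introduces would enter through the choice of these probability tables.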
The main aim of this survey is to understand the available research in the field of quantitative risk assessment in the IT security domain; it is, however, vital to note that despite an exhaustive search of relevant articles, it is possible some relevant articles have been missed. Analysis was carried out by manually categorising the included papers using their content, in line with the survey criteria above.
The works were evaluated by examining which validation methods they used to measure the effectiveness of the method being suggested. This satisfied one of the aims of the survey: to understand the validation methods in use and the validity of the methods. The works were also examined to understand the effectiveness of the methods used, and the assumptions in the surveyed works were examined to understand their validity and limitations, as well as to note common approaches in current research. It is important that suggested approaches are adequately validated, to ensure that the findings in the papers are correct and to understand how effective the methods are. It is vital that approaches provide accurate results and hence do not lead to inadequate risk treatment prioritisation.
The works reviewed provide an interesting insight into the quantitative risk assessment landscape in the IT security domain. Several common recurring assumptions were identified, and it is useful to evaluate whether they are reasonable to make.
The works categorised as monetary all needed to determine the frequency of events. Accurately calculating the frequency of events can be challenging; historical occurrence is not an accurate indicator of the rate of occurrence in IT security, particularly when you take into account that as technology ages it becomes better understood and any unpatched vulnerabilities become more exploitable . The other challenge with the monetary models is accurately estimating costs such as reputational damage.
The works which postulated frameworks were very targeted to specific attacks; for instance,  created a framework to assess opportunistic attacks but is less relevant for targeted attacks, and the work by Djemame et al concentrated on availability attacks and is not as useful for other scenarios . The frameworks surveyed on the whole did not present adequate validation or measure effectiveness, only performing case study applications without going further to confirm that the findings are correct.
The papers that suggested systems for quantitative risk assessment, like the other categories, did not validate the effectiveness of the systems; for instance, in Viduto et al and Asosheh et al , the effectiveness and accuracy of the systems were not evaluated.
The works which used machine learning approaches did not always measure the effectiveness of the methods used. Aussel et al did measure the effectiveness of their scheme; however, the method is not currently valid for cyber security risk, as detailed, complete statistical data is not available.
The works which suggested new approaches did not measure how effective or accurate the approaches were, instead merely applying the methods to a case study . While this was a recurring theme across the categories, it nevertheless means it is not possible to examine the effectiveness of the various approaches being suggested.
A number of works across all categories used ordinal scales in their methods, with an implied assumption that the ordinal scales are understood consistently by those applying them. However, the research carried out by Hubbard and Evans shows that ordinal scale usage is a source of error and hence reduces the effectiveness of quantitative approaches .
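The kind of error Hubbard and Evans describe can be sketched with a small hypothetical example: a typical risk matrix multiplies ordinal likelihood and impact ranks, treating labels as ratios they do not have, so two risks with very different real exposures can receive identical scores. All values below are illustrative, not drawn from any surveyed work.

```python
# Ordinal scoring as used in a typical 5x5 risk matrix:
# likelihood rank (1-5) multiplied by impact rank (1-5).
risk_a_score = 2 * 4   # "unlikely" x "major"
risk_b_score = 4 * 2   # "likely"  x "minor"

# A quantitative view of the same two risks (hypothetical values):
# annual probability multiplied by monetary impact.
risk_a_loss = 0.05 * 5_000_000   # expected annual loss: 250,000
risk_b_loss = 0.50 * 50_000      # expected annual loss: 25,000

print(risk_a_score == risk_b_score)   # True: the matrix ranks them as equal
print(risk_a_loss / risk_b_loss)      # 10.0: actual exposure differs tenfold
```

The ordinal method thus compresses a tenfold difference in expected loss into a tie, which is one mechanism by which scoring schemes mislead prioritisation decisions.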
Validation is not adequately performed in the majority of the literature across all categories to determine whether the approaches being postulated are accurate or even effective. Several papers applied their approach to a case study, but no test or validation was then performed to determine whether, in the case study where the approach was applied, it is effective or accurate .
In several works there is an assumption that history is a good indicator of future occurrence; however, this is not tested in the IT security domain, where the inverse is closer to reality, since as technology ages it becomes better understood and vulnerability exploits are created .
A number of works only evaluate risk emanating from vulnerabilities in software. However, some risks do not emanate from software vulnerabilities, such as security misconfiguration, which can still allow attackers to exploit systems; the risk posed therefore still needs to be evaluated and mitigated.
In a number of works, a case study application is carried out, which is useful for evaluating whether the suggested method can be successfully applied to a given scenario. However, most approaches are validated against a single scenario; testing the approaches with more than one case study would allow for more generalisable findings.
A number of the risk assessment approaches reviewed are geared towards a small subset of risks, for instance works focusing only on connection risks, on malfunctions in SCADA systems, or on availability risk caused by hard drive failure. To carry out a full risk assessment of a system, a number of approaches would therefore need to be combined, resulting in a risk assessment that is time-consuming, complex and expensive. New approaches that take the various risks into account in a comprehensive manner are needed.
The literature surveyed shows a lack of consideration of the relationships that may exist between vulnerabilities; factors tend to be considered independently, which may not always give an accurate assessment of risk . The use of ordinal schemes and other scoring methods can introduce inaccuracies, as discussed above.
In conclusion, the survey of works in this domain reveals some key limitations in how current risk assessment approaches are validated. The effectiveness of approaches needs to be measured, which in turn will allow ineffective methods to be challenged. Accurate IT security risk assessments matter because financial decisions are made on the back of them; it is therefore vital that resources are directed where they are most effective. This matters all the more because IT systems are now a critical part of the success of most businesses and an integral part of the delivery of key infrastructure services, so the impact of security failures can be wide-ranging.
 A. Asosheh, B. Dehmoubed and A. Khani, “A new quantitative approach for information security risk assessment,” in 2009 2nd IEEE International Conference on Computer Science and Information Technology, pp. 222-227, 2009.
 M.S. Ahmed, E. Al-Shaer and L. Khan, “A novel quantitative approach for measuring network security,” in INFOCOM 2008. The 27th Conference on Computer Communications. IEEE, pp. 1957-1965, 2008.
 M.U. Aksu, M.H. Dilek, E.İ. Tatlı, K. Bicakci, H.İ. Dirik, M.U. Demirezen and T. Aykır, “A quantitative CVSS-based cyber security risk assessment methodology for IT systems,” in Security Technology (ICCST), 2017 International Carnahan Conference on, pp. 1-8, 2017.
 L. Allodi and F. Massacci, “Security Events and Vulnerability Data for Cybersecurity Risk Estimation,” Risk Analysis, vol. 37, pp. 1606-1627, 2017.
 N. Aussel, S. Jaulin, G. Gandon, Y. Petetin, E. Fazli and S. Chabridon, “Predictive Models of Hard Drive Failures Based on Operational Data,” in Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on, pp. 619-625, 2017.
 R. Bojanc and B. Jerman-Blažič, “Quantitative Model for Economic Analyses of information Security investment in an Enterprise information System,” Organizacija, vol. 45, pp. 276-288, 2012.
 D.L. Chen, T.J. Moskowitz and K. Shue, “Decision Making Under the Gambler’s Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires,” The Quarterly Journal of Economics, vol. 131, pp. 1181-1242, 2016.
 J. Dai, R. Hu, J. Chen and Q. Cai, “Benefit-cost analysis of security systems for multiple protected assets based on information entropy,” Entropy, vol. 14, pp. 571-580, 2012.
 K. Djemame, D. Armstrong, J. Guitart and M. Macias, “A risk assessment framework for cloud computing,” IEEE Transactions on Cloud Computing, vol. 4, pp. 265-278, 2016.
 K. Djemame, D. Armstrong, M. Kiran and M. Jiang, “A risk assessment framework and software toolkit for cloud service ecosystems,” Cloud Computing, pp. 119-126, 2011.
 A. Ekelhart, S. Fenz, M. Klemen and E. Weippl, “Security ontologies: Improving quantitative risk analysis,” in System Sciences, 2007. HICSS 2007. 40th Annual Hawaii International Conference on, pp. 156a-156a, 2007.
 N. Feng, H.J. Wang and M. Li, “A security risk analysis model for information systems: Causal relationships of risk factors and vulnerability propagation analysis,” Inf.Sci., vol. 256, pp. 57-73, 2014.
 N. Feng, J. Xie and D. Fang, “A Probabilistic Estimation Model for Information Systems Security Risk Analysis,” in Management and Service Science, 2009. MASS’09. International Conference on, pp. 1-4, 2009.
 I.E. Fray, M. Kurkowski, J. Pejaś and W. Maćków, “A new mathematical model for analytical risk assessment and prediction in IT systems,” Control and Cybernetics, vol. 41, pp. 241-268, 2012.
 M. Hilbert, “Toward a synthesis of cognitive biases: how noisy information processing can bias human decision making.” Psychol.Bull., vol. 138, pp. 211, 2012.
 J. Homer, X. Ou and D. Schmidt, “A sound and practical approach to quantifying security risk in enterprise networks,” Kansas State University Technical Report, pp. 1-15, 2009.
 K. Huang, C. Zhou, Y. Tian, W. Tu and Y. Peng, “Application of Bayesian network to data-driven cyber-security risk assessment in SCADA networks,” in Telecommunication Networks and Applications Conference (ITNAC), 2017 27th International, pp. 1-6, 2017.
 D. Hubbard and D. Evans, “Problems with scoring methods and ordinal scales in risk assessment,” IBM Journal of Research and Development, vol. 54, pp. 2: 1-2: 10, 2010.
 J. Samad, S. W. Loke and K. Reed, “Quantitative Risk Analysis for Mobile Cloud Computing: A Preliminary Approach and a Health Application Case Study,” in 2013 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, pp. 1378-1385, 2013.
 S. Jahedi and F. Méndez, “On the advantages and disadvantages of subjective measures,” Journal of Economic Behavior & Organization, vol. 98, pp. 97-114, 2014.
 B. Jerman-Blažič and M. Tekavčič, “Managing the investment in information security technology by use of a quantitative modeling,” Information Processing & Management, vol. 48, pp. 1031-1052, 2012.
 H. Joh and Y.K. Malaiya, “Defining and assessing quantitative security risk measures using vulnerability lifecycle and cvss metrics,” in The 2011 international conference on security and management (sam), pp. 10-16, 2011.
 P. Johnson, A. Vernotte, D. Gorton, M. Ekstedt and R. Lagerström, “Quantitative Information Security Risk Estimation Using Probabilistic Attack Graphs,” in International Workshop on Risk Assessment and Risk-driven Testing, pp. 37-52, 2016.
 M. Jouini, L.B.A. Rabai and R. Khedri, “A multidimensional approach towards a quantitative assessment of security threats,” Procedia Computer Science, vol. 52, pp. 507-514, 2015.
 B. Karabacak and I. Sogukpinar, “A quantitative method for ISO 17799 gap analysis,” Comput.Secur., vol. 25, pp. 413-419, 2006.
 S. Kondakci, “Network security risk assessment using Bayesian belief networks,” in Social Computing (SocialCom), 2010 IEEE Second International Conference on, pp. 952-960, 2010.
 P. Laskov and M. Kloft, “A framework for quantitative security analysis of machine learning,” in Proceedings of the 2nd ACM workshop on Security and artificial intelligence, pp. 1-4, 2009.
 Q. Lin and D. Ren, “Quantitative trust assessment method based on Bayesian network,” in Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), 2016 IEEE, pp. 1861-1864, 2016.
 C. Pak and J. Cannady, “Asset priority risk assessment using hidden markov models,” in Proceedings of the 10th ACM conference on SIG-information technology education, pp. 65-73, 2009.
 M. Parate, S. Tajane and B. Indi, “Assessment of System Vulnerability for Smart Grid Applications,” in Engineering and Technology (ICETECH), 2016 IEEE International Conference on, pp. 1083-1088, 2016.
 L.H. Pham, M. Albanese and S. Venkatesan, “A quantitative risk assessment framework for adaptive Intrusion Detection in the cloud,” in Communications and Network Security (CNS), 2016 IEEE Conference on, pp. 489-497, 2016.
 B. Potteiger, G. Martins and X. Koutsoukos, “Software and attack centric integrated threat modeling for quantitative risk assessment,” in Proceedings of the Symposium and Bootcamp on the Science of Security, pp. 99-108, 2016.
 A. Rot, “IT risk assessment: Quantitative and qualitative approach,” Resource, vol. 283, pp. 284, 2008.
 P. Saripalli and B. Walters, “Quirc: A quantitative impact and risk assessment framework for cloud security,” in Cloud Computing (CLOUD), 2010 IEEE 3rd International Conference on, pp. 280-288, 2010.
 A. Shameli-Sendi, R. Aghababaei-Barzegar and M. Cheriet, “Taxonomy of information security risk assessment (ISRA),” Comput.Secur., vol. 57, pp. 14-30, 2016.
 L. Simei, Z. Jianlin, S. Hao and L. Liming, “Security Risk Assessment Model Based on AHP/DS Evidence Theory,” in Information Technology and Applications, 2009. IFITA’09. International Forum on, pp. 530-534, 2009.
 U.K. Singh, C. Joshi and N. Gaud, “Information security assessment by quantifying risk level of network vulnerabilities,” International Journal of Computer Applications, vol. 156, 2016.
 T. Sommestad, M. Ekstedt and P. Johnson, “A probabilistic relational model for security risk analysis,” Comput.Secur., vol. 29, pp. 659-679, 2010.
 W. Sonnenreich, J. Albanese and B. Stout, “Return on security investment (ROSI)-a practical quantitative model,” Journal of Research and Practice in Information Technology, vol. 38, pp. 45-56, 2006.
 S.M. Sulaman, K. Weyns and M. Höst, “Identification of IT Incidents for Improved Risk Analysis by Using Machine Learning,” in Software Engineering and Advanced Applications (SEAA), 2015 41st Euromicro Conference on, pp. 369-373, 2015.
 A. Teixeira, K.C. Sou, H. Sandberg and K.H. Johansson, “Secure control systems: A quantitative risk management approach,” IEEE Control Systems, vol. 35, pp. 24-45, 2015.
 D. Trcek, “System Dynamics Based Risk Management for Distributed Information Systems,” in Systems, 2009. ICONS’09. Fourth International Conference on, pp. 74-79, 2009.
 A. Tripathi and U.K. Singh, “On prioritization of vulnerability categories based on CVSS scores,” in Computer Sciences and Convergence Information Technology (ICCIT), 2011 6th International Conference on, pp. 692-697, 2011.
 V. Verendel, “Quantified security is a weak hypothesis: a critical survey of results and assumptions,” in Proceedings of the 2009 workshop on New security paradigms workshop, pp. 37-50, 2009.
 V. Viduto, C. Maple, W. Huang and D. López-Peréz, “A novel risk assessment and optimisation model for a multi-objective network security countermeasure selection problem,” Decis.Support Syst., vol. 53, pp. 599-610, 2012.
 “Gartner Forecasts Worldwide Security Spending Will Reach $96 Billion in 2018, Up 8 Percent from 2017”, Gartner, 2018. [Online]. Available: https://www.gartner.com/newsroom/id/3836563. [Accessed: 09- Sep- 2018].
 “NIST Special Publication 800-30 Revision 1”, Nvlpubs.nist.gov, 2018. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-30r1.pdf. [Accessed: 09- Sep- 2018].
 “ISO/IEC 27001:2013 – Information technology — Security techniques — Information security management systems — Requirements”, Iso.org, 2018. [Online]. Available: https://www.iso.org/standard/54534.html. [Accessed: 09- Sep- 2018].
 “Equifax hack worse than previously thought: Biz kissed goodbye to card expiry dates, tax IDs etc”, Theregister.co.uk, 2018. [Online]. Available: https://www.theregister.co.uk/2018/02/13/equifax_security_breach_bad/. [Accessed: 09- Sep- 2018].
 “NHS trusts ‘at fault’ over cyber-attack”, BBC News, 2018. [Online]. Available: https://www.bbc.co.uk/news/technology-41753022. [Accessed: 09- Sep- 2018].
 “Google Scholar”, Scholar.google.co.uk, 2018. [Online]. Available: https://scholar.google.co.uk/. [Accessed: 09- Sep- 2018].
 “Association for Computing Machinery”, Acm.org, 2018. [Online]. Available: https://www.acm.org/. [Accessed: 09- Sep- 2018].
 “ResearchGate | Share and discover research”, ResearchGate, 2018. [Online]. Available: https://www.researchgate.net/. [Accessed: 09- Sep- 2018].
 “IEEE Xplore Digital Library”, Ieeexplore.ieee.org, 2018. [Online]. Available: https://ieeexplore.ieee.org/Xplore/home.jsp. [Accessed: 09- Sep- 2018].
 “Scopus preview – Scopus – Welcome to Scopus”, Scopus.com, 2018. [Online]. Available: https://www.scopus.com/home.uri. [Accessed: 09- Sep- 2018].
 D. Henshel et al, “Trust as a human factor in holistic cyber security risk assessment,” Procedia Manufacturing, vol. 3, pp. 1117-1124, 2015.
 H. Yang and M. Tate, “A descriptive literature review and classification of cloud computing research.” Cais, vol. 31, pp. 2, 2012.
 M. Rudolph and R. Schwarz, “A critical survey of security indicator approaches,” in Availability, Reliability and Security (ARES), 2012 Seventh International Conference On, 2012.
 J. L. Fernández-Alemán et al, “Security and privacy in electronic health records: A systematic literature review,” J. Biomed. Inform., vol. 46, (3), pp. 541-562, 2013.