This paper presents an approach to measuring computer security understood as a system property, alongside similar properties such as safety, reliability, dependability, and resilience. First, a historical discussion of measurement is presented, beginning with the views of Hermann von Helmholtz in his 19th-century work “Zählen und Messen”.
Then, contemporary approaches to the principles of measuring software properties are discussed, with emphasis on statistical, physical, and software models. A distinction between metrics and measures is made to clarify the concepts. A brief overview of the inadequacies of existing methods and techniques for evaluating computer security is presented, followed by a proposal and discussion of a practical model for conducting experimental security measurements.
WHAT IS A MEASUREMENT?
Hermann von Helmholtz's Concept of Measurement
Although there are several concepts of measurement, they all seem to converge to the idea formulated in the 19th century by Hermann von Helmholtz, in his groundbreaking work “Zählen und Messen”, in which Helmholtz says:
“The special relation which can exist between the attributes of two objects and which is designated by us by the name equality is characterized by […]
Axiom I: If two magnitudes are equal to a third, they are equal to each other.”
Statistical Approach to Measurements
The contribution of von Helmholtz is significant in terms of the logic of measurement and the associated theory. However, without questioning his work, newer theories treat measurement processes as statistical in nature. The principal assumption of the statistical approach to measurement is that, due to the inherent uncertainties in the measurement process, the result of a measurement always consists of two numbers: the value of the measured quantity and an estimate of the measurement uncertainty (error) with which this value has been obtained.
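As a minimal illustration of this statistical view (the readings below are hypothetical), a measurement result can be reported as a mean value together with the standard error of the mean as its uncertainty estimate:

```python
import statistics

def measure(samples):
    """Report a measurement as (value, uncertainty): the sample mean
    plus the standard error of the mean as the uncertainty estimate."""
    n = len(samples)
    value = statistics.mean(samples)
    # Standard error of the mean: s / sqrt(n), a common uncertainty estimate.
    uncertainty = statistics.stdev(samples) / n ** 0.5
    return value, uncertainty

# Five repeated readings of the same quantity (hypothetical data).
readings = [9.81, 9.79, 9.82, 9.80, 9.78]
value, u = measure(readings)
print(f"{value:.3f} \u00b1 {u:.3f}")  # prints 9.800 ± 0.007
```

The single list of readings thus yields two numbers, exactly as the statistical approach requires: a value and an estimate of how far off that value may be.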
Lessons from Measurements in Physics
To help appreciate the challenge of measuring properties, one can look closer at the extreme case of measuring strictly physical properties (quantities). In addition to length, mentioned above, the physical properties we are most familiar with include time and mass. The current definition of the second, the unit (metric) of time, involves atomic radiation and reads as follows: “the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom.”
With all that has been said in the subsections above, software measurement poses a particular challenge. First of all, software is not a physical quantity, so the question arises whether we can really distinguish meaningful software attributes that would be significant for estimating software quality. In other words, “Analogous to physics, there is the idea whether we can compare a software quality attribute to a norm”.
CAN SECURITY BE MEASURED?
There have been numerous publications in the last decade on security assessment, including books, research and engineering papers, government reports, and Internet sources, all of them discussing security metrics. However, the vast majority of them deal with metrics at the management level and have very little to do with measurement in the scientific sense of the term, as developed in measurement theory.
What these publications mean by security metrics is primarily adherence to standards, whether established industry standards or internal company standards, leading to an assessment of how security policies are executed, for example, by implementing the respective processes and auditing them. As one paper defines it, security metrics mean “the measurement of the effectiveness of the organization’s security efforts over time.”
MODEL FOR SECURITY ASSESSMENT
An interesting approach to modeling measurement processes has been presented that involves the IDEF0 process notation specified in the Federal Information Processing Standard. This model, shown in Figure 2, includes the phenomenon being measured, shown as a process, and a control unit representing an entity that receives the measurement results and takes the respective actions. A number of additional inputs to both the process and the control unit are considered as well.
We propose adapting this model to make it closer to those used in control theory, so that it can reflect the impact of external circumstances on a computer system's security. Following the analogy with control engineering, one would keep only the interfaces relevant to security during the system's operation and, as a result, derive a model of an embedded controller (or, more broadly, a cyberphysical system) subject to security threats, as shown in Figure 3.
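To illustrate the control-engineering analogy (a minimal sketch under our own assumptions, not the actual model of Figure 3), the following snippet runs a discrete proportional feedback loop in which a security threat appears as a disturbance injected into the plant; the gain, step count, and threat shape are all hypothetical:

```python
def simulate(setpoint=1.0, steps=20, kp=0.5, threat=None):
    """Minimal discrete feedback loop: a proportional controller drives
    the plant state x toward the setpoint; an optional 'threat' function
    injects a disturbance, the role security threats play in the model."""
    x = 0.0
    for t in range(steps):
        u = kp * (setpoint - x)            # controller action
        d = threat(t) if threat else 0.0   # threat modeled as a disturbance
        x = x + u + d                      # plant state update
    return x

# Without a threat the state converges to the setpoint;
# a spoofed-input threat at step 10 perturbs the trajectory.
print(round(simulate(), 3))
print(round(simulate(threat=lambda t: 0.8 if t == 10 else 0.0), 3))
```

The point of the sketch is that, in this framing, "measuring security" becomes a question of quantifying how far threat-induced disturbances push the system away from its intended behavior.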
Outline of Establishing a Security Measurement Process
Thus far, we have determined the model for security assessment for one particular class of systems, cyberphysical systems, and defined security as a term. The next step is to develop the measurement process (with metrics and measures) for measuring security in the proposed context. This is, of course, an open question and a tremendous challenge.
Overview of a Case Study in Aviation
The aircraft's internal networks, tied to air traffic management and airline operations, bring security to the forefront, because security breaches may adversely affect flight safety. This fits the model presented in Figure 3. However, the existing aircraft system safety guidance does not address airborne networks and data security issues.
This paper presents a view on addressing the enormous challenge of measuring computer security as a system property. Guided by the principles of measurement introduced in the 19th century by Hermann von Helmholtz, as well as by the statistical nature of measurements, and facing fundamental questions about whether security is a measurable property at all, a high-level model for security assessment is proposed.
This model is built by exploiting an analogy with a control system, treating threats as disturbances to the controller. The proposed model requires identifying the measured property, establishing an appropriate metric, developing a measure and the measurement process, and finally presenting the results in the form of a value with an associated accuracy.
This model can only be as good as the data set to which it is applied. Given the chronic lack of reliable data on security threats and vulnerabilities, it is proposed to use the National Vulnerability Database and apply to it the Common Vulnerability Scoring System (CVSS) to derive a security assessment using computational methods that deal with uncertainty. Comparing the process of security assessment to the development of measurement standards and processes for physical quantities, such as length or time, it is anticipated that refining and adjusting the concepts of computer security assessment may take decades and is, in fact, a challenge for an entire generation.
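A minimal sketch of such a data-driven assessment might look as follows; the vulnerability scores and the mean-plus-deviation aggregation rule are illustrative assumptions on our part, not a method prescribed by CVSS or the NVD:

```python
import statistics

# Hypothetical CVSS v3 base scores for vulnerabilities affecting one
# system, as might be retrieved from the National Vulnerability Database.
cvss_scores = [7.5, 9.8, 5.3, 7.5, 6.1]

# Aggregate into a value-plus-uncertainty pair, in line with the
# statistical view of measurement adopted earlier in the paper.
value = statistics.mean(cvss_scores)
uncertainty = statistics.stdev(cvss_scores)
print(f"security exposure estimate: {value:.1f} \u00b1 {uncertainty:.1f} (CVSS units)")
```

However crude, the sketch respects the paper's requirement that a security measurement, like any measurement, be reported as a value together with an estimate of its uncertainty.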
Source: Embry-Riddle University
Authors: Janusz Zalewski | Steven Drager | William McKeever | Andrew J. Kornecki