
Computer Science Question

Read the attached article and answer the following question. Question: Read the following article:

Denning, D. E. (1987). An Intrusion-Detection Model. IEEE Transactions on Software Engineering, SE-13(2), 222–232.

Then explain:

  1. model (see II. OVERVIEW OF MODEL)
  2. tuples representing actions (see IV. AUDIT RECORDS)
  3. problems with record structures
  4. profile structure, which contains 10 components (see V. PROFILES)
  5. idea for how to check for anomalous behavior (see VI. ANOMALY RECORDS)

An Intrusion-Detection Model
DOROTHY E. DENNING
Abstract-A model of a real-time intrusion-detection expert system
capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that
security violations can be detected by monitoring a system’s audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects
in terms of metrics and statistical models, and rules for acquiring
knowledge about this behavior from audit records and for detecting
anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system.
Index Terms-Abnormal behavior, auditing, intrusions, monitoring, profiles, security, statistical measures.
I. INTRODUCTION
THIS paper describes a model for a real-time intrusion-detection expert system that aims to detect a wide range of security violations ranging from attempted break-ins by outsiders to system penetrations and abuses by insiders. The development of a real-time intrusion-detection system is motivated by four factors: 1) most existing
systems have security flaws that render them susceptible
to intrusions, penetrations, and other forms of abuse;
finding and fixing all these deficiencies is not feasible for
technical and economic reasons; 2) existing systems with
known flaws are not easily replaced by systems that are
more secure-mainly because the systems have attractive
features that are missing in the more-secure systems, or
else they cannot be replaced for economic reasons; 3) developing systems that are absolutely secure is extremely
difficult, if not generally impossible; and 4) even the most
secure systems are vulnerable to abuses by insiders who
misuse their privileges.
The model is based on the hypothesis that exploitation
of a system’s vulnerabilities involves abnormal use of the
system; therefore, security violations could be detected
from abnormal patterns of system usage. The following
examples illustrate:
* Attempted break-in: Someone attempting to break
into a system might generate an abnormally high rate of
password failures with respect to a single account or the
system as a whole.
* Masquerading or successful break-in: Someone logging into a system through an unauthorized account and
password might have a different login time, location, or
connection type from that of the account’s legitimate user.
In addition, the penetrator’s behavior may differ considerably from that of the legitimate user; in particular, he
might spend most of his time browsing through directories
and executing system status commands, whereas the legitimate user might concentrate on editing or compiling
and linking programs. Many break-ins have been discovered by security officers or other users on the system who
have noticed the alleged user behaving strangely.
* Penetration by legitimate user: A user attempting to
penetrate the security mechanisms in the operating system
might execute different programs or trigger more protection violations from attempts to access unauthorized files
or programs. If his attempt succeeds, he will have access
to commands and files not normally permitted to him.
* Leakage by legitimate user: A user trying to leak
sensitive documents might log into the system at unusual
times or route data to remote printers not normally used.
* Inference by legitimate user: A user attempting to
obtain unauthorized data from a database through aggregation and inference might retrieve more records than
usual.
* Trojan horse: The behavior of a Trojan horse planted
in or substituted for a program may differ from the legitimate program in terms of its CPU time or I/O activity.
* Virus: A virus planted in a system might cause an
increase in the frequency of executable files rewritten,
storage used by executable files, or a particular program
being executed as the virus spreads.
* Denial-of-Service: An intruder able to monopolize a
resource (e.g., network) might have abnormally high activity with respect to the resource, while activity for all
other users is abnormally low.
Of course, the above forms of aberrant usage can also
be linked with actions unrelated to security. They could
be a sign of a user changing work tasks, acquiring new
skills, or making typing mistakes; software updates; or
changing workload on the system. An important objective
of our current research is to determine what activities and
statistical measures provide the best discriminating power;
that is, have a high rate of detection and a low rate of
false alarms.
II. OVERVIEW OF MODEL
The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system, which we have
called IDES. A more detailed description of the design
and application of IDES is given in our final
report [1].
The model has six main components:
* Subjects: Initiators of activity on a target system, normally users.
* Objects: Resources managed by the system-files,
commands, devices, etc.
* Audit records: Generated by the target system in response to actions performed or attempted by subjects on
objects-user login, command execution, file access, etc.
* Profiles: Structures that characterize the behavior of
subjects with respect to objects in terms of statistical metrics and models of observed activity. Profiles are automatically generated and initialized from templates.
* Anomaly records: Generated when abnormal behavior is detected.
* Activity rules: Actions taken when some condition is
satisfied, which update profiles, detect abnormal behavior, relate anomalies to suspected intrusions, and produce
reports.
The model can be regarded as a rule-based pattern
matching system. When an audit record is generated, it is
matched against the profiles. Type information in the
matching profiles then determines what rules to apply to
update the profiles, check for abnormal behavior, and report anomalies detected. The security officer assists in establishing profile templates for the activities to monitor,
but the rules and profile structures are largely system-independent.
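The matching flow just described can be pictured with a small sketch. This is not from the paper; the profile methods below stand in for the activity rules, whose details depend on the metric and statistical model chosen in later sections.

    # Hypothetical sketch of the rule-based matching loop described above.
    # A "profile" here is any object exposing matches/update/is_abnormal;
    # concrete metrics and statistical tests are discussed in Section V.
    def process_audit_record(record, profiles, anomaly_log):
        """Match one audit record against all profiles and apply the activity rules."""
        for profile in profiles:
            if not profile.matches(record):        # pattern match on subject, object, action
                continue
            profile.update(record)                 # rule: update the profile's observed distribution
            if profile.is_abnormal():              # rule: statistical test against the threshold
                anomaly_log.append((record, profile))   # stands in for an anomaly record
        return anomaly_log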
The basic idea is to monitor the standard operations on
a target system: logins, command and program executions, file and device accesses, etc., looking only for deviations in usage. The model does not contain any special
features for dealing with complex actions that exploit a
known or suspected security flaw in the target system; indeed, it has no knowledge of the target system’s security
mechanisms or its deficiencies. Although a flaw-based detection mechanism may have some value, it would be
considerably more complex and would be unable to cope
with intrusions that exploit deficiencies that are not suspected or with personnel-related vulnerabilities. By detecting the intrusion, however, the security officer may be
better able to locate vulnerabilities.
The remainder of this paper describes the components
of the model in more detail.
III. SUBJECTS AND OBJECTS
Subjects are the initiators of actions in the target system. A subject is typically a terminal user, but might also
be a process acting on behalf of users or groups of users,
or might be the system itself. All activity arises through
commands initiated by subjects. Subjects may be grouped
into different classes (e.g., user groups) for the purpose
of controlling access to objects in the system. User groups
may overlap.
Objects are the receptors of actions and typically in-
clude such entities as files, programs, messages, records,
terminals, printers, and user- or program-created structures. When subjects can be recipients of actions (e.g.,
electronic mail), then those subjects are also considered
to be objects in the model. Objects are grouped into
classes by type (program, text file, etc.). Additional structure may also be imposed, e.g., records may be grouped
into files or database relations; files may be grouped into
directories. Different environments may require different
object granularity; e.g., for some database applications,
granularity at the record level may be desired, whereas
for most applications, granularity at the file or directory
level may suffice.
IV. AUDIT RECORDS
Audit Records are 6-tuples representing actions performed by subjects on objects:

<Subject, Action, Object, Exception-Condition, Resource-Usage, Time-stamp>

where
* Subject: The initiator of the action (see Section III).
* Action: Operation performed by the subject on or with the object, e.g., login, logout, read, execute.
* Object: The receptor of the action (see Section III).
* Exception-Condition: Denotes which, if any, exception condition is raised on the return. This should be the
actual exception condition raised by the system, not just
the apparent exception condition returned to the subject.
* Resource-Usage: List of quantitative elements,
where each element gives the amount used of some resource, e.g., number of lines or pages printed, number of
records read or written, CPU time or I/O units used, session elapsed time.
* Time-stamp: Unique time/date stamp identifying
when the action took place.
We assume that each field is self-identifying, either implicitly or explicitly, e.g., the action field either implies
the type of the expected object field or else the object field
itself specifies its type. If audit records are collected for
multiple systems, then an additional field is needed for a
system identifier.
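As a reading aid, the 6-tuple can be written down as a record type. This is an illustrative encoding only; the field types, the dictionary used for Resource-Usage, and the example values are assumptions, not something the paper prescribes.

    from typing import NamedTuple

    class AuditRecord(NamedTuple):
        """One <Subject, Action, Object, Exception-Condition, Resource-Usage, Time-stamp> tuple."""
        subject: str           # initiator of the action, e.g., a user name
        action: str            # operation performed, e.g., "login", "read", "execute"
        obj: str               # receptor of the action, e.g., a file, device, or command
        exception: object      # exception condition raised by the system (0 if none)
        resource_usage: dict   # quantitative elements, e.g., {"CPU": 2, "RECORDS": 0}
        timestamp: int         # unique time/date stamp of the action

    # Hypothetical example: a successful login by user Smith from terminal TTY7.
    login = AuditRecord("Smith", "login", "TTY7", 0, {}, 11058521677)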
Since each audit record specifies a subject and object,
it is conceptually associated with some cell in an “audit
matrix” whose rows correspond to subjects and columns
to objects. The audit matrix is analogous to the “access-matrix” protection model, which specifies the rights of
subjects to access objects; that is, the actions that each
subject is authorized to perform on each object. Our intrusion-detection model differs from the access-matrix
model by substituting the concept of “action performed”
(as evidenced by an audit record associated with a cell in
the matrix) for “action authorized” (as specified by an
access right in the matrix cell). Indeed, since activity is
observed without regard for authorization, there is an implicit assumption that the access controls in the system
permitted an action to occur. The task of intrusion detection is to determine whether activity is unusual enough to
suspect an intrusion. Every statistical measure used for
this purpose is computed from audit records associated
with one or more cells in the matrix.
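One way to picture the audit matrix is as a map from (subject, object) cells to the audit records observed for that cell, with statistical measures computed over one or more cells. A minimal sketch, with hypothetical helper names:

    from collections import defaultdict

    # Cells of the "audit matrix": (subject, object) -> audit records observed for that cell.
    # Unlike the access-matrix model, a cell holds actions performed, not actions authorized.
    audit_matrix = defaultdict(list)

    def record_action(subject, action, obj, exception, usage, timestamp):
        audit_matrix[(subject, obj)].append((action, exception, usage, timestamp))

    # A statistical measure is computed from the records in one or more cells, e.g.,
    # the number of exception conditions a subject has triggered on a set of objects.
    def exception_count(subject, objects):
        return sum(1
                   for obj in objects
                   for (_action, exc, _usage, _ts) in audit_matrix[(subject, obj)]
                   if exc != 0)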
Most operations on a system involve multiple objects.
For example, file copying involves the copy program, the
original file, and the copy. Compiling involves the compiler, a source program file, an object program file, and
possibly intermediate files and additional source files referenced through “include” statements. Sending an electronic mail message involves the mail program, possibly
multiple destinations in the “To:” and “cc” fields, and
possibly “include” files.
Our model decomposes all activity into single-object
actions so that each audit record references only one object. File copying, for example, is decomposed into an
execute operation on the copy command, a read operation
on the source file, and a write operation on the destination
file. The following illustrates the audit records generated
in response to a command
COPY GAME.EXE TO <Library>GAME.EXE

issued by user Smith to copy an executable GAME file into the directory <Library>; the copy is aborted because Smith does not have write permission to <Library>:
(Smith, execute, COPY.EXE, 0, CPU=00002, 11058521678)
(Smith, read, GAME.EXE, 0, RECORDS=0, 11058521679)
(Smith, write, <Library>GAME.EXE, write-viol, RECORDS=0, 11058521680)
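Restated as data, the decomposition above yields three single-object records in the order (Subject, Action, Object, Exception-Condition, Resource-Usage, Time-stamp); the dictionary form of the Resource-Usage field is an assumption made for readability.

    # The COPY command above, decomposed into single-object audit records.
    copy_records = [
        ("Smith", "execute", "COPY.EXE",          0,            {"CPU": 2},     11058521678),
        ("Smith", "read",    "GAME.EXE",          0,            {"RECORDS": 0}, 11058521679),
        ("Smith", "write",   "<Library>GAME.EXE", "write-viol", {"RECORDS": 0}, 11058521680),
    ]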
Decomposing complex actions has three advantages. First, since objects are the protectable entities of a system, the decomposition is consistent with the protection mechanisms of systems. Thus, IDES can potentially discover both attempted subversions of the access controls (by noting an abnormality in the number of exception conditions returned) and successful subversions (by noting an abnormality in the set of objects accessible to the subject). Second, single-object audit records greatly simplify the model and its application. Third, the audit records produced by existing systems generally contain a single object, although some systems provide a way of linking together the audit records associated with a “job step” (e.g., copy or compile) so that all files accessed during execution of a program can be identified.

The target system is responsible for auditing and for transmitting audit records to the intrusion-detection system for analysis (it may also keep an independent audit trail). The time at which audit records are generated determines what type of data is available. If the audit record for some action is generated at the time an action is requested, it is possible to measure both successful and unsuccessful attempts to perform the activity, even if the action should abort (e.g., because of a protection violation) or cause a system crash. If it is generated when the action completes, it is possible to measure the resources consumed by the action and exception conditions that may cause the action to terminate abnormally (e.g., because of resource overflow). Thus, auditing an activity after it completes has the advantage of providing more information, but the disadvantage of not allowing immediate detection of abnormalities, especially those related to break-ins and system crashes. Thus, activities such as login, execution of high risk commands (e.g., to acquire special “superuser” privileges), or access to sensitive data should be audited when they are attempted so that penetrations can be detected immediately; if resource-usage data are also desired, additional auditing can be performed on completion as well. For example, access to a database containing highly sensitive data may be monitored when the access is attempted and then again when it completes to report the number of records retrieved or updated. Most existing audit systems monitor session activity at both initiation (login), when the time and location of login are recorded, and termination (logout), when the resources consumed during the session are recorded. They do not, however, monitor both the start and finish of command and program execution or file accesses. IBM’s System Management Facilities (SMF) [2], for example, audit only the completion of these activities.

Although the auditing mechanisms of existing systems approximate the model, they are typically deficient in terms of the activities monitored and record structures generated. For example, Berkeley 4.2 UNIX [3] monitors command usage but not file accesses or file protection violations. Some systems do not record all login failures. Programs, including system programs, invoked below the command level are not explicitly monitored (their activity is included in that for the main program). The level at which auditing should take place, however, is unclear, since too much auditing could severely degrade performance on the target system or overload the intrusion-detection system.

Deficiencies in the record structures are also present. Most SMF audit records, for example, do not contain a subject field; the subject must be reconstructed by linking together the records associated with a given job. Protection violations are sometimes provided through separate record formats rather than as an exception condition in a common record; VM password failures at login, for example, are handled this way (there are separate records for successful logins and password failures).

Another problem with existing audit records is that they contain little or no descriptive information to identify the values contained therein. Every record type has its own structure, and the exact format of each record type must be known to interpret the values. A uniform record format with self-identifying data would be preferable so that the intrusion-detection software can be system-independent. This could be achieved either by modifying the software that produces the audit records in the target system, or by writing a filter that translates the records into a standard format.

V. PROFILES

An activity profile characterizes the behavior of a given subject (or set of subjects) with respect to a given object (or set thereof), thereby serving as a signature or description of normal activity for its respective subject(s) and
object(s). Observed behavior is characterized in terms of
a statistical metric and model. A metric is a random variable x representing a quantitative measure accumulated
over a period. The period may be a fixed interval of time
(minute, hour, day, week, etc.), or the time between two
audit-related events (i.e., between login and logout, program initiation and program termination, file open and file
close, etc.). Observations (sample points) xi of x obtained
from the audit records are used together with a statistical
model to determine whether a new observation is abnormal. The statistical model makes no assumptions about
the underlying distribution of x; all knowledge about x is
obtained from observations. Before describing the structure, generation, and application of profiles, we shall first
discuss statistical metrics and models.
A. Metrics
We define three types of metrics:
* Event Counter: x is the number of audit records satisfying some property occurring during a period (each audit record corresponds to an event). Examples are number
of logins during an hour, number of times some command
is executed during a login session, and number of password failures during a minute.
* Interval Timer: x is the length of time between two
related events; i.e., the difference between the timestamps in the respective audit records. An example is the
length of time between successive logins into an account.
* Resource Measure: x is the quantity of resources
consumed by some action during a period as specified in
the Resource-Usage field of the audit records. Examples
are the total number of pages printed by a user per day
and total amount of CPU time consumed by some program during a single execution. Note that a resource measure in our intrusion-detection model is implemented as
an event counter or interval timer on the target system.
For example, the number of pages printed during a login
session is implemented on the target system as an event
counter that counts the number of print events between
login and logout; CPU time consumed by a program as
an interval timer that runs between program initiation and
termination. Thus, whereas event counters and interval
timers measure events at the audit-record level, resource
measures acquire data from events on the target system
that occur at a level below the audit records. The Resource-Usage field of audit records thereby provides a
means of data reduction so that fewer events need be explicitly recorded in audit records.
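The three metric types can be sketched as small accumulators that each produce one observation x per period or per pair of related events. The class and method names below are hypothetical; the paper defines the metrics abstractly.

    # Illustrative accumulators for the three metric types.

    class EventCounter:
        """x = number of matching audit records seen during the current period."""
        def __init__(self):
            self.count = 0
        def observe(self, record):
            self.count += 1
        def end_period(self):
            x, self.count = self.count, 0    # emit the observation and reset for the next period
            return x

    class IntervalTimer:
        """x = time between two related events, taken from their audit-record timestamps."""
        def observation(self, first_timestamp, second_timestamp):
            return second_timestamp - first_timestamp

    class ResourceMeasure:
        """x = quantity of some resource consumed, read from the Resource-Usage field."""
        def __init__(self, resource_name):
            self.resource_name = resource_name            # e.g., "CPU" or "RECORDS"
        def observation(self, resource_usage):
            return resource_usage.get(self.resource_name, 0)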
B. Statistical Models

Given a metric for a random variable x and n observations x1, ..., xn, the purpose of a statistical model of x is to determine whether a new observation xn+1 is abnormal with respect to the previous observations. The following models may be included in IDES:

1) Operational Model: This model is based on the operational assumption that abnormality can be decided by comparing a new observation of x against fixed limits. Although the previous sample points for x are not used, presumably the limits are determined from prior observations of the same type of variable. The operational model is most applicable to metrics where experience has shown that certain values are frequently linked with intrusions. An example is an event counter for the number of password failures during a brief period, where more than 10, say, suggests an attempted break-in.
2) Mean and Standard Deviation Model: This model is based on the assumption that all we know about x1, ..., xn are the mean and standard deviation as determined
from its first two moments:
sum = x1 + ... + xn
sumsquares = x1^2 + ... + xn^2
mean = sum/n
stdev = sqrt((sumsquares - mean^2)/(n - 1))
A new observation xn+1 is defined to be abnormal if it falls outside a confidence interval that is d standard deviations from the mean for some parameter d:

mean + d × stdev

By Chebyshev’s inequality, the probability of a value falling outside this interval is at most 1/d^2; for d = 4, for
example, it is at most 0.0625. Note that 0 (or null) occurrences should be included so as not to bias the data.
This model is applicable to event counters, interval timers, and resource measures accumulated over a fixed time
interval or between two related events. It has two advantages over an operational model. First, it requires no prior
knowledge about normal activity in order to set limits;
instead, it learns what constitutes normal activity from its
observations, and the confidence intervals automatically
reflect this increased knowledge. Second, because the
confidence intervals depend on observed data, what is
considered to be normal for one user can be considerably
different from what is normal for another.
A slight variation on the mean and standard deviation
model is to weight the computations, with greater weights
placed on more recent values.
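A minimal sketch of the mean and standard deviation test, keeping only the count, sum, and sum of squares as the profile's parameters. The variance is computed here from the plain second-moment form; the paper's own estimator is normalized slightly differently, and a recency-weighted variant would decay sum and sumsquares before each update.

    import math

    class MeanStdevModel:
        """Keeps count, sum, and sum-of-squares; flags an observation that falls more than
        d standard deviations from the running mean (Chebyshev bound: probability <= 1/d^2)."""

        def __init__(self, d=4.0):
            self.d = d
            self.n = 0
            self.sum = 0.0
            self.sumsquares = 0.0

        def is_abnormal(self, x):
            if self.n < 2:
                return False                 # not enough history to form an interval yet
            mean = self.sum / self.n
            variance = max(self.sumsquares / self.n - mean * mean, 0.0)
            return abs(x - mean) > self.d * math.sqrt(variance)

        def add(self, x):                    # include 0/null observations to avoid biasing the data
            self.n += 1
            self.sum += x
            self.sumsquares += x * x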
3) Multivariate Model: This model is similar to the
mean and standard deviation model except that it is based
on correlations among two or more metrics. This model
would be useful if experimental data show that better discriminating power can be obtained from combinations of
related measures rather than individually-e.g., CPU time
and I/O units used by a program, login frequency, and session elapsed time (which may be inversely related).
4) Markov Process Model: This model, which applies only to event counters, regards each distinct type of event (audit record) as a state variable, and uses a state transition matrix to characterize the transition frequencies between states (rather than just the frequencies of the individual states, i.e., audit records, taken separately). A
new observation is defined to be abnormal if its probability as determined by the previous state and the transition matrix is too low. This model might be useful for looking at transitions between certain commands where command sequences were important.

5) Time Series Model: This model, which uses an interval timer together with an event counter or resource measure, takes into account the order and interarrival times of the observations x1, ..., xn, as well as their values. A new observation is abnormal if its probability of occurring at that time is too low. A time series has the advantage of measuring trends of behavior over time and detecting gradual but significant shifts in behavior, but the disadvantage of being more costly than mean and standard deviation.

Other statistical models can be considered, for example, models that use more than the first two moments but less than the full set of values.

C. Profile Structure

An activity profile contains information that identifies the statistical model and metric of a random variable, as well as the set of audit events measured by the variable. The structure of a profile contains 10 components, the first 7 of which are independent of the specific subjects and objects measured:

<Variable-Name, Action-Pattern, Exception-Pattern, Resource-Usage-Pattern, Period, Variable-Type, Threshold, Subject-Pattern, Object-Pattern, Value>

Subject- and Object-Independent Components:
* Variable-Name: Name of variable.
* Action-Pattern: Pattern that matches zero or more actions in the audit records, e.g., “login,” “read,” “execute.”
* Exception-Pattern: Pattern that matches on the Exception-Condition field of an audit record.
* Resource-Usage-Pattern: Pattern that matches on the Resource-Usage field of an audit record.
* Period: Time interval for measurement, e.g., day, hour, minute (expressed in terms of clock units). This component is null if there is no fixed time interval; i.e., the period is the duration of the activity.
* Variable-Type: Name of abstract data type that defines a particular type of metric and statistical model, e.g., event counter with mean and standard deviation model.
* Threshold: Parameter(s) defining limit(s) used in statistical test to determine abnormality. This field and its interpretation is determined by the statistical model (Variable-Type). For the operational model, it is an upper (and possibly lower) bound on the value of an observation; for the mean and standard deviation model, it is the number of standard deviations from the mean.

Subject- and Object-Dependent Components:
* Subject-Pattern: Pattern that matches on the Subject field of audit records.
* Object-Pattern: Pattern that matches on the Object field of audit records.
* Value: Value of current (most recent) observation and parameters used by the statistical model to represent the distribution of previous values. For the mean and standard deviation model, these parameters are count, sum, and sum-of-squares (first two moments). The operational model requires no parameters.

A profile is uniquely identified by Variable-Name, Subject-Pattern, and Object-Pattern. All components of a profile are invariant except for Value.

Although the model leaves unspecified the exact format for patterns, we have identified the following SNOBOL-like constructs as being useful:

'string'     String of characters.
*            Wild card matching any string.
#            Match any numeric string.
IN(list)     Match any string in list.
p → name     The string matched by p is associated with name.
p1 p2        Match pattern p1 followed by p2.
p1 | p2      Match pattern p1 or p2.
p1, p2       Match patterns p1 and p2.
¬p           Match anything but pattern p.

Examples of patterns are:

'Smith'
* → User              -- match any string and assign to User
'<Library>*'          -- match files in <Library> directory
IN(Special-Files)     -- match files in Special-Files
'CPU=' # → Amount     -- match string 'CPU=' followed by integer; assign integer to Amount

The following is a sample profile for measuring the quantity of output to user Smith's terminal on a session basis. The variable type ResourceByActivity denotes a resource measure using the mean and standard deviation model.

Variable-Name:            SessionOutput
Action-Pattern:           'logout'
Exception-Pattern:        0
Resource-Usage-Pattern:   'SessionOutput=' # → Amount
Period:
Variable-Type:            ResourceByActivity
Threshold:                4
Subject-Pattern:          'Smith'
Object-Pattern:
Value:                    record of ...
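For reference, the ten components can be laid out as a record and populated with the SessionOutput example above. The Python types, the dictionary used for Value, and the Object-Pattern value (assumed here to match any object) are illustrative assumptions rather than part of the model.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Profile:
        # Subject- and object-independent components
        variable_name: str
        action_pattern: str
        exception_pattern: str
        resource_usage_pattern: str
        period: Optional[str]       # None when the period is the duration of the activity
        variable_type: str          # metric plus statistical model, e.g., "ResourceByActivity"
        threshold: float            # e.g., the number of standard deviations d
        # Subject- and object-dependent components
        subject_pattern: str
        object_pattern: str
        value: dict = field(default_factory=dict)   # current observation and model parameters

    session_output = Profile(
        variable_name="SessionOutput",
        action_pattern="'logout'",
        exception_pattern="0",
        resource_usage_pattern="'SessionOutput=' # -> Amount",
        period=None,
        variable_type="ResourceByActivity",
        threshold=4,
        subject_pattern="'Smith'",
        object_pattern="*",                          # assumed: any object (not legible in the source)
        value={"count": 0, "sum": 0.0, "sumsquares": 0.0},
    )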
Whenever the intrusion-detection system receives an
audit record that matches a variable’s patterns, it updates
the variable’s distribution and checks for abnormality. The
distribution of values for a variable is thus derived-i.e.,
learned-as audit records matching the profile patterns are
processed.
D. Profiles for Classes
Profiles can be defined for individual subject-object
pairs (i.e., where the Subject and Object patterns match
specific names, e.g., Subject “Smith” and Object
“Foo”), or for aggregates of subjects and objects (i.e.,
where the Subject and Object patterns match sets of
names) as shown in Fig. 1. For example, file-activity profiles could be created for pairs of individual users and files,
for groups of users with respect to specific files, for individual users with respect to classes of files, or for groups of users with respect to file classes.

[Fig. 1. Hierarchy of subjects and objects: a lattice over the profile levels System; SubjectClass and ObjectClass; Subject, SubjectClass-ObjectClass, and Object; Subject-ObjectClass and SubjectClass-Object; and Subject-Object.]

The nodes in the lattice are interpreted as follows:
* Subject-Object: Actions performed by single subject on single object-e.g., user Smith-file Foo.
* Subject-Object Class: Actions performed by single
subject aggregated over all objects in the class. The class
of objects might be represented as a pattern match on a
subfield of the Object field that specifies the object’s type
(class), as a pattern match directly on the object’s name
(e.g., the pattern “*.EXE” for all executable files), or as
a pattern match that tests whether the object is in some
list (e.g., “IN(hit-list)”).
* Subject Class-Object: Actions performed on single
object aggregated over all subjects in the class-e.g.,
privileged users-directory file <Library>, nonprivileged users-directory file <Library>.
* Subject Class-Object Class: Actions aggregated over
all subjects in the class and objects in the class-privileged users-system files, nonprivileged users-system files.
* Subject: Actions performed by single subject aggregated over all objects-e.g., user session activity.
* Object: Actions performed on a single object aggregated over all subjects-e.g., password file activity.
* Subject Class: Actions aggregated over all subjects
in the class-e.g., privileged user activity, nonprivileged
user activity.
* Object Class: Actions aggregated over all objects in
the class-e.g., executable file activity.
* System: Actions aggregated over all subjects and objects.
The random variable represented by a profile for a class
can aggregate activity for the class in two ways:
* Class-as-a-whole activity: The set of all subjects or
objects in the class is treated as a single entity, and each
observation of the random variable represents aggregate
activity for the entity. An example is a profile for the class
of all users representing the average number of logins into
the system per day, where all users are treated as a single
entity.
* Aggregate individual activity: The subjects or objects in the class are treated as distinct entities, and each
observation of the random variable represents activity for
some member of the class. An example is a profile for the
class of all users characterizing the average number of
logins by any one user per day. Thus, the profile represents a “typical” member of the class.
Whereas class-as-a-whole activity can be defined by an
event counter, interval timer, or resource measure for the
class, aggregate individual activity requires separate metrics for each member of the class. Thus, it is defined in
terms of the lower-level profiles (in the sense of the lattice) for the individual class members. For example, average login frequency per day is defined as the average of
the daily total frequencies in the individual user login profiles. A measure for a class-as-a-whole could also be defined in terms of lower-level profiles, but this is not necessary.
The two methods of aggregation serve different purposes with respect to intrusion detection. Class-as-a-whole
activity reveals whether some general pattern of behavior
is normal with respect to a class. A variable that gives the
frequency with which the class of executable program files
are updated in the system per day, for example, might be
useful for detecting the injection of a virus into the system
(which causes executable files to be rewritten as the virus
spreads). A frequency distribution of remote logins into
the class of dial-up lines might be useful for detecting
attempted break-ins.
Aggregate individual activity reveals whether the behavior of a given user (or object) is consistent with that
of other users (or objects). This may be useful for detecting intrusions by new users who have deviant behavior
from the start.
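The two aggregation methods can be contrasted with a small sketch over hypothetical per-user daily login counts: class-as-a-whole yields one observation for the class treated as a single entity, while aggregate individual activity yields one observation per member.

    # Two ways to turn per-user daily login counts into observations for a user class.
    daily_logins = {"Smith": 3, "Jones": 1, "Green": 7}        # hypothetical counts

    def class_as_a_whole(counts):
        """One observation: total activity of the class treated as a single entity."""
        return sum(counts.values())

    def aggregate_individual(counts):
        """One observation per member, drawn from the lower-level per-user profiles;
        together they characterize a 'typical' member of the class."""
        return list(counts.values())

    print(class_as_a_whole(daily_logins))       # 11 -> one sample for the class entity
    print(aggregate_individual(daily_logins))   # [3, 1, 7] -> samples of individual activity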
E. Profile Templates
When user accounts and objects can be created dynamically, a mechanism is needed to generate activity profiles
for new subjects and objects. Three approaches are possible:
1) Manual create: The security officer explicitly creates all profiles.
2) Automatic explicit create: All profiles for a new user
or object are generated in response to a “create” record
in the audit trail.
3) First use: A profile is automatically generated when
a subject (new or old) first uses an object (new or old).
The first approach has the obvious disadvantage of requiring manual intervention on the part of the security officer. The second approach overcomes this disadvantage,
but introduces two others. The first is that it does not automatically deal with startup conditions, where there will
be many existing subjects and objects. The second is that
it requires a subject-object profile to be generated for any
pair that is a candidate for monitoring, even if the subject
never uses the particular object. This could cause many
more profiles than necessary to be generated. For example, suppose file accesses are monitored at the level of
individual users and files. Consider a system with 1000
users, where each user has an average of 200 files, giving
200 000 files total and 200 000 000 possible combinations of user-file pairs. If each user accesses at most 300
of those files, however, only 300 000 profiles are needed.
The IDES model follows the third approach, which
overcomes the disadvantages of the others by generating
profiles when they are needed from templates. A profile
template has the same structure as the profile it generates,
except that the subject and object patterns define both a
matching pattern (on the audit records) and a replacement
pattern (to place in the generated profile). The format for
the fields Subject-Pattern and Object-Pattern is thus:
matching-pattern
