New Article on Enforceable Security Policies
We would like to announce a new article:
Edit Automata: Enforcement Mechanisms for Run-time Security Policies
Jay Ligatti, Lujo Bauer, and David Walker
Princeton University
Abstract:
We analyze the space of security policies that can be enforced by
monitoring and modifying programs at run time. Our program monitors,
called edit automata, are abstract machines that examine the sequence
of application program actions and transform the sequence when it
deviates from a specified policy. Edit automata have a rich set of
transformational powers: They may terminate the application, thereby
truncating the program action stream; they may suppress undesired or
dangerous actions without necessarily terminating the program; and
they may also insert additional actions into the event stream.
After providing a formal definition of edit automata, we develop a
rigorous framework for reasoning about them and their cousins: truncation
automata (which can only terminate applications), suppression automata
(which can terminate applications and suppress individual actions),
and insertion automata (which can terminate and insert). We give a
set-theoretic characterization of the policies each sort of automaton
can enforce and we provide examples of policies that can be enforced
by one sort of automaton but not another.
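
To give a concrete feel for these three powers, here is a small,
illustrative sketch in Python of an edit-automaton-style monitor (the
article itself uses a formal automaton model, not code; the action names
and the example policy below are invented for illustration):

# A rough sketch of an edit-automaton-style monitor (illustrative only).
from typing import Callable, Iterable, Iterator, List, Tuple

# One monitor step: given the current state and the next application
# action, decide what to emit and how to update the state.
#   ("allow",    [action])    -> pass the action through unchanged
#   ("suppress", [])          -> drop the action but keep running
#   ("insert",   actions)     -> splice extra actions into the stream
#   ("halt",     [])          -> truncate: stop the program here
Decision = Tuple[str, List[str]]
Step = Callable[[str, str], Tuple[Decision, str]]

def run_monitor(actions: Iterable[str], step: Step, state: str) -> Iterator[str]:
    """Transform the application's action stream according to `step`."""
    for a in actions:
        (verdict, emitted), state = step(state, a)
        if verdict == "halt":
            return                  # truncation: cut the stream short
        for e in emitted:           # allow / suppress / insert
            yield e

# Hypothetical policy: log every write, suppress network sends after a
# secret has been read, and halt the program on an explicit violation.
def example_step(state: str, action: str) -> Tuple[Decision, str]:
    if action == "violate":
        return ("halt", []), state
    if action == "read_secret":
        return ("allow", [action]), "tainted"
    if action == "send" and state == "tainted":
        return ("suppress", []), state                    # suppression
    if action == "write":
        return ("insert", ["log_write", action]), state   # insertion
    return ("allow", [action]), state

trace = ["write", "read_secret", "send", "write", "violate", "write"]
print(list(run_monitor(trace, example_step, "clean")))
# -> ['log_write', 'write', 'read_secret', 'log_write', 'write']
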
WHY TYPES RESEARCHERS SHOULD READ THIS PAPER:
In order to contain untrusted, potentially malicious code, any
language-based security system should be composed of at least two
components: (1) a strong type system and (2) run-time security
monitoring to make access control or resource-allocation decisions
that cannot generally be made at compile time. Excellent examples of
these two complementary components at work are the JVM and CLR
security systems, which combine object-oriented type systems with
stack inspection to provide end-to-end security guarantees.
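
For readers less familiar with stack inspection, here is a very rough
sketch of the idea in Python (not the actual JVM or CLR algorithm or
API; the frames, permission names, and privileged flag are simplified
for illustration): a permission check walks the call stack and denies
the request as soon as code lacking the permission appears among the
callers, unless a trusted frame has explicitly asserted privilege.

# A rough sketch of stack inspection (illustrative only).
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass
class Frame:
    code_owner: str
    permissions: FrozenSet[str]      # permissions granted to this code
    privileged: bool = False         # did this frame assert privilege?

def check_permission(stack: List[Frame], perm: str) -> bool:
    """Walk the stack from the most recent frame toward the root."""
    for frame in reversed(stack):    # most recent caller first
        if perm not in frame.permissions:
            return False             # untrusted code is on the stack
        if frame.privileged:
            return True              # trusted code took responsibility
    return True                      # every frame on the stack is trusted

system = frozenset({"read_file", "open_socket"})
applet = frozenset()                 # untrusted code holds no permissions
stack = [Frame("applet", applet), Frame("library", system)]
print(check_permission(stack, "read_file"))   # False: the applet is a caller
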
Our article presents a new, general model of run-time security
monitors, defines what it means for a monitor to enforce a run-time
security policy, and characterizes which policies are and are not
enforceable by these mechanisms. Our formalism explicitly admits the
possibility that inputs to the
run-time monitor may be a priori constrained via a type system, proof
system, or other static analysis or program transformation. We hope
researchers who are actively pursuing type-theoretic solutions to
language-based security problems can use this material to understand
how their contributions relate to and can interact with run-time
security monitors.
Technically, our paper uses familiar operational semantics to describe
our security monitors and relies on simple inductive proof techniques,
so we expect the paper to be an easy read for types researchers.
Details:
The article will appear in the International Journal of Information
Security. Submitted Dec 2002; revised June 2003. It is a significantly
extended and revised version of a paper entitled "More Enforceable
Security Policies," which first appeared in Foundations of Computer
Security, July 2002.
A preliminary version of the article can be found at:
http://www.cs.princeton.edu/sip/pub/editauto.pdf
Comments, suggestions, and questions are always welcome. This research
is part of the Princeton Polymer Project, which studies both the theory
and practice of program monitoring. For further details, see
our web site (which has links to some other related types-oriented
papers as well):
http://www.cs.princeton.edu/sip/projects/polymer/
Sincerely,
Lujo Bauer
Jay Ligatti
David Walker
{lbauer,jligatti,dpw}@cs.princeton.edu