Andrew W. Appel, Edward W. Felten, and Zhong Shao
Princeton University and Yale University
This document describes a DARPA-funded research project that we are conducting in the 3-year period 1999-2002.
Our research will dramatically expand the impact of PCC in several ways: we will generalize and simplify the safety policies to allow higher assurance and more flexibility of the policies themselves; we will provide production-quality certifying compilers for mainstream programming languages such as Java and ML; and we will use PCC to express much more sophisticated security policies than the simple safety properties that were demonstrated in the first prototypes. Our proposed work on PCC is divided into three areas:
Authentication logics are formal logics that allow us to reason about the properties of systems and protocols that verify the identity of users and decide whether or not to permit various operations. Modeling such systems provides the usual benefits of formal analysis: hidden assumptions are made explicit, redundant features are exposed, and flaws in the system may be found. Authentication logics were first used to reason about distributed protocols, but they may also be used effectively to reason about the interaction of mutually suspicious software components that are linked together, as in a mobile code system.

In addition, authentication logics allow us to reason about how systems or protocols can interoperate. If we can describe the semantics of two different systems in a single logic, we can interpret the statements and actions made in one system in terms of the other, or create a single new implementation that can interoperate with both systems.
Type-based certifying compilers are compilers that use static type information to help generate provably safe target code. Most modern programming languages (e.g., ML, Java) have a strong type system: a program that passes typechecking will never ``go wrong'' at runtime (assuming the compiler is bug-free). A conventional untyped compiler discards all the type information after typechecking. A type-preserving compiler, on the other hand, translates both the program and the (inferred) type information into the intermediate and target languages. Now we can type-check the program at the back end of the compiler; any compiler bug that compromises type safety will be detected. Since compilers are complex programs, this end-to-end checking is a significant advantage.

Maintaining type information during compilation has many other advantages: types can be used to guide and support advanced optimizations; typechecking of intermediate code is invaluable for debugging new compiler transformations; and types can act as a glue to connect language features that complicate interoperability, such as mixed data representations, multiple function calling conventions, and different memory management protocols.
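To make this concrete, the following is a minimal sketch, in Standard ML, of a typed intermediate language whose terms carry explicit type annotations. The datatype and checker are invented for illustration and are far simpler than FLINT; the point is that any phase can re-run typeOf, so a transformation that destroys type safety is caught immediately:

    (* a tiny typed intermediate language; terms carry explicit types *)
    datatype ty = Int | Arrow of ty * ty

    datatype exp
      = Var of string * ty           (* variables are annotated with types *)
      | Num of int
      | Lam of string * ty * exp     (* argument type is explicit *)
      | App of exp * exp

    (* re-derive the type of a term after any compiler phase; a pass
       that breaks type safety makes this checker raise Fail *)
    fun typeOf (env : (string * ty) list) (e : exp) : ty =
      case e of
        Var (x, t) =>
          if List.exists (fn (y, t') => x = y andalso t = t') env
          then t else raise Fail "variable used at wrong type"
      | Num _ => Int
      | Lam (x, t, body) => Arrow (t, typeOf ((x, t) :: env) body)
      | App (f, a) =>
          (case typeOf env f of
             Arrow (t1, t2) =>
               if typeOf env a = t1 then t2
               else raise Fail "argument type mismatch"
           | Int => raise Fail "application of a non-function")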
Proof-carrying code, as pioneered by Necula and Lee, allows a code producer to provide a (compiled) program to a host, along with a formal proof of safety. The host can specify a safety policy and a set of axioms for reasoning about safety; the producer's proof must be in terms of those axioms.

PCC relies on the same formal methods as program verification, but it has the significant advantage that safety properties are much easier to prove than program correctness. The producer's formal proof will not, in general, prove that the code produces a correct or meaningful result; it guarantees only that execution of the code can do no harm. Thus PCC cannot replace other methods of program assurance. On the other hand, the proofs can be mechanically checked by the host; the producer need not be trusted at all, since a valid proof is incontrovertible evidence of safety.
Figure 1: How proof-carrying code is used.

This figure shows how proof-carrying code is used. The safety policy is encoded into the verification-condition generator (VCgen), which maps each machine-language program to a safety theorem. The producer and consumer each execute VCgen to get the theorem. The producer then finds a proof of the theorem and sends it with the program. The consumer checks that the theorem matches the program and that the proof matches the theorem. If so, it's safe to execute the program.
Checking proofs is a simple and mechanical process, about as simple as programming-language type-checking. This means that the trusted computing base (TCB) -- the part of the system in which an error can lead to breached security -- is quite small and efficient, comprising only the VCgen (which embodies the safety policy) and the Checker. Entirely outside the TCB are the producer's software engineering methodology, program source code, optimizing compiler, network transmission hardware/software, and even the theorem prover itself.
On the other hand, finding proofs of theorems is in general intractable. For the proving process to be automatic at the code producer, we must exploit the structure of the problem. The safety proofs will typically be based on type-checking or dataflow properties of the source program, so it will be useful for the compiler to provide type and dataflow information to the prover; these are the ``Hints'' shown in Figure 1.
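The division of labor can be summarized in Standard ML signatures. This is a sketch with invented names, not the interface of an actual PCC implementation; only vcgen and check sit in the TCB:

    signature PCC = sig
      type program     (* native machine code *)
      type theorem     (* the safety condition of a program *)
      type proof

      (* trusted: encodes the safety policy *)
      val vcgen : program -> theorem

      (* trusted: decides whether a proof proves a theorem *)
      val check : proof * theorem -> bool

      (* untrusted: runs at the producer, may consult compiler hints *)
      val prove : theorem -> proof
    end

    (* the consumer re-derives the theorem itself, so a dishonest
       producer can at worst be rejected, never believed *)
    functor Consumer (P : PCC) = struct
      fun accept (p : P.program, pf : P.proof) : bool =
        P.check (pf, P.vcgen p)
    end

Note that prove lies entirely outside the TCB: a wrong or malicious proof simply fails check.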
Another significant advantage of PCC is that it does not require the use of a particular compiler, or of any compiler! As long as the code producer can provide the proof, the code consumer can be assured of safety. This significantly increases the flexibility available to system designers.
Outline of the work
Figure 2: Subprojects and their relationships.
This figure is an overview of the component parts of this research project. Security Logic is a new protocol for secure distributed computing; Prover/Checker is a proof-carrying code infrastructure; Certifying Compiler is a certifying compiler for Standard ML based on the FLINT intermediate-language technology; FLINT PCC is a set of technical lemmas enabling the construction of a proof-carrying code system for ML; Security PCC is the implementation of the security logic as a proof-carrying code system; ML PCC is a proof-carrying code certifying compiler for Standard ML; ML+Security PCC is a system allowing program components written in ML to interact using a high-level security policy; FLINT/Java-sub is a certifying compiler for a Java subset; Java-sub PCC is a proof-carrying certifying compiler for subset Java; and the components in grey indicate a potential path for further system integration.
Language independence, scalability, and high-assurance safety policies
Appel will do research related to engineering the foundations of proof-carrying code. He has begun a preliminary research project in this technology, jointly with Amy Felty of Bell Laboratories, that already indicates the feasibility of several of the following research goals: engineering proofs to be sized proportionately to programs; finding universal, high-assurance safety policies; implementing a realistic set of programming-language features; and specifying higher-level security policies.

Theorem and proof engineering
A foundational result of modern mathematics is Gödel's incompleteness theorem, which implies that some theorems may have extremely long proofs. But for a practical proof-carrying code system, we cannot afford to transmit and check huge proofs. How can we keep proofs small? Necula has done some preliminary work in this area: he uses loop invariants based on simple typechecking properties, and he avoids duplication of certain terms in the representation of proofs. But there are still several ways in which proofs can blow up exponentially in size.
For example, if a program contains a DAG of dataflow -- where a computed data value is used for more than one purpose -- a proof in the standard methodology is likely to reprove results about that data value for each of the places where it is used. This is a natural consequence of the way substitution is used in Hoare-logic program verification. We have found systematic ways to factor the proofs of programs to avoid this duplication. In effect, we use common subexpression elimination -- a compiler technology -- on proofs as well as on programs.
Another example concerns a DAG of control flow. If a program address L can be reached (by a control-flow jump) from several places J1, J2, J3, then the verification condition for L becomes part of the verification conditions for each of the statements J1,J2,J3. Now there are three copies of this formula, in which different substitutions will be performed in the proof process. A conventional theorem prover will quickly lose track of the inherent similarity of these formulas, and will reprove similar results several times. But we have found that another compiler technology -- called static single-assignment form -- is useful for factoring proofs of programs to avoid this kind of blowup.
Another way to eliminate redundant parts of proofs is to factor theorems into subtheorems (lemmas). This way we can replace many similar subproofs with a single one. It will then be necessary to send the lemmas and their proofs along with the main proof. This kind of ``software engineering and modularization'' of logic is often done informally, but the engineering issues related to simultaneously minimizing the trusted computing base (i.e. axiom set) and minimizing proof size have not been adequately addressed in the literature.
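As a schematic illustration (not drawn from an actual safety proof), suppose the verification condition requires showing, at each of n load instructions, that the address loaded from is readable. Instead of repeating the derivation n times, we prove a single lemma

    \forall a.\; \mathit{readable}(a) \supset \mathit{safe}(\mathtt{load}\; a)

once, and each use site then contributes only a small instantiation of the lemma; this part of the proof grows with the number of uses rather than with the size of the repeated derivation.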
Proofs still tend to contain many redundant components, such as copies of formulas and more copies of their subformulas. Necula has already found ways to remove some of these redundancies, but we are investigating more general mechanisms. By letting lemmas contain (restricted) computations that regenerate subterms on the fly, we can trade off proof size with checking time in a controlled way.
Finally, we will investigate the choice of logical infrastructure best suited to proof-carrying code. Necula and Lee have used the Edinburgh LF syntax, both in the Elf theorem prover and in a custom-made prover implemented directly in Standard ML. We have been working in Lambda Prolog with rather more explicit proof representations, but we expect to implement prototypes in each system to compare their advantages and disadvantages directly.
(This research relates to the Prover/Checker component of Figure 2.)
Universal, high-assurance safety policies
The specification of a safety property should not overspecify how that property is to be achieved. The demonstrations of PCC by Necula and Lee have achieved safety by expressing ML and C typing judgements as logic formulas, so that a code consumer can check the typing judgements relevant to compiled ML or C code. The logic for compiled C programs would contain inference rules specifying under what conditions a bit-pattern is an (int), under what conditions a (float), and so on. There would also be rules to reason about structured data; for example:

    If p is the address of a struct {int x; int y;}, then mem[p+4] is an int (the y field).

This statement deals with memory words and the layout of (structs) in memory. It allows us to prove properties of the machine code itself, not just of some abstract intermediate representation. However, this logical inference rule is specialized to a particular programming language (C), a particular word size (4 bytes), and a particular way of aligning and laying out data structures. A code consumer with this rule in its safety policy will not be able to accept programs written in a different language, or using different size integers, or even a C program compiled with a compiler that uses different alignment rules for data structures!

We plan to use much more general inference rules. Instead of building specific data-structure-layout rules (such as the one shown above) into our safety policy, we will allow the code producer to prove such rules as little theorems (lemmas). These lemmas can then be attached to the proof (carried with the code) to justify the use of a particular type system and layout.
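For instance, the struct-layout rule above might be rendered as the inference rule

    \frac{p : \mathtt{struct\;\{int\;x;\;int\;y;\}}}{\mathtt{mem}[p+4] : \mathtt{int}}

(the notation here is illustrative). Under our approach this rule would not be a trusted axiom of the safety policy; the producer would prove it from lower-level axioms about memory words and alignment, and ship it with the code as a lemma.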
This will allow the code producer to choose the programming language most appropriate to a particular application for mobile code. The producer will have much more flexibility in the manner in which the mobile code is proved safe.
Furthermore, by removing the type-layout inference rules from the safety policy, they are also removed from the trusted computing base. This leads to a higher-assurance system: any assumptions about the safety of a particular code-producer's type system must be proved.
(This research relates to the FLINT PCC component of Figure 2, which also includes work by Shao described below.)
Implementation of a realistic set of language features
Necula's PCC system for machine-language subroutines that operate on ML-like data structures included only traversal, not allocation or update. Allocation is significantly more difficult to reason about than traversal, but it is essential for a PCC system for a full-scale programming language. Other necessary programming-language features include dynamic function dispatch (to implement objects or function closures), tagged union data types, and program modules. We will develop safety conditions and proof techniques for these, in order to allow the use of real programming languages. (This research relates to the ML PCC component of Figure 2, which also includes work by Shao described below.)

An example of an allocation rule is as follows:
    If mem[p] is an int (the x field), and mem[p+4] is an int (the y field), then p is the address of a struct {int x; int y;}.

But to use this rule we also need rules about when we are allowed to store integers into unallocated memory, how unallocated memory becomes allocated memory, and so on. Necula's PCC system does not have enough of these rules to permit fully general allocation of data structures. This reasoning is as much about dataflow as it is about types, so a PCC system must be as much at home with flow analysis as with type-checking. We will specify and implement all the logic necessary for safe allocation and initialization.

Specification of higher-level security policies
A significant strength of PCC is the flexibility it allows in specifying safety policies. In contrast to virtual-memory protection -- which allows only read or read/write privileges on page-size units -- we can easily specify and implement policies such as this one:

    You may read or write any address in the range alow to ahigh, and you may access the object at address p1 according to the layout of the public fields of Java class X, and you may traverse the tree data structure (according to recursive type t2) at address p2. In addition you may call the functions at addresses f1 and f2 with arguments of the following types, .... Your add instructions must not overflow, you may execute no more than 150 consecutive instructions before polling the address apoll, and you may store a 0 into address s1 or s2 but not both.

Thus far, PCC has been proposed for the verification of memory safety, type safety, respect of private variables, and bounded resource consumption. We intend to work in those areas, as well as on the implementation in PCC of the higher-level security policies described below. (This research relates to the Security PCC component of Figure 2, which also includes work by Felten described below.)

Production-quality certifying compilers for real programming languages
PCC gains the most leverage when it is based on typed intermediate languages. Recent work by others at CMU and Cornell, and by us at Yale and Princeton, has produced practical compilers that propagate types in soundly axiomatized systems; these type systems can be used as the basis of safety proofs.

FLINT is a secure, efficient, language- and platform-independent representation for mobile code, under development by Shao and his students at Yale University. FLINT is intended as a typed common intermediate format for object-oriented languages, functional languages, and imperative languages. Unlike some previous efforts of this kind, FLINT takes advantage of recent developments in type theory, using a higher-order lambda calculus to encode the type systems of widely varying source languages. This allows type checking to take place at any phase of compilation, even after optimizations and register allocation. The TIL ML compiler of Harper and Morrisett is another excellent example of a type-preserving compiler, but FLINT is the only such system that can already accommodate full ML-style higher-order modules (with separate compilation), can compile efficiently within a full-scale production-quality compiler, and is designed from the beginning to accommodate multiple source languages (such as Java and Modula-3) and to interoperate smoothly with C. The current research on the TIL compiler (now named TILT) at CMU is still targeted at ML-like languages only, while our work on FLINT will emphasize support for widely varying source languages and efficient inter-language operation.
Figure 3: An overview of the FLINT system
Before describing our research plan, we first give an overview of the FLINT system. Figure 3 shows its top-level structure. The entire compiler is organized around a high-level intermediate language (i.e., high-level FLINT) and a low-level intermediate language (i.e., low-level FLINT). Programs written in various source languages are first fed into a language-specific ``Front End'' which does parsing, elaboration, type-checking, and pattern-match compilation; the source program is then translated into the high-level FLINT intermediate format. The language-independent middle end does a series of code optimizations and transformations (e.g., dataflow and loop optimizations, cross-module specializations, representation analysis, closure conversion) before translating it into low-level FLINT. Low-level FLINT is a platform-independent, strongly typed virtual machine language; it could serve as an excellent mobile executable content language.
A low-level FLINT program can be verified and executed in two ways. One is to verify its safety properties directly by running the FLINT verifier (see FLINT Verifier in Figure 3); the certified FLINT program can then be passed to the MLRISC code generator [Geo97] to generate native machine code. Alternatively, the FLINT program can be passed to a special code generator (with an embedded proof generator) to generate machine-specific proof-carrying code (type information in FLINT should be crucial for proof generation). The resulting PCC machine code can then be linked and checked (see PCC Checker in Figure 3) before being executed on the runtime execution engine.
Shao (in consultation with Appel) will carry out the following research related to FLINT: design of FLINT as a type-based PCC; certifying compiler front-ends for FLINT; FLINT virtual machine and runtime system; and safe and principled language interoperation.
Design of FLINT as a type-based PCC
FLINT is designed as a strongly typed common intermediate format for multiple advanced languages, based on the polymorphic lambda-calculus. We will evolve and extend FLINT based on the following principles:
- Strong and explicit typing. Type inference and static overloading can be left to the language-specific front end.
- Simple and well-defined semantics.
- Expressiveness, to support advanced features such as closures, ML-like polymorphism and modules, and Java-like classes and objects.
- Pay-as-you-go efficiency: programs that do not use a particular feature should not be implemented less efficiently simply because they may interact with programs that do use the feature.
- Suitability for standard program optimization.
- Support for system programming: primitives such as bit-manipulation, safe type-cast, dynamic types, safe marshalling, arbitrary checkpointing, and real-time memory management.
- Extensibility.
(This research relates to the Certifying Compiler and FLINT PCC components of Figure 2.)

We will pursue the following extensions to make FLINT into a truly language- and platform-independent mobile-code format:
- [Support for abstract datatypes.]
- Two different abstract types (e.g., queue and stack) could use the same concrete representation but exhibit very different runtime behaviors. Today's compilers enforce type abstraction only during typechecking (of the source program) but then translate abstract values into their concrete representations; the abstraction is lost during the back-end compilation! We believe that abstract types should be preserved and propagated all the way into the low-level FLINT language and the machine-level PCC; the specific property embedded inside each abstract type is often crucial for establishing advanced safety properties of the underlying mobile code. (A small illustration appears after this list.)
- [Extensible primitive types and primitive operations.]
- Most programming languages have a fixed set of built-in types and functions that are often treated specially by the compiler. For example, most compilers support strength reduction on integer operations. We plan to treat built-in types and functions as ordinary abstract datatypes annotated with a set of axioms that can be used by PCC technology. The axioms can also be used to automatically generate the code optimizer and to help the PCC prover establish new properties about the built-in types. Modeling built-in types as ADTs also makes FLINT more extensible---we do not need to choose a fixed set of primitive types for FLINT.
- [Full runtime type reflection with lightweight objects.]
- Sophisticated runtime services---dynamic linking and loading, safe program checkpointing, and marshalling arbitrary runtime objects (including checkpoints) into byte streams to send over the Internet---require the use of runtime type information and are often extremely hard to implement safely. We expect they will also be hard to certify. We plan to support these services directly inside the intermediate language (via program transformation and instrumentation). The key challenge is to achieve what we call full runtime type reflection: all types accessible at compile time must have a corresponding type dictionary accessible at runtime. Object-oriented languages attach a dictionary (or vtable) to each data object, but this is inefficient and unacceptable for statically typed languages such as ML and Safe C---what we want is a notion of lightweight object where dictionaries are not packaged inside each data value but rather accessed from the current runtime environment (on the stack or in the heap).
- [Full-fledged object-oriented features.]
- Typing object-oriented languages is an active research area, but most previous work has focused on theoretical languages rather than mainstream languages such as Java, with its explicit class hierarchy and dynamic class-loading. We plan to unify the hierarchy for lightweight objects with the Java class hierarchy, establishing a link between static values (i.e., lightweight objects, with no dictionaries packaged inside) and dynamic objects (with dictionaries packaged inside). This should facilitate safe interoperation between functional and object-oriented languages.
- [Control, effects, and other extensions.]
- The FLINT ``primitive'' library will include N-bit integers, N-bit words, ASCII and Unicode characters, N-bit floating-point numbers, strings, boolean types, boxed reference cells, arrays, packed arrays, vectors, packed vectors, mono arrays and mono vectors, ML-like immutable records, first-class continuations, control continuations (used by CML [Rep91]), suspensions (for lazy evaluation), and a unified exception model (to handle both ML-like exceptions and Java-like exceptions).
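The following Standard ML fragment illustrates the abstract-datatype point from the list above: two modules share the concrete representation int list yet behave very differently, and opaque ascription (:>) hides the representation at the source level. Our aim is to keep exactly this kind of abstraction visible in low-level FLINT and in machine-level PCC. (This is a source-level sketch; FLINT encodes such abstraction with existential types.)

    signature CONTAINER = sig
      type t
      val empty : t
      val put : int * t -> t
      val get : t -> (int * t) option
    end

    (* LIFO behavior *)
    structure Stack :> CONTAINER = struct
      type t = int list
      val empty = []
      fun put (x, s) = x :: s
      fun get [] = NONE
        | get (x :: s) = SOME (x, s)
    end

    (* same concrete representation, but FIFO behavior *)
    structure Queue :> CONTAINER = struct
      type t = int list
      val empty = []
      fun put (x, q) = q @ [x]
      fun get [] = NONE
        | get (x :: q) = SOME (x, q)
    end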
Certifying compiler front-ends for FLINT
The certifying compiler front-ends subtask comprises the front ends that translate ML, Java (and the Java Virtual Machine Language), and other surface languages into high-level FLINT; the code optimization and transformation phases that translate high-level FLINT into low-level FLINT; and the implementation of all extensions mentioned in the FLINT-design section.

A key problem for type-based certifying compilers is how to efficiently maintain and manipulate complete type information at compile time and at run time. It is crucial to optimize the run-time costs of representing and handling large types. We will look into various type-specialization and type-lifting techniques. We believe that the implementation of FLINT (or any PCC system) requires special effort to represent and manipulate types efficiently, and we will design and evaluate a series of new techniques to cut down the costs of handling large types.
Our current implementation of FLINT uses runtime type passing and intensional type analysis to support ML-like polymorphism. Previous work on such systems [HM95] relies on typecase, which does not scale well when there are many object types. We will instead use dictionary passing, which does not require modifying existing code when a new type is introduced. Dictionary passing has been used to implement Haskell type classes, but it has never been used to support ML-like parametric polymorphism [Ler92][Sha97].
With our plan to unify lightweight objects and regular heavyweight objects, we believe that dictionary passing can provide a clean, uniform, and efficient framework for both ML-like polymorphism and Java-like method invocation. Dictionary passing can incur large runtime costs, but our recent result on optimal type lifting [SS98] shows that all dictionaries can be constructed at link time.
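A deliberately tiny illustration of dictionary passing in Standard ML (the dictionaries our compiler passes carry representation information, not just equality, and per [SS98] they can all be constructed at link time; the names here are invented):

    (* a dictionary bundles the type-specific operations that a
       polymorphic function needs *)
    type 'a dict = { eq : 'a * 'a -> bool, toString : 'a -> string }

    (* polymorphic code consults the dictionary instead of
       performing a runtime typecase *)
    fun member (d : 'a dict) (x : 'a, ys : 'a list) : bool =
      List.exists (fn y => #eq d (x, y)) ys

    val intDict : int dict = { eq = (op =), toString = Int.toString }

    (* introducing a new type requires only a new dictionary;
       member itself is untouched *)
    val realDict : real dict = { eq = Real.==, toString = Real.toString }

For example, member intDict (3, [1, 2, 3]) evaluates to true, and adding realDict required no change to member or to any existing caller.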
Assuming FLINT will support Java-like class hierarchies, we plan to apply type-based techniques (developed in the context of ML) to object-oriented languages such as Java. In the meantime, there are numerous sophisticated techniques for optimizing object-oriented languages [Cha92][DDG+96]; we plan to see whether they can be incorporated into FLINT and how effective they are when applied to ML-like programs.
We will also develop safe and efficient marshalling and checkpointing tools via FLINT program transformations. Marshalling has been notoriously bug-prone and insecure, partly because language type systems cannot fully describe the structure of complex objects. The expressiveness of FLINT (compared to other mobile-code languages) is a big help in supporting safe and principled marshalling and checkpointing.
(This research relates to the Certifying Compiler and FLINT/Java-sub components of Figure 2.)
FLINT virtual machine and runtime system
The output of the front-end transformations is the strongly typed low-level FLINT language. This language can serve as an excellent, platform-independent mobile executable content language, just like the Java Virtual Machine Language (JVML, or ``bytecode''). FLINT, however, has several distinct advantages over JVML:
- JVML is Java-specific; FLINT is language-independent, allowing Java and FLINT applets to coexist (since JVML can be compiled into FLINT).
- Unlike JVML, FLINT has a clean and well-founded semantics so it is more trustworthy. Cutting-edge research on PCC and secure information flow can be more easily incorporated into FLINT.
FLINT will allow three different configurations for safe mobile code:
- A lightweight bytecode interpreter that can interoperate with compiled code. The FLINT low-level bytecodes can be verified for safety in a simple and principled way.
- A just-in-time compiler that translates (verified) bytecodes to native machine code. This JIT would be in the trusted computing base, but it is much simpler and more trustworthy than a Java JIT.
- Proof-carrying code: a compiler (operating at the code producer's site) that translates FLINT to native code with a proof of safety. This leads to the smallest TCB and probably the most efficient execution.

(This research relates to the Certifying Compiler, ML PCC, and Java-sub PCC components of Figure 2.)
Safe and principled language interoperation
Large software systems are often developed using multiple programming languages, whose features may interact in surprising ways. We will develop new type-based techniques [TS99] to support principled interoperation among languages with different features selected from: mutable store, exceptions, first-class continuations, garbage collection, and heap and stack allocation of activation records. Our framework will allow programs written in multiple languages with overlapping features to interact with each other safely and reliably, yet without restricting the expressiveness of each language. Our goal is to design an interoperability scheme that is:
- Safe: it should not be possible to violate the runtime safety of a language by calling a foreign function, even if this function is defined in a language with different features; and
- Efficient: a language implementation should not be forced to use suboptimal methods for its own features in order to support other languages' features. For instance, the implementation of a language that does not have exceptions should not have to know about the exception-handling mechanisms used in interoperating implementations of other languages.
We will formalize these requirements using a type system built upon previous work on effect systems [JG91][TJ92][TJ94]. We will relate effects to the machine resources they require. Since different language models provide different sets of resources, and the same effect may be possible with various resources, both an effect and a resource annotation are needed: the resource annotation indicates the specific requirements of a code fragment, while the effect annotation determines whether an alternative set of resources can be coerced to match these requirements.

The resulting system avoids the safety traps and allows efficient implementations to interoperate by restricting, in some cases, which foreign functions may be invoked, and by imposing conditions that the caller must satisfy before the invocation.
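Schematically (this notation is illustrative, not the actual system of [TS99]), a function type might carry both annotations:

    \tau_1 \xrightarrow{\;\varphi;\,\rho\;} \tau_2

where the effect set \varphi (e.g., \{\mathit{exn}\}) records what the function may do, and the resource set \rho records what its compiled code requires (e.g., an installed exception handler, or a GC-safe stack). A cross-language call type-checks only if the caller can supply resources matching \rho, or the effects in \varphi show that the caller's own resources can be coerced to match.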
(This research relates to the Certifying Compiler, ML PCC, and Java-sub PCC components of Figure 2.)
Security policies based on composable assurances
Felten (in consultation with Appel) will carry out research related to security policies for mobile code.

The goal of this research is to bring together ideas from proof-carrying code with ideas from authentication logics. These two sets of ideas complement each other well, since each can compensate for the weaknesses of the other. Proof-carrying code is implementable, but so far it cannot address high-level security policies; authentication logics handle high-level security policies well, but so far they have not been directly implementable. By bringing them together, we can build a system that is practical and handles sophisticated high-level security policies.
Current proof-carrying code technology does not attempt to establish high-level security properties, such as proper authorization of access to resources, but we will extend it to do so. Current research on authorization often models systems using a formal logic that includes axioms for authorization and delegation. We will apply the methods of proof-carrying code to such a logic. A code producer may attach to the code a set of digitally signed statements that certify that the code has certain properties, along with a proof that these statements may be legitimately combined to authorize certain actions.
Conventionally, authentication logics are used to reason about the properties of systems, but they are not used directly in building systems. The most expressive logics are undecidable: there is no way in general to determine whether a given set of assumptions implies a given conclusion. In practice, this means that a program receiving a request cannot determine whether the facts known to it imply that the request should be allowed. To get around this problem, most systems restrict themselves to a decidable subset of a logic, giving up much of the power of logic-based design in the process. For example, the authentication mechanism in the Taos operating system [LABW92], whose design is based on an expressive logic, restricts itself to a much less expressive, but decidable, subset of the logic. Similarly, our work on stack inspection in Java [WF98] uses a different decidable subset of the same logic.
Using the proof-carrying code approach allows us to use an expressive logic, while making access control decisions tractable. Making such a decision will no longer require proving a theorem; it will require only verifying a proof, which is much easier. The proof will be generated by the requestor, who presumably has more knowledge about why the request is permissible.
We envision a high-level security logic that shares some properties with the SDSI security infrastructure [RL] designed by Rivest and Lampson. Like SDSI, we will use decentralized naming rather than distinguished names, and periodic reconfirmation rather than revocation. However, our use of logic as a building block makes many of the more complicated constructs and rules of SDSI unnecessary. We believe a logic with a small number of security-specific axioms and inference rules can be considerably more expressive than SDSI.
Security policies are often written in terms of properties of programs that cannot be entirely proven from the bottom up, as proof-carrying code does. For example, a user interface component might need to have the property ``never pops up a misleading dialog box.'' Properties like this cannot be proven but may be accepted as true because of assurances made by the (trusted) code producer, or by a trusted third party because of the result of a security audit or some other certification process. Such assurances can be encoded as digitally signed statements in a logic. If the signer is sufficiently trusted, the statement may be accepted as true.
When a program is composed of separately developed components, assurances about one component can be dependent on properties of other components that it uses. For example, the security of a cryptographic key generator might depend on properties of the random number generator that it is linked with. Such dependences can easily be encoded in logic; for example, the author of the key generator might sign an assurance saying ``If random-generator has property P, then key-generator has property Q.'' A code producer offering these two components would then need to establish the properties of the random-generator, in order to prove that the key-generator is secure.
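In the logic sketched in the next section, such an assurance might be rendered (illustratively) as a signed conditional statement:

    K_{\mathit{producer}}\ \mathbf{says}\ \big(P(\mathit{random\text{-}generator}) \supset Q(\mathit{key\text{-}generator})\big)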
Our work in this area will proceed in three phases. First, in the Security Logic subtask, we will specify a set of logical axioms and inference rules sufficient to describe a complete system for distributed authentication and access control. Second, in the Security PCC task, we will implement this specification, using the proof-carrying code infrastructure developed by the Prover/Checker subtask; the result will be a working infrastructure for distributed authentication and access control. Finally, in the ML+Security PCC task, we will integrate this infrastructure with the ML proof-carrying code system, to produce a system that allows high-level security policies to be enforced between modules of the same program, as well as between separate programs.
The security logic
To make the discussion of the security logic more concrete, we will now sketch several features that we envision for the logic. Our discussion here is informal; the security logic will of course be defined more formally, and a few of the rules are written out in logical notation after the list. The full security logic will be defined in the Security Logic subtask. It will be implemented in the Security PCC subtask, and integrated with the ML PCC compiler in the ML+Security PCC task.
- [The says operator]
- models any statement made by a principal. A statement can be either a message sent by the principal, or any other logical formula that we can infer the principal believes. Of course, a principal may lie by choosing to say something that is false. Following Abadi et al. [ABLP93], we will assume that if a principal says both ``A'' and ``A implies B'', we can infer that the principal says ``B''; we will also assume that every principal says every theorem of the logic.
- [Delegation]
- lets one principal authorize another to make (possibly limited) statements on the first principal's behalf. This is a very general notion that can be used to support granting of rights to another principal, group membership, voting, and cryptographic certificates. Delegation does not require any new operators or axioms; a principal can delegate to another by making certain kinds of ordinary statements. For example, suppose that principal A says ``For all S, ((B says S) and pred(S)) implies S''. Now if B makes a statement T such that pred(T) is true, then we can infer that A says T. In other words, A's delegation gave B the right to make a limited class of statements (those satisfying pred) on A's behalf.
- [Namespaces]
- allow each principal to bind certain names to the identities of other principals. Statements can refer to the values of names in any principal's namespace. This follows SDSI's naming policy: there is no distinguished root namespace, but every principal acts as a local root. Of course, we can also model a single global namespace by designating one special principal as the root, and establishing a convention of naming principals relative to the root.
- [Time]
- is modeled in order to allow expiration and renewal of certificates, as well as time-dependent security policies. Time is modeled by defining a time datatype admitting comparison and arithmetic operators, and a function now() which is defined to always return the current time. This allows principals to make time-dependent statements. Although we do not elaborate here, we will support the notion of clock skew between principals.
- [Cryptographic operations]
- can be represented in the logic by adding primitives such as signed-with, encrypted-with, and hash-of, and then adding axioms such as ``(S signed-with K) implies (K says S)''. Cryptography thus allows keys to make statements. Digital certificates are delegations that allow keys to speak on behalf of principals; for example, Alice's digital certificate says that Alice's key can speak on behalf of Alice.
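To fix intuitions, here is one plausible rendering of three of the constructs above in logical notation (illustrative only; the actual axiomatization will be fixed in the Security Logic subtask):

    \frac{A\ \mathbf{says}\ S \qquad A\ \mathbf{says}\ (S \supset T)}{A\ \mathbf{says}\ T}
    \qquad
    \frac{S\ \mathbf{signed\text{-}with}\ K}{K\ \mathbf{says}\ S}

    A\ \mathbf{says}\ \big(\forall S.\ ((B\ \mathbf{says}\ S) \wedge \mathit{pred}(S)) \supset S\big)

The first rule closes says under modus ponens; the second lets cryptographic keys make statements; the third is the delegation statement from the list, by which A grants B the right to make, on A's behalf, any statement satisfying pred.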
Conclusion of technical approach
Authentication logics, certifying compilers, and proof-carrying code are the right technologies for providing security for hosts that run mobile code, and they fit together naturally. We will combine them, engineer them, and demonstrate that they can be scaled up for practical applications. We have the expertise and the right software base to do this.

Comparison with Other Research
This section compares our approach with other ongoing efforts. It is organized around the three major themes of the proposal.

Proof-carrying code and the Fox project
Parts of our proposed research are inspired by the work conducted by Robert Harper and Peter Lee at Carnegie Mellon University. Their project has focused on the use of Standard ML for systems software and the use of typed intermediate languages in compilers to allow high-assurance typechecking at late stages of compilation; they also developed the concept of proof-carrying code.

This is an excellent and productive project. Their initial decision to use Standard ML was made possible by our own software system, Standard ML of New Jersey. Not only did SML/NJ's design show that production-quality compilers could be directly based on programming language semantics, but the software itself is robust, efficient, well documented, and free of license restrictions, so that it can be used by large projects such as their FoxNet system.
The work they have done in typed intermediate languages [HM95][Mor95a][TMC+96] is a major scientific advance, and has inspired our recent work in SML/NJ, namely the FLINT typed intermediate representation. But we have designed FLINT to make the intermediate representation sufficiently general to support not just ML but a wide variety of other programming languages such as Java and C, and with attention to engineering issues [SLM98], so that FLINT already serves as the basis for a widely distributed compiler used by hundreds of users for large-scale software development. Furthermore, our SML/NJ system supports a comprehensive and useful standard library of operating-system interface modules for Unix and Microsoft Windows, comes with profilers and compilation management tools, and supports many hardware architectures and operating-system platforms.
Necula and Lee's development of proof-carrying code [Nec97][Nec98] is also a significant theoretical achievement. As we have explained in the body of our proposal, we believe this concept can be applied in new domains (i.e., implementation of security/authentication policies for distributed computation and mobile code) and engineered for application to full-scale programming languages and language-independent code consumers.
Morrisett [MWCG98] at Cornell University has developed a typed assembly language, which allows a certifying compiler to produce native code with type annotations that can be checked to ensure safety. Typed assembly language (TAL) is less general than proof-carrying code, since proofs can be based both on types and on dataflow properties of the program. But Morrisett's TAL can be used as one of the proof techniques in a PCC system, and we intend to use this technique where appropriate.
Type-based certifying compilers
Common intermediate formats have been an active research area. Many existing compilers, such as GNU's GCC, Stanford's SUIF [HAA+96], and the University of Washington's Vortex [DDG+96], use some kind of shared intermediate representation for multiple source languages. In addition, the C programming language has long served as a de facto common intermediate format. Unfortunately, none of these intermediate languages supports strong static typing, nor do they support advanced languages such as ML.

One example of a typed intermediate format is the Java VM Language (JVML) [LY97]. Like FLINT, Java bytecode can be statically type-checked, though its type system is not defined as formally as the F-omega calculus. Because Java bytecode was originally designed for Java only, it does not directly support advanced language features such as higher-order functions and polymorphic typing.
Typed intermediate languages have received much attention lately, especially in the ML community. Several ML compilers [Ler92][SA95][TMC+96] maintain explicit type information inside their intermediate languages. However, none of them has tried to extend the intermediate language to support multiple advanced languages. Nor have they seriously addressed the problems of how to scale up to handle large types, how to support efficient run-time type passing, and how to support efficient and principled language interoperation.
The use of the polymorphic lambda-calculus (F-omega) to model the type-theoretic semantics of Standard ML was pioneered by Harper and Mitchell [HM93]. Their XML calculus also includes dependent types to characterize ML module constructs. Harper and Morrisett [HM95][Mor95b] later proposed using a predicative variant of F-omega extended with typerec to compile ML-like polymorphism. Recently, Harper and Stone [HS96] gave a new type-theoretic account of the entire SML'97; the internal language they use still contains a separate module calculus and translucent types. The FLINT language, on the other hand, uses a much simpler calculus (plain F-omega with existentials) [Sha98] to support SML'97 with higher-order modules. Intensional type analysis in FLINT is supported through dictionary passing (with optimal type lifting [SS98]) rather than runtime type inspections [HM95].
Typing object-oriented languages has been an active research area in the past decade. Most recent work has focused on specifying the formal semantics and proving the soundness of various aspects of Java and JVML [DE97][Qia98][JvdB98][SA98][NvO98]. These approaches are important building blocks for understanding the formal underpinnings of Java, but the calculi are formulated for convenience of proving theorems rather than for compiler optimization and code generation. They preserve the name-based class hierarchy in Java, but use implicit self parameters and treat method invocation as primitive.
More general object calculi (as in [AC96], for example) are, for the same reasons, generally not suitable for use as intermediate representations. Moreover, since they were not formulated specifically for Java, they tend to use structural subtyping rather than name-based subclassing. They do, however, support fancier object-oriented features than required by Java.
Encodings of object-oriented features using typed lambda-calculi [BCP96][FM98a] are somewhat more relevant, since our own typed intermediate language is based on the polymorphic lambda calculus. These encodings, however, do not encode Java-like name-based hierarchies. Furthermore, the implicit subtyping typically used in these calculi can make them rather complex, in some cases rendering type checking undecidable. We want a more conservative extension, less powerful than the more general encodings, but well tuned to support fast type-checking, an explicit name-based class hierarchy, and efficient implementation.
High-level security policies
Several distributed security infrastructures have based their designs on formal logics. For example, the distributed authentication and access control mechanisms in the Taos operating system [LABW92] are based on the authentication logic of Abadi, Burrows, Lampson, and Plotkin [ABLP93]. Systems like Taos have been limited to a decidable subset of the logic, which significantly reduces the expressiveness of the system.

Our previous work [WF98] on understanding new Java security architectures used the same logic to model the interactions between software components that are linked together into a single program. Though this work suggested the possibility of modeling inter-program and intra-program interactions using a unified framework, it did not suggest the direct use of logic as a vehicle for building a unified implementation, as we are doing.
Systems such as PolicyMaker [BFL96] allow security programmers to express complex predicates that are checked before allowing access. However, these predicates are written in an imperative programming language, so the party receiving the request must know how to use the facts known to it to determine whether to allow an access. Applying the proof-carrying code idea to access control, as we are doing, greatly simplifies the verifier's job and reduces the trusted computing base.
In recent years, several distributed security infrastructures have been proposed, including SDSI [RL], SPKI [Ell96], and the security architectures of distributed programming systems like CORBA [Sie96] and DCE [Fou92]. None of these systems uses an expressive logic as a building block, so each is less expressive than we would like.
Key Personnel
Andrew Appel, Principal Investigator
Andrew Appel is Professor of Computer Science at Princeton University. He is a designer of the Standard ML of New Jersey (SML/NJ) compiler, a very widely used compiler for the functional language ML. It is one of the first high-performance optimizing compilers to be explicitly based on semantic principles, and has been distributed freely (with source code) to universities and industrial users for a decade. Many academic and industrial research projects, and some industrial applications, are implemented using ML and compiled using SML/NJ.

Appel is a co-PI on the DARPA-sponsored National Compiler Infrastructure (NCI) project; he designed the Abstract Syntax Description Language (ASDL) toolkit, allowing compiler components (front-ends, back-ends, optimizers) to interoperate cleanly even if implemented in different programming languages, and the SUIF-to-VPO bridge, which uses ASDL to connect the Stanford SUIF front-end to the Virginia VPO back-end.
He is the author of a leading textbook on compilers, Modern Compiler Implementation in Java (also ... in C and ... in ML), as well as a graduate text on functional-language implementation entitled Compiling with Continuations, and over forty papers published in competitive journals and conferences. He has wide expertise in programming-language design and implementation. He is particularly interested in how programming languages and environments can efficiently support safe and well-structured engineering of modular software.
Zhong Shao, Co-PI
Zhong Shao is Assistant Professor of Computer Science at Yale University, author of many key compilation phases used in SML/NJ, and designer of the new, higher-order typed intermediate representation (named FLINT) now used in this compiler. Shao was one of the first to build a type-based intermediate representation in a functional-language compiler; typed intermediate languages are now a fundamental part of the technology of proof-carrying code, and Shao is an expert in the relevant type theory and its practical application to production compilers.

Shao is a co-PI on the DARPA-sponsored EDCS project on Software Evolution using HOT Language Technology; he designed and developed the first production-quality type-preserving compiler for the entire Standard ML 1997 language extended with higher-order modules. The resulting compiler has been incorporated into SML/NJ and is now used extensively by many researchers worldwide.
Shao currently leads the Yale FLINT project, which develops state-of-the-art systems software (compiler infrastructure, runtime environment, and operating system) for advanced type-safe languages (e.g., ML, Java). He received an NSF CAREER Award (1995--1998) and has served on the program committees of the most prestigious ACM conferences on compilers and programming languages (POPL'96 and PLDI'99).
Edward Felten, Co-PI
Edward Felten is Associate Professor of Computer Science at Princeton University. He is an expert in operating systems, distributed systems, and system security. He has built high-performance distributed file systems and designed the software architecture for the SHRIMP parallel machine. He is also well known for his analysis of security problems in the design of Java mobile code, and for new security architectures to solve these problems. His most recent work uses logic to model the newest security features of Java and to optimize their implementation. He is the coauthor of the leading book on Java security. Felten is the director of the Secure Internet Programming Laboratory at Princeton, and is a co-PI on the DARPA-sponsored SHRIMP project.
Bibliography
- [ABLP93]
- Martin Abadi, Michael Burrows, Butler Lampson, and Gordon D. Plotkin. A calculus for access control in distributed systems. ACM Trans. on Programming Languages and Systems, 15(4):706--734, September 1993.
- [AC96]
- Martin Abadi and Luca Cardelli. A Theory of Objects. Springer-Verlag, New York, 1996.
- [BCP96]
- Kim B. Bruce, Luca Cardelli, and Benjamin C. Pierce. Comparing object encodings. In Proc. Third Workshop on Foundations of Object Oriented Languages, July 1996.
- [BFL96]
- Matt Blaze, Joan Feigenbaum, and Jack Lacy. Decentralized trust management. In Proc. of 17th IEEE Symposium on Security and Privacy, pages 164--173, May 1996.
- [Cha92]
- Craig Chambers. The Design and Implementation of the SELF Compiler, an Optimizing Compiler for Object-Oriented Programming Languages. PhD thesis, Stanford University, Stanford, California, March 1992.
- [DDG+96]
- Jeffrey Dean, Greg DeFouw, David Grove, Vassily Litvinov, and Craig Chambers. Vortex: An optimizing compiler for object-oriented languages. In Proc. ACM SIGPLAN '96 Conf. on Object-Oriented Programming Systems, Languages, and Applications, pages 83--100, New York, October 1996. ACM Press.
- [DE97]
- Sophia Drossopoulou and Susan Eisenbach. Is the Java type system sound? In Proc. Fourth Workshop on Foundations of Object Oriented Languages, January 1997.
- [Ell96]
- Carl M. Ellison. Establishing identity without certification authorities. In Proc. of 6th USENIX Security Symposium, July 1996.
- [FM98a]
- Kathleen Fisher and John Mitchell. On the relationship between classes, objects, and data abstraction. Theory and Practice of Object Systems, 4(1):3--25, January 1998.
- [FM98b]
- Stephen N. Freund and John C. Mitchell. A type system for object initialization in the Java bytecode language. In Proc. ACM SIGPLAN 1998 Conf. on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA), pages 310--327, Vancouver, October 1998. ACM Press.
- [Fou92]
- Open Software Foundation. Introduction to OSF DCE. Prentice Hall, Englewood Cliffs, New Jersey, 1992.
- [Geo97]
- Lal George. MLRISC: Customizable and reusable code generators. Technical memorandum, AT&T Bell Laboratories, Murray Hill, NJ, 1997.
- [HAA+96]
- M.W. Hall, J.M. Anderson, S.P. Amarasinghe, B.R. Murphy, S.W. Liao, E. Bugnion, and M.S. Lam. Maximizing multiprocessor performance with the SUIF compiler. IEEE Computer, December 1996.
- [HM93]
- Robert Harper and John C. Mitchell. On the type structure of Standard ML. ACM Trans. on Programming Languages and Systems, 15(2):211--252, April 1993.
- [HM95]
- Robert Harper and Greg Morrisett. Compiling polymorphism using intensional type analysis. In Twenty-second Annual ACM Symp. on Principles of Prog. Languages, pages 130--141, New York, Jan 1995. ACM Press.
- [HS96]
- Robert Harper and Chris Stone. A type-theoretic account of Standard ML 1996 (version 2). Technical Report CMU-CS-96-136R, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, September 1996.
- [JG91]
- Pierre Jouvelot and David K. Gifford. Algebraic reconstruction of types and effects. In Eighteenth Annual ACM Symp. on Principles of Prog. Languages, pages 303--310, New York, Jan 1991. ACM Press.
- [JvdB98]
- Bart Jacobs and Joachim van den Berg. Reasoning about Java classes (preliminary report). In Proc. ACM SIGPLAN 1998 Conf. on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA), pages 329--340, Vancouver, October 1998. ACM Press.
- [LABW92]
- Butler Lampson, Martin Abadi, Michael Burrows, and Edward Wobber. Authentication in distributed systems: Theory and practice. ACM Transactions on Computer Systems, 10(4):265--310, November 1992.
- [Ler92]
- Xavier Leroy. Unboxed objects and polymorphic typing. In Nineteenth Annual ACM Symp. on Principles of Prog. Languages, pages 177--188, New York, Jan 1992. ACM Press. Longer version available as INRIA Tech Report.
- [LY97]
- Tim Lindholm and Frank Yellin. The Java Virtual Machine Specification. Addison-Wesley, 1997.
- [Mor95a]
- Greg Morrisett. Compiling with Types. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, December 1995.
- [Mor95b]
- Greg Morrisett. Compiling with Types. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, December 1995. Tech Report CMU-CS-95-226.
- [MWCG98]
- Greg Morrisett, David Walker, Karl Crary, and Neal Glew. From System F to typed assembly language. In Proc. 25th Annual ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages, pages 85--97. ACM Press, 1998.
- [Nec97]
- George Necula. Proof-carrying code. In Twenty-Fourth Annual ACM Symp. on Principles of Prog. Languages, New York, Jan 1997. ACM Press.
- [Nec98]
- George Ciprian Necula. Compiling with Proofs. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, September 1998.
- [NL97]
- George Necula and Peter Lee. Efficient representation and validation of logical proofs. Technical Report CMU-CS-97-xxx, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, January 1997.
- [NvO98]
- Tobias Nipkow and David von Oheimb. Java light is type-safe --- definitely. In Proc. 25th ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages (POPL'98), San Diego, January 1998. ACM Press.
- [Qia98]
- Zhenyu Qian. A formal specification for Java Virtual Machine instructions for objects, methods, and subroutines. Technical report, Bremen Institute for Safe Systems, 1998. To appear in Jim Alves-Foss (ed.) ``Formal Syntax and Semantics of Java,'' Springer-Verlag LNCS, 1998.
- [Rep91]
- John H. Reppy. CML: A higher-order concurrent language. In Proc. ACM SIGPLAN '91 Conf. on Prog. Lang. Design and Implementation, pages 293--305. ACM Press, 1991.
- [RL]
- Ronald L. Rivest and Butler Lampson. SDSI --- a simple distributed security infrastructure.
- [SA95]
- Zhong Shao and Andrew W. Appel. A type-based compiler for Standard ML. In Proc. ACM SIGPLAN '95 Conf. on Prog. Lang. Design and Implementation, pages 116--129, New York, 1995. ACM Press.
- [SA98]
- Raymie Stata and Martin Abadi. A type system for Java bytecode subroutines. In Proc. 25th ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages (POPL'98), pages 149--160. ACM Press, January 1998.
- [Sha97]
- Zhong Shao. Flexible representation analysis. In Proc. 1997 ACM SIGPLAN International Conference on Functional Programming (ICFP'97), pages 85--98, New York, June 1997. ACM Press.
- [Sha98]
- Zhong Shao. Typed cross-module compilation. In Proc. 1998 ACM SIGPLAN International Conference on Functional Programming (ICFP'98), pages 141--152. ACM Press, September 1998.
- [Sie96]
- Jon Siegel, editor. CORBA Fundamentals and Programming. John Wiley and Sons, New York, 1996.
- [SLM98]
- Zhong Shao, Christopher League, and Stefan Monnier. Implementing typed intermediate languages. In Proc. 1998 ACM SIGPLAN International Conference on Functional Programming (ICFP'98), pages 313--323. ACM Press, September 1998.
- [SS98]
- Bratin Saha and Zhong Shao. Optimal type lifting. In Xavier Leroy and Atsushi Ohori, editors, Proc. 1998 International Workshop on Types in Compilation: LNCS Vol 1473, pages 156--177, Kyoto, Japan, March 1998. Springer-Verlag.
- [TJ92]
- Jean-Pierre Talpin and Pierre Jouvelot. Polymorphic type, region, and effect inference. Journal of Functional Programming, 2(3), 1992.
- [TJ94]
- Jean-Pierre Talpin and Pierre Jouvelot. The type and effect discipline. Information and Computation, 111(2):245--296, June 1994.
- [TMC+96]
- D. Tarditi, G. Morrisett, P. Cheng, C. Stone, R. Harper, and P. Lee. TIL: A type-directed optimizing compiler for ML. In Proc. ACM SIGPLAN '96 Conf. on Prog. Lang. Design and Implementation, pages 181--192. ACM Press, 1996.
- [TS99]
- Valery Trifonov and Zhong Shao. Safe and principled language interoperation. In Proceedings of the European Symposium on Programming, Amsterdam, March 1999.
- [WF98]
- Dan S. Wallach and Edward W. Felten. Understanding Java stack inspection. In Proc. of 19th IEEE Symposium on Security and Privacy, pages 52--63, May 1998.