Prof. Martin C. Rinard
Professor of Computer Science and Engineering
Primary DLC
Department of Electrical Engineering and Computer Science
MIT Room: 32-G828
(617) 258-6922
rinard@csail.mit.edu
Areas of Interest and Expertise
Compiler Support for Object-Oriented Computations on Parallel and Distributed Computing Platforms
Hardware Compilation
Oxygen Project
Compilers
Program Analysis
Reconfigurable Computing
Self-Healing Software Systems
Robust Software Systems
Functional Genomics
Cybersecurity
Software Engineering
Computer Systems
Programming Languages
Program Verification
Real-Time Systems
Embedded Systems
Distributed Systems
Parallel Systems
Research Summary
Much of Professor Rinard's current research focuses on two areas: first, how to build secure, robust systems; second, the tradeoffs between the resources (time, power, etc.) that a computation consumes and the accuracy of the result it produces.
(*) Reasoning About Approximate Computations -- Many computations are free to produce any of a range of results within reasonable accuracy bounds. It is often possible to exploit this freedom to improve performance, reduce resource consumption, and give systems the ability to execute at a variety of points in an underlying accuracy versus performance tradeoff space. The group has developed several techniques for reasoning about approximate computations and about transformations of approximate computations.
(*) Performance, Power, and Accuracy Tradeoffs -- Many computations have an inherent tradeoff between accuracy, performance, and power: if the client of the computation is willing to accept a slightly less accurate result, it is often possible to adapt the computation to deliver that result while consuming substantially less power or compute time. Most implementations of these computations, however, operate at only one point in this underlying tradeoff space. In many cases the implementor is not even aware that the tradeoff space exists and that the computation can operate successfully at a variety of points within it.
Rinard's research group has developed a variety of techniques that automatically discover and exploit tradeoffs between accuracy, performance, and power. These techniques use rich program transformations (transformations that may change the semantics of the program) to induce a search space of programs surrounding the original program, and then search this space for programs with desirable accuracy, performance, and power consumption characteristics.
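As a minimal illustration of this idea (a hypothetical sketch, not code from the group's systems; the class and method names are invented), the Java program below skips a fraction of a loop's iterations to trade accuracy for running time. Each skip factor k is one point in the accuracy versus performance tradeoff space.

// Hypothetical sketch: a "perforated" reduction that skips iterations to
// trade accuracy for running time. Not taken from the group's systems.
public class PerforatedSum {

    // Exact version: visits every element.
    static double exactMean(double[] data) {
        double sum = 0.0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i];
        }
        return sum / data.length;
    }

    // Perforated version: visits only every k-th element and rescales.
    // Larger k means less work and, typically, a less accurate result.
    static double perforatedMean(double[] data, int k) {
        double sum = 0.0;
        int visited = 0;
        for (int i = 0; i < data.length; i += k) {
            sum += data[i];
            visited++;
        }
        return sum / visited;
    }

    public static void main(String[] args) {
        double[] data = new double[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = Math.sin(i);
        }
        System.out.printf("exact mean = %.6f%n", exactMean(data));
        // Each perforation factor is one point in the tradeoff space.
        for (int k : new int[] {2, 4, 8}) {
            System.out.printf("k=%d mean = %.6f%n", k, perforatedMean(data, k));
        }
    }
}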
(*) Robust, Secure Software -- Software systems are often perceived to be fragile and brittle, vulnerable to security exploits and to errors that crash the system. Rinard's research group has developed a set of techniques that make software systems more robust, resilient, reliable, and secure by enabling them to survive a range of otherwise fatal errors. The overall goal is to have the software survive any error, continue to execute, and provide acceptable service to its users. One particularly important application of this technology is to eliminate security vulnerabilities such as errors that enable remote code injection attacks.
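Purely to illustrate the idea of surviving an otherwise fatal error and continuing to provide service (a hypothetical sketch, not the group's actual techniques), the Java program below intercepts out-of-bounds reads and manufactures a benign default value instead of letting the error terminate the program.

// Hypothetical sketch of the "survive the error and keep executing" idea:
// an out-of-bounds read is intercepted and a default value is returned,
// so the request loop keeps serving instead of crashing.
public class SurviveErrorDemo {

    // Without the check, an out-of-bounds index would throw and could take
    // the whole service down.
    static int safeRead(int[] buffer, int index) {
        if (index < 0 || index >= buffer.length) {
            return 0;   // manufacture a benign value instead of failing
        }
        return buffer[index];
    }

    public static void main(String[] args) {
        int[] buffer = {10, 20, 30};
        int[] requestedIndices = {0, 2, 7, -1, 1};   // 7 and -1 are erroneous

        for (int index : requestedIndices) {
            // The program continues to provide (possibly degraded) service.
            System.out.println("index " + index + " -> " + safeRead(buffer, index));
        }
    }
}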
(*) Program Verification -- Rinard's research group has developed two systems for verifying sophisticated properties of linked data structures.
(1) The Hob System: Hob addresses both internal consistency properties of linked data structures (such as the consistency of the links in a tree or list) and consistency properties that relate multiple data structures (for example, that the set of objects in one data structure is a subset of the set of objects in another). Hob uses sets of abstract objects in its specifications, so it can model sophisticated typestate properties of objects as well as important data structure consistency constraints. One particularly noteworthy aspect of Hob is its modular design: it includes a variety of techniques that enable modular specification and verification of data structures, including the use of pluggable, arbitrarily sophisticated program analysis and verification systems.
(2) The Jahob System: Jahob is a comprehensive program specification and verification system. It uses integrated reasoning to solve verification problems: it combines a broad collection of automated reasoning systems (with varying performance, reasoning power, and scope characteristics) that work together within the Jahob framework to discharge the automatically generated verification conditions. Jahob has been used to verify the full functional correctness of a collection of linked data structures. Full functional correctness means that the system verifies every property (with the exception of resource consumption properties such as running time and memory consumption) that a client relies on to use the data structure correctly.
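Jahob verifies such properties statically, in its own specification notation. As a much-simplified, hypothetical stand-in, the Java sketch below states the kind of property involved (after add(x), the abstract set of elements equals the old set plus x) as a runtime assertion; run it with java -ea to enable the assertion.

import java.util.HashSet;
import java.util.Set;

// Hypothetical, much-simplified stand-in for the kind of property a system
// like Jahob verifies statically: after add(x), the abstract set of list
// elements equals the old set plus x. Here the property is only checked at
// runtime with an assertion (enable with java -ea).
public class CheckedList {
    private static class Node {
        final int value;
        final Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    private Node head;

    // Abstraction function: the set of values reachable from head.
    Set<Integer> contents() {
        Set<Integer> set = new HashSet<>();
        for (Node n = head; n != null; n = n.next) {
            set.add(n.value);
        }
        return set;
    }

    void add(int x) {
        Set<Integer> before = contents();   // old abstract state
        head = new Node(x, head);           // concrete operation
        Set<Integer> after = contents();    // new abstract state

        // Postcondition: new contents = old contents union {x}
        Set<Integer> expected = new HashSet<>(before);
        expected.add(x);
        assert after.equals(expected) : "postcondition violated";
    }

    public static void main(String[] args) {
        CheckedList list = new CheckedList();
        list.add(1);
        list.add(2);
        System.out.println(list.contents());
    }
}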
(*) Analysis and Optimization of Multithreaded Programs -- The key problem in analyzing multithreaded programs is the need to characterize the effect of all potential interleavings of statements from parallel threads. Rinard's research group pioneered a compositional approach to this problem in which the algorithm analyzes each thread once to obtain a result that characterizes all potential interactions between that thread and other parallel threads. The group has used this approach to develop a variety of analyses for different kinds of multithreaded programs.
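Even the smallest shared-memory program shows why interleavings are the central difficulty. In the hypothetical sketch below, two threads perform unsynchronized read-modify-write increments on a shared counter, so updates can be lost depending on how their statements interleave; a compositional analysis must characterize such interactions without enumerating the interleavings one by one.

// Minimal illustration of why interleavings matter: two threads perform
// unsynchronized read-modify-write increments, so updates can be lost
// depending on how the statements from the two threads interleave.
public class InterleavingDemo {
    static int counter = 0;   // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;    // read, add, write: not atomic
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Frequently prints less than 200000 because increments interleave badly.
        System.out.println("counter = " + counter);
    }
}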
(*) Analysis and Optimization of Divide and Conquer Programs -- Divide and conquer programs present an imposing set of analysis challenges. The natural formulation of these programs is recursive. For efficiency, they often use dynamic memory allocation to match the sizes of the data structures to the problem size, and they often use pointers into arrays and pointer arithmetic to identify subproblems. Traditional analyses from the field of parallelizing compilers are of little or no help for this class of programs -- they are designed to analyze loop nests that access dense matrices using affine array index expressions, not recursive procedures that use pointers and pointer arithmetic.
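For concreteness, the hypothetical sketch below has the shape this paragraph describes: a recursive divide and conquer reduction over explicit subranges of a dynamically allocated array. In C, the subproblems would typically be identified with pointers and pointer arithmetic rather than index pairs.

// Hypothetical sketch of the program shape described above: a recursive
// divide and conquer reduction over subranges [lo, hi) of an array whose
// size is chosen at run time. The two recursive calls touch disjoint
// subranges, which is exactly what an analysis must establish to run them
// in parallel.
public class DivideAndConquerSum {

    static long sum(int[] a, int lo, int hi) {
        if (hi - lo <= 1) {
            return (hi > lo) ? a[lo] : 0;             // base case
        }
        int mid = lo + (hi - lo) / 2;                 // divide
        return sum(a, lo, mid) + sum(a, mid, hi);     // conquer: independent halves
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] a = new int[n];                         // dynamically allocated
        for (int i = 0; i < n; i++) {
            a[i] = i % 7;
        }
        System.out.println(sum(a, 0, n));
    }
}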
(*) Synchronization Optimizations -- Rinard's research group has investigated a variety of ways of eliminating synchronization overhead in multithreaded and automatically parallelized programs. These optimizations eliminate mutual exclusion synchronization, reduce its frequency, replace it with more efficient optimistic synchronization primitives, and replicate objects to eliminate the need for synchronization.
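One of the replacements mentioned above, substituting optimistic synchronization for mutual exclusion, can be shown in miniature. The hypothetical sketch below contrasts a lock-based counter with a version that retries a compare-and-swap instead of holding a lock; it illustrates the idea rather than reproducing the group's optimizations.

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative miniature of one optimization mentioned above: replacing
// mutual-exclusion synchronization with an optimistic compare-and-swap loop.
public class SyncOptimizationDemo {

    // Before: mutual exclusion around the update.
    static class LockedCounter {
        private int value = 0;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    // After: optimistic synchronization; no lock is held, and the update
    // simply retries if another thread raced in between.
    static class OptimisticCounter {
        private final AtomicInteger value = new AtomicInteger(0);
        void increment() {
            int old;
            do {
                old = value.get();                        // read
            } while (!value.compareAndSet(old, old + 1)); // retry on conflict
        }
        int get() { return value.get(); }
    }

    public static void main(String[] args) {
        LockedCounter locked = new LockedCounter();
        OptimisticCounter optimistic = new OptimisticCounter();
        locked.increment();
        optimistic.increment();
        System.out.println(locked.get() + " " + optimistic.get());   // 1 1
    }
}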
(*) Automatic Parallelization -- Traditional parallelizing compilers use data dependence analysis to parallelize loop nests that access dense matrices using affine access functions. While this approach works well in its target domain, it is not designed for irregular, object-based programs whose parallel tasks may execute many different procedures.
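For readers unfamiliar with the terminology, the hypothetical loop nest below is the kind of code in the traditional target domain: every array subscript is an affine function of the loop indices, so data dependence analysis can prove the iterations independent and parallelize them.

// Hypothetical example of the traditional target domain: a dense-matrix
// loop nest whose subscripts (i and j) are affine functions of the loop
// indices, so dependence analysis can prove that no two iterations touch
// the same element and the nest can be parallelized.
public class AffineLoopNest {
    public static void main(String[] args) {
        int n = 4;
        double[][] a = new double[n][n];
        double[][] b = new double[n][n];
        double[][] c = new double[n][n];

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                a[i][j] = i + j;
                b[i][j] = i * j;
            }
        }

        // Each iteration writes only c[i][j] and reads only a[i][j], b[i][j].
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                c[i][j] = a[i][j] + b[i][j];
            }
        }
        System.out.println(java.util.Arrays.deepToString(c));
    }
}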
(*) Future Directions -- Rinard is currently exploring ways to relate the referencing properties of objects to the changing roles that they play in the computation. Conceptually, each object's role depends, in part, on the data structures in which it participates. As the object moves between data structures, its role often changes to reflect its new relationships. The group is developing dynamic analyses to discover these roles and static analyses that verify that the program uses each object correctly in its current role. The results are expected to enhance the programmer's understanding of large programs and to optimize aspects of distributed programs such as communication and failure propagation. One early result is a new type system for ensuring that multithreaded programs are free of data races. This system allows the programmer to specify how threads obtain and relinquish control of different kinds of objects, ensuring that the program observes the correct acquisition protocol for each object; see the group's OOPSLA 2001 paper on this topic for more details.
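The acquisition-protocol idea can be sketched informally (a hypothetical illustration, not the notation of the OOPSLA 2001 type system): a field is associated with a guarding lock, and every access to the field first acquires that lock. The type system checks such a discipline statically instead of relying on programmer care.

// Hypothetical illustration of an acquisition protocol, not the OOPSLA 2001
// notation: the balance field is guarded by a designated lock, and every
// access acquires that lock first. A race-freedom type system checks this
// discipline statically.
public class GuardedAccount {
    private final Object lock = new Object();   // the lock that guards balance
    private int balance = 0;                    // accessed only while holding lock

    void deposit(int amount) {
        synchronized (lock) {                   // acquire before access
            balance += amount;
        }
    }

    int getBalance() {
        synchronized (lock) {                   // same protocol for reads
            return balance;
        }
    }

    public static void main(String[] args) {
        GuardedAccount account = new GuardedAccount();
        account.deposit(100);
        System.out.println(account.getBalance());   // 100
    }
}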
In collaboration with Daniel Jackson, Professor Rinard is investigating ways to check that a program correctly implements high-level design properties expressed using object models. Rinard's contribution focuses on new, very precise program analyses that leverage the design information to focus the analysis on properties of interest to the programmer. The National Science Foundation is currently supporting this research through an ITR grant.
Finally, Rinard has a range of Java analysis and optimization projects underway. These include the exploration of techniques for implementing Real-Time Java, space optimizations for Java programs, techniques that leverage program analysis information to provide very fast atomic transactions, and a range of memory system optimizations such as region-based allocation, elimination of write barriers, and (leveraging program analysis information about object lifetimes) explicit alloc/free optimizations.
Related Faculty
M Abraham Davis
Graduate Student
Prof. Saman P Amarasinghe
Professor of Computer Science and Engineering
Mehul Tikekar
Graduate Student