Scalable Synchronization


  • The primary goal of the Scalable Synchronization Research Group (SSRG) is to make it much easier to develop concurrent programs that are scalable, efficient, and correct.

    We attack this problem from many directions, targeting a variety of contexts and timeframes. In the short term, we can enhance the performance and scalability of existing code bases by improving system support for synchronization. For example, today's lock implementations often scale poorly on large multisocket, multicore systems, and designing new locks with these architectures in mind can alleviate the problem.
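    The scalability problem with conventional spin locks is that every waiter spins on the same shared flag, generating coherence traffic across all sockets on each release. One classic remedy (the CLH queue lock of Craig, Landin, and Hagersten; shown here as a minimal illustrative sketch, not one of the group's own lock designs) has each waiter spin on its predecessor's node instead, so spinning stays in the local cache:

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal CLH queue lock. Waiters form an implicit queue; each
// thread spins on its own predecessor's flag, so at most one cache
// line per waiter is invalidated on release, instead of one shared
// line bouncing among all waiters.
class CLHLock {
    static final class Node { volatile boolean locked; }

    private final AtomicReference<Node> tail = new AtomicReference<>(new Node());
    private final ThreadLocal<Node> myNode = ThreadLocal.withInitial(Node::new);
    private final ThreadLocal<Node> myPred = new ThreadLocal<>();

    public void lock() {
        Node node = myNode.get();
        node.locked = true;               // announce intent to hold the lock
        Node pred = tail.getAndSet(node); // join the queue atomically
        myPred.set(pred);
        while (pred.locked) { }           // spin only on the predecessor's flag
    }

    public void unlock() {
        Node node = myNode.get();
        node.locked = false;              // volatile write releases the successor
        myNode.set(myPred.get());         // recycle the predecessor's node
    }
}
```

Under contention each release wakes exactly one successor, which is what keeps coherence traffic local on multisocket machines.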

    Similarly, we can contribute improvements to implementations of existing library functionality, such as Java's concurrency libraries, so that users benefit without even being aware of the changes. However, existing interfaces and use cases often preclude some of the algorithmic techniques we would use to improve performance. In such cases, we can explore extending existing interfaces, or offering new functionality that is specified and optimized for a given class of use cases.
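    A concrete example of this "improve the library, users benefit transparently" pattern is `java.util.concurrent.atomic.LongAdder` (a standard JDK class used here purely as an illustration): its interface is just a counter, but internally the value is striped across multiple cells, so heavily contended increments avoid the single compare-and-set bottleneck an `AtomicLong` would suffer:

```java
import java.util.concurrent.atomic.LongAdder;

// Hammer a LongAdder from several threads. Callers see an ordinary
// counter; the cell-striping that makes it scale is invisible to them.
class AdderDemo {
    static long countWithThreads(int threads, int perThread) throws InterruptedException {
        LongAdder hits = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) hits.increment(); // contended updates
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return hits.sum(); // fold the per-cell counts into one total
    }
}
```

Note the interface trade-off the paragraph alludes to: `sum()` is only a snapshot under concurrent updates, a weaker guarantee than `AtomicLong.get()`, accepted in exchange for scalability.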

    For the longer term, we are also exploring new programming paradigms that make concurrency easier for programmers. This involves not only exploring good programming features and interfaces and how they can be supported effectively, but also gathering feedback from users and incorporating it into the proposed features. An example of our work in this area is our collaboration with other industry researchers to specify transactional language extensions for C++. We believe that allowing programmers to use transactions to specify what should be done atomically, while leaving the determination of how this is achieved to the system, can bring shared memory programmers benefits similar to those transactions have delivered to database programmers for decades.

    We are very interested in hardware transactional memory (HTM). We have experimented extensively with the HTM feature of Sun's prototype Rock processor, and have shown that HTM has strong potential to make it much easier to write concurrent code that is faster and simpler than existing alternatives, and better in other respects such as memory consumption. However, we have also found that Rock's HTM was subject to a number of limitations that make it significantly more difficult to exploit than we would like. We have therefore been working not only to demonstrate the significant potential of HTM, but also to identify the requirements future HTM implementations must meet in order to achieve that potential.
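    Rock hardware is not available to most readers, but the execute-then-validate pattern at the heart of HTM can be illustrated in software with Java's `StampedLock` (a standard JDK class; this is an analogy to the optimistic-execution idea, not the group's HTM work): a reader runs without acquiring any lock, then validates that no writer intervened, falling back to a pessimistic read on conflict:

```java
import java.util.concurrent.locks.StampedLock;

// Optimistic concurrency in software: read speculatively without a
// lock, validate afterwards, and retry pessimistically on conflict.
// HTM applies the same pattern in hardware, with automatic rollback.
class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = sl.writeLock();
        try { x += dx; y += dy; }
        finally { sl.unlockWrite(stamp); }
    }

    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // no lock acquired
        double cx = x, cy = y;               // speculative reads
        if (!sl.validate(stamp)) {           // did a writer intervene?
            stamp = sl.readLock();           // fall back to pessimistic read
            try { cx = x; cy = y; }
            finally { sl.unlockRead(stamp); }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```

In the common uncontended case the reader touches no shared lock state at all, which is exactly the property that makes optimistic execution attractive.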

    Our work in all of the above-described areas involves studying properties of concurrent data structure implementations under various assumptions about the environment, such as what hardware support for synchronization is available. This involves work both on implementations of specific concurrent data structures and algorithms and on more general frameworks for supporting their development. We are also interested in exploring and understanding the fundamental ramifications of various levels of hardware support on what properties can be achieved by concurrent data structure implementations.
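    As a small example of the kind of concurrent data structure at issue, here is a Treiber stack (a classic lock-free structure from the literature, shown as an illustration rather than as one of the group's own designs): every operation is a read of the head pointer, a locally prepared change, and a compare-and-set that retries if another thread won the race:

```java
import java.util.concurrent.atomic.AtomicReference;

// A Treiber stack: lock-free, built on a single CAS on the head.
// Progress depends only on the hardware's compare-and-set, which is
// why the available synchronization primitives shape what such
// structures can guarantee.
class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        do {
            node.next = head.get();            // snapshot current top
        } while (!head.compareAndSet(node.next, node)); // retry on race
    }

    public T pop() {
        Node<T> top;
        do {
            top = head.get();
            if (top == null) return null;      // empty stack
        } while (!head.compareAndSet(top, top.next));
        return top.value;
    }
}
```

Even this small structure has subtle properties (for instance, reuse of popped nodes can expose the classic ABA problem), which is one reason verification expertise matters for this line of work.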

    Finally, we also have strong expertise in the specification and verification of concurrent algorithms, which is important because the kinds of algorithms we use to improve concurrent data structures are often intricate and subtle.




The Scalable Synchronization Research Group at Oracle Labs is exploring hardware and software mechanisms that make it easier to develop correct, efficient, and scalable concurrent programs. This goal is increasingly important as multicore computing becomes ubiquitous, and increasingly difficult as systems become larger. Since being acquired by Oracle in 2010, we have continued our research in these areas, and we are also exploring ways in which techniques developed by us and others may be successfully exploited in Oracle's products, particularly databases.


  • Tim Merrifield, intern (summer '13)
  • Yujie Liu, intern (summer '12 - '13)
  • Nir Shavit, full-time member (on and off for more than a decade), now at MIT
  • Mohsen Lesani, intern (winter '12)
  • Irina Calciu, intern (summer '11)
  • Aleksandar Dragojevic, intern (summer '10)
  • Yossi Lev, intern ('04 - '10)
  • Dan Nussbaum, full-time member ('04 - '10)
  • Marek Olszewski, extern (January '09), intern (summer '09)
  • Kevin Moore, full-time member ('07 - '08)
  • Alexandra Fedorova, intern ('03 - '06)
  • Ori Shalev, intern ('04 - '06)
  • Virendra J. Marathe, intern (summer '05)
  • Simon Doherty, intern (summer '03)
  • Bill Scherer, intern (summer '02)