Towards Whatever-Scale Abstractions for Data-Driven Parallelism

Mark Moir, Maurice Herlihy, Tim Harris, Victor Luchangco, Virendra Marathe, Yossi Lev, Yujie Liu

13 April 2014

Increasing diversity in computing systems often requires problems to be solved in quite different ways depending on the workload, data size, and available resources. This diversity is becoming increasingly broad in terms of the organization, communication mechanisms, and performance and cost characteristics of individual machines and clusters. Researchers have thus been motivated to design abstractions that allow programmers to express solutions independently of target execution platforms, enabling programs to scale from small shared-memory systems to distributed systems comprising thousands of processors. We call this vision "Whatever-Scale Computing". In prior work, we have found data-driven parallelism to be a promising approach for solving many problems on shared-memory machines. In this paper, we describe ongoing work towards extending our previous abstractions to support data-driven parallelism for Whatever-Scale Computing. We plan to target rack-scale distributed systems. As an intermediate step, we have implemented a runtime system that treats a NUMA shared-memory system as if each NUMA domain were a node in a distributed system, using shared memory to implement communication between nodes.


Venue : First Workshop on Rack-Scale Computing