Daniel Goodman
Principal Member of Technical Staff
Daniel Goodman is a member of Oracle Labs Burlington, where he works on user-friendly probabilistic machine learning and probabilistic programming that can target high-performance devices such as GPUs and clusters, in the form of the Sandwood project. His wider research interests are novel computation models and user-friendly programming models.
Before moving to machine learning, he worked with Tim Harris on program performance prediction, resulting in the Pandia system for predicting program performance; technology from this work was later included in the Smart Collections project.
Prior to joining Oracle in 2014, he worked on the TeraFlux project, looking at programming models for combining dataflow and transactional memory. Within this project he produced a suite of software transactional memories for Scala, Manchester University Transactions for Scala (MUTS), and a Scala-based dataflow library, DFScala, supported by tooling for memory analysis and categorization of the resulting model.
He has also held positions as a senior researcher with Fujitsu Laboratories, and as a Research Associate and Junior Research Fellow at the Oxford e-Research Centre, Oxford University, and Pembroke College, Oxford University, respectively. During this time he investigated tooling and programming models for high-performance computing on machines ranging from small clusters, to clusters of GPGPUs, to Japan's K supercomputer. In all cases this work centered on constructing tools and programming constructs and models that made high-performance computing more accessible to application scientists in areas ranging from astronomy, to medical imaging, to simulating the visual cortex.
Daniel graduated from Oxford University in 2003, where he was awarded the Hoare Prize for the highest first in Computer Science that year. He then started a doctorate working with ClimatePrediction.net, looking at techniques for the analysis of large quantities of distributed data in a Grid environment. This resulted in the development of the Martlet workflow language. Building on ideas from dataflow and functional programming, Martlet abstracts the distribution and partitioning of large data sets away from the user, allowing them to construct functions that automatically adjust to the changing environment without needing to be aware of the underlying topology. This work was well received, earning the best student paper award at UK e-Science AHM 2006, a nomination for best student paper at WWW2007, and the awarding of his doctorate in 2007.