ContainerStress: Autonomous Cloud-Node Scoping Framework for Big-Data ML Use Cases

Guang Wang, Kenny Gross, Akshay Subramaniam

06 December 2019

Deploying big-data Machine Learning (ML) services in a cloud environment challenges the cloud vendor to correctly size the cloud container configuration for any given customer use case. OracleLabs has developed an automated framework that uses nested-loop Monte Carlo simulation to autonomously scale customer ML use cases of any size across the range of cloud CPU-GPU "Shapes" (the configurations of CPUs and/or GPUs in cloud containers available to end customers). Moreover, the OracleLabs and NVidia authors have collaborated on an ML benchmark study that analyzes the compute cost and GPU acceleration of ML prognostic algorithms and assesses the reduction in compute cost attainable in a cloud container comprising conventional CPUs and NVidia GPUs.
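As a rough illustration of the nested-loop Monte Carlo idea (not the paper's implementation: the shape catalog, workload generator, stand-in cost model, and service-level threshold below are all hypothetical), a minimal Python sketch might iterate over candidate container shapes in the outer loop and over randomly sampled use-case sizes in the inner Monte Carlo loop, then select the cheapest shape whose worst-case runtime stays within a target:

import random
import time

# Hypothetical catalog of cloud "Shapes" (CPU/GPU container configurations).
SHAPES = [
    {"name": "cpu-small", "cpus": 4, "gpus": 0, "cost_per_hr": 0.5},
    {"name": "cpu-large", "cpus": 16, "gpus": 0, "cost_per_hr": 2.0},
    {"name": "gpu-node", "cpus": 12, "gpus": 1, "cost_per_hr": 3.0},
]

def run_ml_workload(n_signals, n_observations, shape):
    """Placeholder for training/scoring an ML prognostic model on a shape.
    A real harness would launch the containerized job and time it; here a
    synthetic loop stands in for the compute load."""
    start = time.perf_counter()
    work = n_signals * n_observations / (shape["cpus"] + 50 * shape["gpus"])
    _ = sum(random.random() for _ in range(int(min(work, 1e5))))
    return time.perf_counter() - start

def scope_shapes(sla_seconds, n_trials=20):
    """Nested-loop Monte Carlo scoping: outer loop over candidate shapes,
    inner loop over randomly sampled use-case sizes; return the cheapest
    shape whose worst-case runtime meets the service-level target."""
    feasible = []
    for shape in SHAPES:                       # outer loop: candidate shapes
        runtimes = []
        for _ in range(n_trials):              # inner loop: Monte Carlo workloads
            n_signals = random.randint(10, 1000)
            n_observations = random.randint(10_000, 1_000_000)
            runtimes.append(run_ml_workload(n_signals, n_observations, shape))
        worst_case = max(runtimes)
        if worst_case <= sla_seconds:
            feasible.append((shape["cost_per_hr"], shape["name"], worst_case))
    return min(feasible) if feasible else None

if __name__ == "__main__":
    print(scope_shapes(sla_seconds=1.0))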


Venue : IEEE 2019 International Symposium on Big Data and Data Science (CSCI-ISBD)

File Name : ContainerStress_CSCI_r9.pdf