It’s time for our June meetup, a monthly series where we get together to discuss Ray and Ray’s native libraries for scaling machine learning workloads. This month, we have invited Ray community speakers from IBM and Stanford/ETH to share how they use Ray to solve challenging ML problems. Join us to learn about these Ray community use cases.
AGENDA:
Talk 1: AI/ML-infused digital IC design workflows on the hybrid cloud, IBM
Talk 2: Pushing the boundaries of material design with RLlib, Stanford/ETH
Abstract: As the complexity of modern hardware systems explodes, fast and effective design space exploration for better integrated circuit (IC) implementations is becoming more and more difficult to achieve due to higher demands for computational resources. Recent years have seen increasing use of decision intelligence in IC design flows to navigate the design solution space in a more systematic and intelligent manner. To address these problems, we have been working on AI/ML-infused IC design orchestration in order to 1) enable the IC design environment on a hybrid cloud platform so that we can easily scale the workloads up and down according to computational demands; and 2) produce higher quality of results (QoR) in shorter total turnaround time (TAT). In this work, we will illustrate how we provide scalable IC design workload execution that produces higher-performance designs by utilizing AI/ML-driven automatic parameter tuning. We first demonstrate that we can build a cloud-based IC design environment, including a containerized digital design flow on Kubernetes clusters. Then, we extend the containerized design flow with automatic parameter tuning capability using AI/ML techniques. Finally, we demonstrate that the automatic parameter tuning can be executed in a more scalable and distributable manner using the Ray platform. We will use the actual design environment setups, code snippets, and results from product IC designs as evidence that the proposed method can produce higher-quality IC designs using Ray-based automatic parameter tuning methodologies.
Speakers: Gi-Joon Nam & Jinwook Jung
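To give a flavor of what Ray-based automatic parameter tuning of a design flow can look like, here is a minimal Ray Tune sketch. This is not IBM's actual code: the run_design_flow wrapper and the flow parameter names are illustrative assumptions standing in for launching the containerized flow and parsing its QoR report.

```python
from ray import tune


def run_design_flow(params):
    # Hypothetical stand-in for launching one containerized design-flow run
    # (e.g. as a Kubernetes job) with the given tool parameters and parsing
    # the resulting QoR report. Here we just fake a score so the sketch runs.
    return -abs(params["target_clock_period_ns"] - 1.0) - abs(params["placement_density"] - 0.7)


def objective(config):
    # Ray Tune function trainable: return the final metrics as a dict.
    return {"qor": run_design_flow(config)}


tuner = tune.Tuner(
    objective,
    param_space={
        # Illustrative flow knobs; real flows expose many more parameters.
        "target_clock_period_ns": tune.uniform(0.8, 1.2),
        "placement_density": tune.uniform(0.5, 0.9),
        "synthesis_effort": tune.choice(["medium", "high"]),
    },
    tune_config=tune.TuneConfig(metric="qor", mode="max", num_samples=16),
)
results = tuner.fit()
print(results.get_best_result().config)
```

On a Ray cluster, the same script fans the trials out across all available nodes, which is what lets the tuning scale with the computational demand of the flow.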
Abstract: Improving the design and properties of biomedical devices is fundamental to both academic research and the commercialization of such devices. However, improving designs and their physical properties often relies on heuristics, ad-hoc choices, or, in the best case, iterative topology optimization methods.
We combine material simulation and reinforcement learning to create new optimized designs. The reinforcement learner’s goal is to reduce the weight of an object while the object still withstands various types of physical forces such as stretching, twisting, and compressing. It does so by iteratively pruning a full block of material to reduce the weight. Because of the considerable number of learning iterations required, it is vital that the system simulates each iteration in as little time as possible.
The use of RLlib and Ray Tune enables broad-scale parallelization of the reinforcement learning pipeline and deployment on a decentralized computing platform. This allows us to cut training time by orders of magnitude, and the resulting designs outperform the baseline case, yielding several unique designs.
Speaker: Tomasz Zaluska
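As a rough illustration of the kind of pipeline Talk 2 describes, below is a minimal RLlib sketch. It is our assumption, not the speakers' code: a toy pruning environment whose strength check stands in for the material simulation, trained with PPO across parallel rollout workers.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from ray.rllib.algorithms.ppo import PPOConfig


class PruningEnv(gym.Env):
    """Toy stand-in for the material-pruning task: the agent removes one cell
    of a 2-D material grid per step; the episode ends when a (placeholder)
    strength check fails or a step budget is reached."""

    def __init__(self, config=None):
        config = config or {}
        self.size = config.get("size", 8)
        self.max_steps = config.get("max_steps", 32)
        n = self.size * self.size
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(n,), dtype=np.float32)
        self.action_space = spaces.Discrete(n)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.mask = np.ones(self.size * self.size, dtype=np.float32)
        self.steps = 0
        return self.mask.copy(), {}

    def _passes_strength_check(self):
        # Placeholder for the material simulation described in the talk:
        # here we simply require that at least 40% of the material remains.
        return self.mask.mean() >= 0.4

    def step(self, action):
        self.steps += 1
        removed = self.mask[action] > 0
        self.mask[action] = 0.0
        ok = self._passes_strength_check()
        # Reward weight reduction; heavily penalize breaking the part.
        reward = (1.0 if removed else -0.1) if ok else -10.0
        terminated = not ok
        truncated = self.steps >= self.max_steps
        return self.mask.copy(), reward, terminated, truncated, {}


config = (
    PPOConfig()
    .environment(PruningEnv, env_config={"size": 8})
    # Scale experience collection across workers; newer Ray versions call
    # this env_runners(num_env_runners=...).
    .rollouts(num_rollout_workers=4)
)
algo = config.build()
for _ in range(3):
    result = algo.train()
    # Metric key names vary across Ray versions; .get avoids a hard failure.
    print(result.get("episode_reward_mean"))
```

On a Ray cluster, the number of rollout workers can be raised and the same config handed to Ray Tune for hyperparameter sweeps, which is the kind of broad-scale parallelization the abstract refers to.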
Gi-Joon Nam is a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, NY, USA. His research interests include high-performance system architecture, VLSI design and design methodologies, and hardware accelerator technologies, particularly for big data applications. He holds a PhD in computer science and engineering from the University of Michigan, Ann Arbor, MI, USA.
Jinwook Jung is a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, NY. At IBM, he works to advance design methodologies for AI accelerators and high-performance microprocessors, leveraging machine learning and cloud computing.
Tomasz Zaluska is a visiting graduate student at Stanford. His work focuses on applying ML to neuroscience.