Ray: The Next Generation Compute Runtime for ML Applications

Ray is an open source project that makes it simple to scale any compute-intensive Python workload. Industry leaders like Uber, Shopify, and Spotify are building their next-generation ML platforms on top of Ray. Ray is equipped with a powerful distributed scheduling mechanism that launches stateful Actors and stateless Tasks in a much more granular and lightweight fashion than existing frameworks. It also has an embedded distributed in-memory object store that drastically reduces data-exchange overhead. These architectural advantages make Ray an ideal compute substrate for cutting-edge ML use cases, including Graph Neural Networks, Online Learning, and Reinforcement Learning.

This talk will introduce the basic API and architectural concepts of Ray, as well as dive deeper into some of its innovative ML use cases.


Speaker

Zhe Zhang

Head of Open Source Engineering @anyscalecompute

Zhe is currently Head of Open Source Engineering (Ray.io project) at Anyscale. Before Anyscale, Zhe spent 4.5 years at LinkedIn where he managed the Hadoop/Spark infra team. Zhe has been working on open source for about 10 years; he's a committer and PMC member of the Apache Hadoop project, and a member of the Apache Software Foundation.


Date

Wednesday Dec 7 / 09:00AM PST ( 50 minutes )

Track

MLOps

Topics

Machine Learning, Open Source, Graph Neural Networks, Online Learning, Reinforcement Learning


From the same track

Session Machine Learning

Fabricator: End-to-End Declarative Feature Engineering Platform

Wednesday Dec 7 / 10:10AM PST

At Doordash, the last year has seen a surge in applications of machine learning to various product verticals in our growing business. However, with this growth, our data scientists have had increasing bottlenecks in their development cycle because of our existing feature engineering process.

Kunal Shah

ML Platform Engineering Manager @DoorDash

Session Machine Learning

An Open Source Infrastructure for PyTorch

Wednesday Dec 7 / 11:20AM PST

In this talk we’ll go over tools and techniques to deploy PyTorch in production. The PyTorch organization maintains and supports open source tools for efficient inference (pytorch/serve), job management (pytorch/torchx), and streaming datasets (pytorch/data).

Mark Saroufim

Applied AI Engineer @Meta

Session Machine Learning

Real-Time Machine Learning: Architecture and Challenges

Wednesday Dec 7 / 12:30PM PST

Fresh data beats stale data for machine learning applications. This talk discusses the value of fresh data, as well as architectures for, and the challenges of, online prediction.

Chip Huyen

Co-founder @Claypot AI

Session Machine Learning

Declarative Machine Learning: A Flexible, Modular and Scalable Approach for Building Production ML Models

Wednesday Dec 7 / 01:40PM PST

Building ML solutions from scratch is challenging for a variety of reasons: the long development cycles of writing low-level machine learning code and the fast pace of state-of-the-art ML methods, to name a few.

Shreya Rajpal

Founding Engineer @Predibase