MLOps

MLOps is an emerging engineering discipline that combines ML, DevOps, and Data Engineering, providing the automation and infrastructure needed to speed up the AI/ML development lifecycle and bring models to production faster. It is one of the most widely discussed topics in the ML practitioner community.

In this track, we will explore the best practices and innovations the ML community is developing. Key areas of focus include declarative ML systems, distributed model training, scalable and low-latency model inference, and ML observability to protect against downsides and safeguard ROI.


From this track

Session

Ray: The Next Generation Compute Runtime for ML Applications

Ray is an open source project that makes it simple to scale any compute-intensive Python workload. Industry leaders like Uber, Shopify, and Spotify are building their next-generation ML platforms on top of Ray.

Zhe Zhang

Head of Open Source Engineering @anyscalecompute

Session

Fabricator: End-to-End Declarative Feature Engineering Platform

At DoorDash, the last year has seen a surge in applications of machine learning across product verticals in our growing business. With this growth, however, our data scientists have faced increasing bottlenecks in their development cycle because of our existing feature engineering process.

Kunal Shah

ML Platform Engineering Manager @DoorDash

Session

An Open Source Infrastructure for PyTorch

In this talk we’ll go over tools and techniques for deploying PyTorch in production. The PyTorch organization maintains and supports open source tools such as pytorch/serve for efficient inference, pytorch/torchx for job management, and pytorch/data for streaming datasets.

Mark Saroufim

Applied AI Engineer @Meta

Session

Metrics for MLOps Platforms

Many companies are investing heavily into their ML platforms, either building something in-house or working with vendors. How do we know that an ML platform is any good? How do we compare different platforms?

Chip Huyen

Co-founder @Claypot AI

Session

Declarative Machine Learning: A Flexible, Modular and Scalable Approach for Building Production ML Models

Building ML solutions from scratch is challenging for a variety of reasons: the long development cycles of writing low-level machine learning code and the fast pace of state-of-the-art ML methods, to name a few.

Shreya Rajpal

Founding Engineer @Predibase

Date

Friday Dec 2 / 09:00AM PST


QCon Plus 2022
Nov 29 - Dec 9, 2022


Track Host

Hien Luu

Sr. Engineering Manager @DoorDash

Hien Luu is a Sr. Engineering Manager at DoorDash, leading the Machine Learning Platform team. He is particularly passionate about the intersection of big data and artificial intelligence. He is the author of the book Beginning Apache Spark 3, and has given presentations at conferences such as Data+AI Summit, XAI 21 Summit, MLOps World, YOW Data!, appy(), and QCon (SF, NY, London).
