Building ML solutions from scratch is challenging for a variety of reasons: the long development cycles of writing low-level machine learning code and the fast pace of state-of-the-art ML research, to name a few. On the other hand, solutions that automate the ML model development process are often opaque and hard to iterate on, causing users to churn. In this talk I’ll cover declarative ML systems and how they address these key issues, helping shorten the time it takes to bring ML models to production.
Declarative Machine Learning: A Flexible, Modular and Scalable Approach for Building Production ML Models
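To make the declarative idea concrete, here is a minimal, hypothetical sketch (the config schema and field names below are illustrative only, not the API of any specific system): the user states *what* to predict from *which* features, and a small "compiler" decides *how* to train, picking an encoder per input type and a decoder per output.

```python
# Hypothetical declarative model spec: the user describes the task,
# not the training loop. Schema and names are illustrative only.
config = {
    "input_features": [
        {"name": "age", "type": "number"},
        {"name": "occupation", "type": "category"},
    ],
    "output_features": [
        {"name": "income_bracket", "type": "category"},
    ],
}

def build_plan(config):
    """Toy 'compiler' that turns the declarative spec into an
    imperative training plan: one encoder per input feature,
    one decoder per output feature."""
    encoders = {"number": "normalize+dense", "category": "embed"}
    return {
        "encode": {f["name"]: encoders[f["type"]]
                   for f in config["input_features"]},
        "decode": {f["name"]: "softmax"
                   for f in config["output_features"]},
    }

plan = build_plan(config)
```

The point of the sketch is the separation of concerns: iterating on the model means editing the config, while the mapping from spec to training logic stays inside the framework.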
From the same track
Ray: The Next Generation Compute Runtime for ML Applications
Ray is an open source project that makes it simple to scale any compute-intensive Python workload. Industry leaders like Uber, Shopify, and Spotify are building their next-generation ML platforms on top of Ray.
Head of Open Source Engineering @anyscalecompute
Fabricator: End-to-End Declarative Feature Engineering Platform
At DoorDash, the last year has seen a surge in applications of machine learning across product verticals in our growing business. However, with this growth, our data scientists have faced increasing bottlenecks in their development cycle due to our existing feature engineering process.
ML Platform Engineering Manager @DoorDash
An Open Source Infrastructure for PyTorch
In this talk we’ll go over tools and techniques to deploy PyTorch in production. The PyTorch organization maintains and supports open source tools for efficient inference (pytorch/serve), job management (pytorch/torchx), and streaming datasets (pytorch/data).
Applied AI Engineer @Meta
Metrics for MLOps Platforms
Many companies are investing heavily into their ML platforms, either building something in-house or working with vendors. How do we know that an ML platform is any good? How do we compare different platforms?
Co-founder @Claypot AI