Slides from a presentation by Ted Dunning at the SF Bay ACM meetup on March 26th, 2018.
"One of the common surprises that practitioners face in machine learning is that the logistical difficulties of fielding a machine learning system - such as managing multiple models through evaluation and into production - typically outweigh the machine learning work itself. The audience will learn how streaming architecture can substantially simplify those logistics, including:
- How to design a rendezvous server to make model deployment easier and more flexible
- How to build a more reliable machine learning system that meets service-level agreements
- How to do better model-to-model comparisons
- How to integrate data engineers and ops into a machine learning team.

This presentation will be self-contained and will not require any particular knowledge of streaming architecture or machine learning; basic familiarity with software engineering will be helpful. While accessible, this talk should also be useful for more advanced practitioners of the trade."
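To make the rendezvous idea concrete, here is a minimal sketch of the pattern the talk describes: a request is sent to several models at once, and a rendezvous step returns the preferred model's answer if it arrives before the SLA deadline, falling back to the best answer received so far otherwise. The model functions, names, and timings below are hypothetical stand-ins, not code from the slides, which use streams rather than in-process threads.

```python
import queue
import threading
import time

def model_fast(x):
    # Hypothetical quick, lower-quality model.
    time.sleep(0.02)
    return ("fast", x * 2)

def model_best(x):
    # Hypothetical slower, preferred model.
    time.sleep(0.2)
    return ("best", x * 2)

def rendezvous(x, models, preferred, deadline):
    """Send one request to every model; return the preferred model's
    answer if it arrives before the deadline, else the most recent
    answer received, so a response is always produced within the SLA."""
    results = queue.Queue()
    for m in models:
        threading.Thread(target=lambda m=m: results.put(m(x)),
                         daemon=True).start()
    answer = None
    end = time.monotonic() + deadline
    while True:
        remaining = end - time.monotonic()
        if remaining <= 0:
            return answer
        try:
            name, value = results.get(timeout=remaining)
        except queue.Empty:
            return answer  # deadline hit: return best answer so far
        answer = (name, value)
        if name == preferred:
            return answer

# With a generous deadline the preferred model wins; with a tight
# deadline the rendezvous falls back to the fast model's answer.
print(rendezvous(3, [model_fast, model_best], "best", 0.5))
print(rendezvous(3, [model_fast, model_best], "best", 0.08))
```

Because every model sees every request, the same mechanism also yields apples-to-apples model comparisons: each candidate's answers and latencies are recorded for identical inputs, even when only one answer is returned to the caller.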