ML Workshop 2: Machine Learning Model Comparison & Evaluation - Nov. 28, 2017

Document created by maprcommunity on Nov 15, 2017

SUMMARY

Date: November 28, 2017, 10am PT / 1pm ET / 6pm GMT

Topic: Machine Learning Model Comparison & Evaluation

Registration: ML Workshop 2: Machine Learning Model Comparison & Evaluation

ABOUT THE EVENT

How Rendezvous Architecture Improves Evaluation in the Real World

 

Building on the key requirements for effective management of machine learning logistics presented in the Overview webinar and the Part I workshop, we will dive into what model evaluation really can and should be. We will talk about how the rendezvous architecture makes evaluation both more effective and easier. Specifically, we'll cover multi-model comparison, how rendezvous helps you handle metrics, and how it provides query-by-query comparison. A key issue for real-world success that is often overlooked by data scientists is latency and system reliability; conversely, accuracy is often difficult for SysOps team members to address. The rendezvous approach has a built-in way to include latency and accuracy as systematic parts of evaluation, thus addressing key concerns of all parts of a DataOps team. Finally, we will discuss how the containerization of models and system components in a rendezvous architecture makes security auditing easier.
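To make the pattern concrete, below is a minimal Python sketch of the rendezvous idea: one query fans out to several models, the caller answers from a designated primary model within a latency budget, and every response (including timeouts) is logged so accuracy and latency can be compared query by query. The model functions, names, and latency budget here are illustrative assumptions, not MapR's implementation; a production rendezvous system uses message streams and containerized model services rather than in-process threads.

    import concurrent.futures
    import time

    # Hypothetical stand-ins for containerized model services. In a real
    # rendezvous deployment each model would read queries from a shared
    # input stream and write results to a shared output stream.
    def champion(query):
        time.sleep(0.02)                      # simulated inference latency
        return {"model": "champion", "score": 0.91}

    def challenger(query):
        time.sleep(0.15)                      # a slower candidate model
        return {"model": "challenger", "score": 0.88}

    MODELS = {"champion": champion, "challenger": challenger}

    def rendezvous(query, primary="champion", budget=0.10):
        """Send one query to every model, answer from the primary model,
        and record every response so accuracy AND latency can be compared
        query by query across models."""
        results = {}
        start = time.monotonic()
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fn, query)
                       for name, fn in MODELS.items()}
            for name, fut in futures.items():
                remaining = budget - (time.monotonic() - start)
                try:
                    response = fut.result(timeout=max(remaining, 0))
                    results[name] = {"response": response,
                                     "latency": time.monotonic() - start}
                except concurrent.futures.TimeoutError:
                    # A model that blows the latency budget is itself a
                    # data point for evaluation, not just an error.
                    results[name] = {"response": None, "timed_out": True}
            # Note: leaving the with-block waits for stragglers to finish;
            # a streaming rendezvous process would keep serving instead.
        # Every result, including timeouts, goes into the evaluation log.
        log_for_evaluation(query, results)
        return results[primary]["response"]

    def log_for_evaluation(query, results):
        # Placeholder: a real system would append to a metrics stream.
        print(query, results)

    if __name__ == "__main__":
        print(rendezvous({"features": [1.0, 2.0]}))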

 

SPEAKER

Ted Dunning, Chief Application Architect at MapR

 

RELATED

The Exchange

[Book Discussion] - Model Management in the Real World  
