Stop Thinking, Just Do!

Sungsoo Kim's Blog

KServe (Kubeflow KFServing) Live Coding Session


22 June 2022


Article Source: KServe (Kubeflow KFServing) Live Coding Session

KFServing graduated from Kubeflow and is now KServe. At MLOps Community Meetup #83, Demetrios Brinkmann and Alexey Grigorev talked to Theofilos Papapanagiotou, Data Science Architect at Prosus.

This is the practical session following up on MLOps Meetup #40, Hands-on Serving Models Using KFServing (https://youtu.be/VtZ9LWyJPdc).

Abstract

We start with the serialization of TensorFlow/PyTorch/SKLearn models into files and the deployment of an inference service on a Kubernetes cluster. Great MLOps means great model monitoring, so we then look at inference-service metrics, model-server metrics, payload logs, and class distributions.
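Once a model is serialized to a file and uploaded to object storage, deploying it on KServe comes down to applying a single custom resource. A minimal sketch of an InferenceService manifest is shown below; the resource name and `storageUri` are illustrative (the URI points at KServe's public sklearn example bucket), and the exact schema fields depend on your KServe version.

```yaml
# Sketch of a KServe InferenceService for a serialized sklearn model.
# Apply with: kubectl apply -f inference-service.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris          # illustrative name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      # Location of the serialized model file in object storage:
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```

KServe's controller then provisions the model server, routing, and autoscaling, and exposes a prediction endpoint for the model.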

For AI ethics in production, we use the explainer pattern with several different explainers, fairness detectors, and adversarial-attack detectors. For integrations, we use the transformer pattern to pre-process the inference request and enrich it with online features from a feature store.
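The transformer pattern places a small service in front of the predictor that rewrites each request on the way in and each response on the way out. The sketch below imitates that flow in plain Python: the class names, the in-memory `FEATURE_STORE`, and the hook signatures are illustrative stand-ins, not the real kserve library API, which wires equivalent preprocess/postprocess hooks into an HTTP server.

```python
# Illustrative sketch of the transformer pattern: enrich each inference
# request with online features before it reaches the predictor.

# Stand-in for an online feature store (e.g. Feast); keys are entity IDs.
FEATURE_STORE = {
    "user-42": {"avg_session_min": 12.5, "purchases_30d": 3},
}

class EnrichingTransformer:
    """Mimics the preprocess/postprocess hooks of a KServe transformer."""

    def preprocess(self, request: dict) -> dict:
        # Look up online features for each instance and merge them in,
        # so the predictor sees the full feature vector.
        enriched = []
        for inst in request["instances"]:
            features = FEATURE_STORE.get(inst["user_id"], {})
            enriched.append({**inst, **features})
        return {"instances": enriched}

    def postprocess(self, response: dict) -> dict:
        # Pass predictions through unchanged; this is the hook where you
        # could relabel classes or strip internal fields.
        return response

transformer = EnrichingTransformer()
payload = {"instances": [{"user_id": "user-42", "amount": 9.99}]}
print(transformer.preprocess(payload))
```

The same shape also covers the explainer pattern: an explainer component receives the identical payload but returns attributions instead of predictions.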

Bio

Theo is a recovering Unix engineer with 20 years of work experience in telcos, internet services, video delivery, and cybersecurity. He is also a lifelong university student: BSc in CS (1999), MSc in Data Communications (2008), and MSc in AI (2017).

Nowadays he calls himself an ML engineer, as the role lets him combine his passions for systems engineering and machine learning. His analytical thinking is driven by curiosity and a hacker's spirit, and his skills span a variety of areas: statistics, programming, databases, distributed systems, and visualization.

MODEL SERVING IN PYTORCH

Hands-on Serving Models Using KFServing

