
OS for AI: How Serverless Computing Enables the Next Gen of ML

Jon Peck is a Full Spectrum Developer & Advocate for Algorithmia, an open marketplace for algorithms. At ODSC West 2018, he delivered a talk, "OS for AI," which discussed how serverless computing enables the next generation of machine learning. The slides for Peck's presentation can be found here.

The title of the talk is a little misleading. What Peck is talking about is not a literal operating system, i.e., not a competitor to Linux or Windows. Instead, he looks at how operating systems have evolved over time, in functionality and design, to make computing more accessible, and then extends that notion by analogy to the development of AI. In essence, he's describing what's known in the industry as DevOps, MLOps, or AIOps.

[Related Article: Trends in AI: Towards Learning Systems That Require Less Annotation]

Slide copyright Jon Peck, ODSC West 2018

Although the talk broadly covers the difficult problems Algorithmia solved in making machine learning algorithms-as-a-service production-ready, Peck does an excellent job of abstracting away the implementation details, so you come away with a strong sense of how machine learning and AI algorithms can be deployed at scale.

I enjoyed hearing the lessons learned from building large-scale AI systems, which together amount to a blueprint for what an OS for AI might look like. Specifically, this includes the right set of high-level abstractions, so that AI can be accessed in a standardized way regardless of the languages, frameworks, or hardware in play.
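
That standardized access pattern is easiest to see in code. Below is a minimal sketch using Algorithmia's published Python client; the API key and the algorithm path are hypothetical placeholders, not real published endpoints.

```python
# A minimal sketch using Algorithmia's Python client. The API key and
# the algorithm path "demo_user/FoodClassifier/1.0.0" are hypothetical
# placeholders, not real published endpoints.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")  # placeholder key

# The same call shape works regardless of the language, framework,
# or hardware the underlying algorithm was built with.
algo = client.algo("demo_user/FoodClassifier/1.0.0")
response = algo.pipe("https://example.com/images/lunch.jpg")
print(response.result)
```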

Peck then does something that delighted me: he uses the fictional "SeeFood" hot dog classifier developed by the Jian-Yang character on HBO's hilarious (and quite accurate in its depiction of the "Valley") show "Silicon Valley." This is an AI smartphone application that lets the user take a picture of food, then classifies it as "a hot dog" or "not a hot dog." Jian-Yang presumably builds the classification model with his favorite deep learning framework, either locally on his laptop or on a rented GPU instance from his favorite cloud provider.
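
The show never reveals any real code, so the following is purely a hypothetical sketch of such a binary classifier, assuming PyTorch with transfer learning from a pretrained ResNet-18 and an assumed `data/seefood/train` folder containing `hot_dog` and `not_hot_dog` subdirectories.

```python
# Purely hypothetical sketch of a "hot dog / not hot dog" classifier
# built with transfer learning. The dataset path "data/seefood/train"
# is an assumed layout with "hot_dog" and "not_hot_dog" subfolders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("data/seefood/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass is enough for a sketch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "seefood.pt")
```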

Peck then goes through the aspects of deploying the SeeFood app at scale: training the model, deploying it to a GPU server, making inference available as a service, using Kubernetes to provision, run, and elastically scale those services, and load balancing for maximum concurrency and reduced response times for a global user base.
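
As one illustration of "inference as a service," here is a minimal sketch that wraps the trained model in an HTTP endpoint. Flask, the weights file name, and the label order are assumptions for illustration; the talk does not prescribe a particular web framework.

```python
# A minimal sketch of "inference as a service": the trained model is
# wrapped in an HTTP endpoint. Flask, the weights file "seefood.pt",
# and the label order are all assumptions, not details from the talk.
import io

import torch
import torch.nn as nn
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)

# Load the model once at startup, not once per request.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("seefood.pt", map_location="cpu"))
model.eval()

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
LABELS = ["hot dog", "not a hot dog"]

@app.route("/classify", methods=["POST"])
def classify():
    # Expects the raw image bytes as the request body.
    image = Image.open(io.BytesIO(request.data)).convert("RGB")
    with torch.no_grad():
        logits = model(transform(image).unsqueeze(0))
    return jsonify({"label": LABELS[logits.argmax(1).item()]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```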

[Related Article: AI and the Visual Revolution]

The talk makes a point of highlighting two important paradigms, "microservices" and "serverless," both essential parts of an OS for AI. The serverless approach lets you design for minimal resource cost: as requests come in, you can quickly spin up models to serve them while paying only for the calls actually made. Automatically scaling up and down with demand is highly efficient, and a serverless architecture is a perfect fit for it.
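
Here is a sketch of that serverless pattern, using an AWS Lambda-style handler as one assumed target (the talk is not tied to any particular provider): the model loads lazily on a container's first invocation and is reused on warm invocations, so you pay only per call.

```python
# A sketch of the serverless pattern: the model loads lazily on the
# first invocation of a container (the "cold start") and is reused on
# warm invocations, so you pay only for the calls actually served.
# The handler signature follows AWS Lambda conventions as one assumed
# target; the talk is not tied to any particular provider.
import base64
import io

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

_model = None  # cached across warm invocations of the same container

def _load_model():
    model = models.resnet18()
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load("seefood.pt", map_location="cpu"))
    model.eval()
    return model

def handler(event, context):
    global _model
    if _model is None:
        _model = _load_model()  # paid once per cold start

    image = Image.open(
        io.BytesIO(base64.b64decode(event["body"]))  # base64 image payload
    ).convert("RGB")
    tensor = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])(image).unsqueeze(0)

    with torch.no_grad():
        label = int(_model(tensor).argmax(1).item())
    return {"statusCode": 200, "body": ["hot dog", "not a hot dog"][label]}
```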

Slide copyright Jon Peck, ODSC West 2018

This talk is very clear in its presentation and makes it seem almost simple to deploy machine learning algorithms in a production environment. If you’re a data scientist who is faced with the task of productionizing your models, this presentation is a great starting point in understanding what’s involved.

To take a deeper dive into devising an OS for AI, check out Peck's compelling talk from ODSC West 2018.

Key Takeaways:

What makes an OS for AI?

  • Stack-agnostic
  • Composable
  • Self-optimizing
  • Auto-scaling
  • Monitorable
  • Discoverable
