Simplifying MLOps with Model Registry

Your iPhone tells you exactly what app you’d like to access each morning, Netflix shows you movie previews that are tailored precisely for you, and Grammarly fixes your writing so you sound like your best self! Each of these applications is the product of a team of data scientists and engineers collaborating closely to bring the latest and greatest AI goodness to you.

What you don’t see, however, is the mess of dozens of hacky scripts, Jupyter notebooks, cleaned (and dirty) data, Slack messages, and panicked emails that actually led to the application utilizing this AI goodness. All innovative AI companies today rely on a set of tools and processes to tame this chaos and ship intelligent products faster. Referred to as MLOps (Machine Learning Operations), these tools and processes support key needs of the ML operations lifecycle, including versioning (or experiment management), testing, deployment, and monitoring. A key part of the MLOps toolchain, and the topic of this post, is a system to manage the artifacts or models that make up these intelligent applications. Enter the Model Registry.

What is a Model Registry?

You’re probably familiar with container registries (like DockerHub and ECR) or Python package registries (e.g., PyPI and conda-forge). These are places where you can find, install, and publish software artifacts like containers and Python libraries. A model registry is no different; it is a place to find, publish, and use ML models (or model pipeline components).
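To make this concrete, here is a minimal sketch of publishing and then loading a model through a registry, using MLflow's open-source Model Registry as one example API; the toy model and the name "recommender" are illustrative, and any registry with publish/fetch semantics would work the same way.

```python
# A minimal sketch: publish a trained model under a human-readable name and
# version, then load it back by that name. MLflow's Model Registry is one
# example; "recommender" and the toy model are illustrative. Assumes an MLflow
# tracking/registry backend is configured (e.g., a server with a database store).
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a toy model and log it as part of an MLflow run.
with mlflow.start_run() as run:
    model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])
    mlflow.sklearn.log_model(model, artifact_path="model")

# Publish the logged model to the registry under a name consumers can find.
registered = mlflow.register_model(
    model_uri=f"runs:/{run.info.run_id}/model", name="recommender"
)
print(registered.name, registered.version)  # e.g., "recommender", version 1

# Consumers load by name and version instead of passing .pkl files around.
loaded = mlflow.pyfunc.load_model(f"models:/recommender/{registered.version}")
print(loaded.predict(np.array([[0.5]])))
```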

Why should you care about a model registry for MLOps?

I’ve been a hands-on data scientist (optimizing your Twitter news feed, actually!) and I can tell you that my data science process was, ahem, “ad-hoc” to say the least: (creatively named) scripts to train my models, a spreadsheet to track hyperparams and results, folders and folders of data, and very long chat histories with my colleagues about the different models.

So essentially this:


[Image: MLOps registry]

(and I built an open-source project at MIT to fix this: https://github.com/VertaAI/modeldb)

But this was only during the model development phase; to actually be useful beyond a research paper, the model has to make its way to production. As the (un)popular statistic goes, 87% of models never make their way to production.

When a model makes its way to production, having tracked thousands of experiments is not what helps; what helps is having one or more “final” models identified, polished, and published for software engineering to consume. You need documentation for how to use the model, a central place where software engineering can get the latest and greatest, and a record of the responsible AI practices used to build it, a.k.a., a model registry.

So what does a model registry get you?

  1. Easier handoff to consumers of your models. As we saw above, data science creates a large exhaust. You want the product team (or your research colleagues) to use only your best model, e.g., recommender:version2 (vs. euclidean_distance_100_20000_50_50.pkl). A central model repository also means you can stop sending these types of messages on Slack… [Image: MLOps example]
  2. Governance and controls in one place. As an organization, you need to determine what models are ok to put into production. Hopefully, you only ship those that pass some offline performance tests, block ones that fail bias and fairness checks, and promote only those that have documentation and explainability hooks!
  3. Automate your model release processes. All software products typically go through a sophisticated release process (e.g., release to DEV, then UAT, and then PROD.) Traditional software uses a rich set of tools (e.g., Jenkins) and practices (e.g., GitOps) that make product releases safe and reliable. With a model registry, you can reuse existing software development processes to deploy your models as well.
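Building on point 3, a gated promotion step might look roughly like the sketch below. MLflow's registry client is used as a stand-in for "the registry"; the check functions, the AUC threshold, and the stage name are illustrative placeholders, not a prescribed workflow.

```python
# A sketch of a gated release step: promote a registered model version only if
# it clears offline checks. The MLflow client stands in for "the registry";
# evaluate_offline, passes_fairness_checks, the threshold, and the stage name
# are illustrative placeholders.
from mlflow.tracking import MlflowClient

def evaluate_offline(name: str, version: str) -> dict:
    # Placeholder: run your offline performance test suite and return metrics.
    return {"auc": 0.87}

def passes_fairness_checks(name: str, version: str) -> bool:
    # Placeholder: plug in your bias/fairness checks here.
    return True

def promote_if_ready(client: MlflowClient, name: str, version: str) -> bool:
    metrics = evaluate_offline(name, version)
    if metrics["auc"] < 0.80 or not passes_fairness_checks(name, version):
        return False  # block models that fail the gates
    # Record why the model was promoted, then move it to the next stage.
    client.update_model_version(
        name=name,
        version=version,
        description=f"Promoted with offline AUC={metrics['auc']:.3f}",
    )
    client.transition_model_version_stage(name=name, version=version, stage="Staging")
    return True

# promote_if_ready(MlflowClient(), "recommender", "2")  # names are illustrative
```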

Ok, so what do I do now?

Hopefully, you’re now seeing how a model registry would make your life better! What can you do about it?

  • Learn more: This blog just scratched the surface of what model registries have to offer. Check out my talk at ODSC East 2021 (April 1 at 3:10-3:55 PM Eastern Time), this webinar, or download this awesome infographic. JFrog has a great set of resources on binary registries in general.
  • Build a simple model registry: If your application has basic requirements, you can likely get started with an S3-based model registry where you use key:value tags to manage your models (see the sketch after this list). This won’t give you governance or controls (and you’ll have to write some code!), but it will get you started.
  • Evaluate a hosted model registry! A few hosted options exist, including Verta (disclaimer: my team built this, so I’m biased!), AWS SageMaker Model Registry, and Google AI Platform.
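For the S3-based option, a minimal sketch with boto3 could look like the following; the bucket name, key layout, and tag keys are assumptions you would adapt to your own setup.

```python
# A minimal sketch of "S3 as a model registry": store each artifact under a
# name/version key and use object tags as metadata. The bucket name, key
# layout, and tag keys are illustrative; governance and access controls are
# left to you.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-model-registry"  # illustrative bucket name

def publish_model(local_path: str, name: str, version: str, stage: str = "dev") -> str:
    key = f"{name}/{version}/model.pkl"
    s3.upload_file(local_path, BUCKET, key)
    # Object tags act as lightweight registry metadata (stage, owner, etc.).
    s3.put_object_tagging(
        Bucket=BUCKET,
        Key=key,
        Tagging={"TagSet": [
            {"Key": "stage", "Value": stage},
            {"Key": "model_name", "Value": name},
            {"Key": "version", "Value": version},
        ]},
    )
    return key

def fetch_model(name: str, version: str, local_path: str) -> None:
    s3.download_file(BUCKET, f"{name}/{version}/model.pkl", local_path)

# publish_model("euclidean_distance_100_20000_50_50.pkl", "recommender", "2")
# fetch_model("recommender", "2", "recommender_v2.pkl")
```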

Conclusion

Building software that utilizes the latest and greatest models is challenging. It requires close collaboration between data science and software engineering; it requires MLOps machinery that enables rapid iteration on the product; and it requires the right controls to ensure the release of high-quality models. A model registry is the key piece of the MLOps toolchain that enables collaboration, automation, and control. Without a model registry, your AI application may just get lost in Slack messages and messy Jupyter notebooks. On the other hand, with a bit of MLOps magic, you could have the next Netflix or Alexa in your hands! ✨

About the author/ODSC East 2021 Speaker on Simplifying MLOps:

Manasi Vartak (@DataCereal) is the founder and CEO of Verta (www.verta.ai), a platform that allows ML practitioners to rapidly version, deploy, and monitor ML models at scale. Manasi holds a Ph.D. in computer science from MIT, where she created ModelDB, the first modern experiment management system, now deployed at startups, research labs, and Fortune 500 companies. She previously worked on deep learning for content recommendation as part of the feed-ranking team at Twitter and on dynamic ad-targeting at Google. Manasi is passionate about building intuitive data tools, helping companies become AI-first, and figuring out how data scientists and the organizations they support can be more effective. Manasi has spoken at several top research and industry conferences, such as Strata, SIGMOD, VLDB, SparkSummit, TWIMLCon, Data Science Salon, and AnacondaCon, and has authored a course on model management.
