
ML Operationalization: From What and Why? to How and Who?


Operationalization may be the newest 18-letter word in AI, but there are concrete steps for moving your AI initiative out of its silos and into production at scale.
Sivan Metzger of ParallelM is here to share his experiences, mistakes and all, deploying machine learning and building a scalable AI initiative.

[Related Article: The Best Machine Learning Research of 2019 So Far]

The AI Impediment

Enterprises are scrambling to launch AI initiatives, yet few projects reach full deployment. This could be due to misunderstandings about what AI actually is, or to a lack of funding, but above all the industry lacks scalable, functional practices for machine learning. Like any industry, it's going through a period of necessary growth.

Moving from the academics of ML to deployment is difficult. There's a wall between the data science experts with the technology to produce ML initiatives and the implementation and economics of the projects. Months go by with tweaking, building, and experimenting, and soon your stakeholders grow impatient, thinking the final product will never show up.

Unfortunately, your data science team may not be familiar with the real-world IT operations of a business setting. Legal, compliance, data architecture, stakeholders, and other elements aren't in the loop with the data science team, and that's gumming up the works.

Challenges of ML Development and Deployment:

  • The dissonance between what researchers think a business ML model is for and what it is actually for is a clear difficulty.
  • The silos between the departments involved in your ML initiatives make deployment impossible.
  • Fear surrounding the ML product keeps your production team from using what your data science team creates.
  • No one knows what the others are doing.

MLOps: Automating the Product Lifecycle

There are five core elements to the operationalization of your AI production models. These operate in a continuous loop, each one informing the next for continuous deployment:

  1. Continuous integration (marginal and incremental models)
  2. ML orchestration (every model has at least two pipelines, and most have many more)
  3. ML health (the qualitative aspects of those models)
  4. Business impact (compared to your KPIs)
  5. Model governance (make it front and center)
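As a rough sketch, the five elements above can be wired into a single continuous loop. All the function names and the governance threshold below are hypothetical illustrations, not part of any specific platform:

```python
# Hypothetical sketch of the five-element MLOps loop described above.
# Each step is a stub standing in for real platform functionality.

def continuous_integration(model):
    """Integrate a new (marginal or incremental) model version."""
    return {"model": model, "version": 1}

def orchestrate_pipelines(deployment):
    """Run the model's pipelines (every model has at least two)."""
    return {"training": "ok", "inference": "ok"}

def check_ml_health(pipelines):
    """Assess the qualitative health of the running model."""
    return all(status == "ok" for status in pipelines.values())

def measure_business_impact(healthy):
    """Compare outcomes against business KPIs (toy score here)."""
    return 1.0 if healthy else 0.0

def needs_another_iteration(impact):
    """Governance step: record the cycle and decide whether to iterate."""
    return impact < 0.9  # hypothetical KPI threshold

def mlops_cycle(model, max_iterations=3):
    impact = 0.0
    for _ in range(max_iterations):
        deployment = continuous_integration(model)
        pipelines = orchestrate_pipelines(deployment)
        healthy = check_ml_health(pipelines)
        impact = measure_business_impact(healthy)
        if not needs_another_iteration(impact):
            break
    return impact

print(mlops_cycle("churn_model"))  # 1.0 in this toy sketch
```

The point of the loop shape is that governance feeds back into the next round of integration, so deployment is continuous rather than a one-off handover.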

MLOps Platform

It can be challenging to decide when to launch a dedicated platform for your ML initiatives. The implementation of models could be a lot easier with the right platform, but deciding when and how to choose one can be daunting.

Do I need one?

Deciding to get a platform will depend on a few logistics. Ask yourself these questions:

  • Have you created tons of models but none are in production?
  • Is there a breakdown between data scientists and Ops when a model “doesn’t work”?
  • Are you asking your data science team to drive models into production when they don't know how?
  • Are you loading down your best talent with managing models instead of creating new ones?

If you answered yes to any of these, a platform is in your future.
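The four diagnostic questions reduce to a trivial any-of check, sketched here as a hypothetical helper (the key names are made-up labels for the questions above):

```python
# Hypothetical checklist: answer the four diagnostic questions as booleans.
def needs_mlops_platform(answers):
    """Return True if any diagnostic question is answered 'yes'."""
    return any(answers.values())

answers = {
    "many_models_none_in_production": True,
    "breakdown_between_ds_and_ops": False,
    "ds_asked_to_deploy_without_knowhow": False,
    "top_talent_stuck_managing_models": False,
}
print(needs_mlops_platform(answers))  # True
```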

What To Look For

You'll need deep customization in your platform so your best data science talent can operate. On the flip side, it shouldn't make life more difficult: simple package management with reusable components prevents code problems from piling up.

You should be able to customize the health triggers for your models, allowing your production team to see what's working and what isn't. In line with that, visualization of those metrics democratizes your AI initiative and helps break down silos.
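A customizable health trigger could look something like this minimal sketch; the metric names, thresholds, and rule format are invented for illustration, not taken from any product:

```python
# Hypothetical health triggers: each pairs a metric with a threshold and a
# direction, so the production team can see at a glance what is failing.

HEALTH_TRIGGERS = {
    "accuracy":   {"threshold": 0.90, "direction": "min"},  # alert if below
    "latency_ms": {"threshold": 250,  "direction": "max"},  # alert if above
}

def evaluate_health(metrics, triggers=HEALTH_TRIGGERS):
    """Return the names of metrics that violated their trigger."""
    alerts = []
    for name, rule in triggers.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this cycle
        if rule["direction"] == "min" and value < rule["threshold"]:
            alerts.append(name)
        elif rule["direction"] == "max" and value > rule["threshold"]:
            alerts.append(name)
    return alerts

print(evaluate_health({"accuracy": 0.87, "latency_ms": 120}))  # ['accuracy']
```

Because the trigger table is plain data, it is also straightforward to render as a dashboard, which is where the visualization and silo-breaking comes in.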

Finally, automated timeline captures ensure your documentation is in place and ready for evaluation. Downloading logs should be simple, and anyone should be able to get the details without going through red tape in Ops.
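At its simplest, a timeline capture is an append-only event log that anyone can export without Ops involvement. The class below is a hypothetical sketch; real platforms provide much richer audit trails:

```python
import json
import time

class ModelTimeline:
    """Minimal append-only event log for a model's lifecycle (hypothetical)."""

    def __init__(self, model_name):
        self.model_name = model_name
        self.events = []

    def record(self, event, **details):
        """Append a structured event with a timestamp."""
        self.events.append({
            "model": self.model_name,
            "event": event,
            "timestamp": time.time(),
            **details,
        })

    def export(self):
        """Serialize the full timeline so anyone can download it as JSON."""
        return json.dumps(self.events, indent=2)

timeline = ModelTimeline("churn_model")
timeline.record("trained", accuracy=0.93)
timeline.record("deployed", environment="staging")
print(timeline.export())
```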

What About Architecture?

Think of architecture as an abstraction layer between your initiative and production: in other words, it's how you send models into production. The architecture feeds your models into the upline programs designed to house and deploy them, i.e., where you're getting your data and how you're storing it. This upline should be clear and accessible, because there's no chance a single product will provide all the capabilities you need for your models.

Make sure you’ve streamlined the system by understanding what products are making your modeling and deployment possible and accessible to your entire team. This is a personalized solution dependent on what products you already have, what your data scientists are already familiar with, and what features you need for your pipeline. Your architecture may not look like your competitor’s, but if it’s fulfilling those model health initiatives, it’s the right one for you.
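One way to picture that abstraction layer is an interface your models code against, with interchangeable backends for the products that actually house and deploy them. The class names below are hypothetical, sketched under the assumption that no single product covers everything:

```python
from abc import ABC, abstractmethod

class DeploymentTarget(ABC):
    """Abstraction layer between your models and production infrastructure."""

    @abstractmethod
    def deploy(self, model_name: str) -> str:
        ...

class KubernetesTarget(DeploymentTarget):
    # Stub: a real implementation would call your cluster's API.
    def deploy(self, model_name):
        return f"{model_name} deployed to kubernetes"

class BatchScoringTarget(DeploymentTarget):
    # Stub: a real implementation would schedule a batch job.
    def deploy(self, model_name):
        return f"{model_name} scheduled for batch scoring"

def ship(model_name, target: DeploymentTarget):
    """Models never touch infrastructure directly, only the abstraction."""
    return target.deploy(model_name)

print(ship("churn_model", KubernetesTarget()))
```

Swapping products then means writing a new `DeploymentTarget` subclass, not rewriting the pipeline, which is what keeps the architecture personal to your stack without locking you in.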

[Related Article: Watch: Challenges and Opportunities in Applying Machine Learning]

MLOps At Scale

Deploying at scale in production is fundamental to progress. Business sponsorship needs to pull people through the process and bring in all relevant stakeholders. Often it's the first time everyone sits in the same room, but it's necessary to get both your data science team and the business side on board. Make the time to integrate management across departments so that everyone has access to the model and to deployment.
