
Stitching Open Source Components Together to Build an End to End Computer Vision Platform for Your Enterprise

The three authors are speaking at ODSC APAC 2021. Be sure to check out their talk, “Stitching Open Source Components Together to Build an End to End Computer Vision Platform for Your Enterprise,” there!

Imagine a scenario where you just got into a meeting with your senior leadership team.

They are excited about AI and the kind of opportunities it brings to the table.

They have asked for your help in setting up a computer vision platform. 

What are your initial thoughts? 

Here is a sample of the questions we received when we posed the preceding scenario to multiple people:

  • What is a platform?
  • Who are my end-users?
  • What are the problems/use cases that a platform solves?
  • How do I identify the problems to solve through the platform?
  • How would I go about building the platform?
  • What are the key KPIs that I should measure?
  • Who are the people that should support me?
  • How do I know that the platform is solving the problem of end-users?

We have been through such a scenario ourselves and want to make sure that you do not have to jump through the same hoops that we did.

https://odsc.com/apac/#register

Let’s walk through the components of a generic computer vision platform, the problems encountered at each stage, and the users each component serves:

  • Labeling ground truth for a certain set of images (Manual annotators)
    • Typically, the data scientist does not know how many images to annotate
    • The annotation workflow is not optimized to speed up model training
  • Building different types of models (Data Scientists)
    • Classification
    • Detection
    • Segmentation
    • OCR
    • An array of pre-trained models is available to solve a given problem
    • No standardized way of selecting and adapting them across data scientists (see the model-building sketch after this list)
  • Testing a model (Data Scientists & end-users) – see the testing sketch after this list
    • Integration test
    • Input data test
    • Output data test
    • Smoke test
    • Regression test
  • Packaging the model (Data Scientists & DevOps) – see the serving sketch after this list
    • Multiple choices for building an application – FastAPI, Flask
    • Standard ways of dockerizing an application
  • Pushing the model to production (DevOps)
    • On-premises
      • Leveraging Kubernetes
      • Leveraging serverless functions
    • Cloud
      • Multiple cloud vendors & little standardization across deployments
    • On-device
      • No standardization across projects
  • Protecting the model (DevOps)
    • Ensure the right people have access (a simple API-key check appears in the serving sketch after this list)
  • Monitoring the model (Data Scientists & DevOps)
    • Detect changes in input images sooner
    • Detect changes in class distribution sooner – see the drift-monitoring sketch after this list
      • Maintain a dashboard of how the input changes
  • Auto re-training
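
To make the model-building step concrete, here is a minimal sketch of the kind of standardized model factory we have in mind, assuming torchvision as the source of pre-trained backbones. The `build_model` name, the choice of backbones, and the omission of OCR are our own illustrative assumptions, not part of any particular library or of the platform itself.

```python
# Minimal sketch of a standardized model factory built on torchvision
# pre-trained backbones. build_model and the backbone choices are
# illustrative assumptions, not an existing library API.
import torch
import torchvision.models as models
from torchvision.models import detection, segmentation
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.segmentation.deeplabv3 import DeepLabHead


def build_model(task: str, num_classes: int):
    """Return a pre-trained backbone adapted to the given task.

    OCR is omitted here because it typically relies on separate libraries.
    """
    if task == "classification":
        model = models.resnet50(pretrained=True)
        model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    elif task == "detection":
        model = detection.fasterrcnn_resnet50_fpn(pretrained=True)
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    elif task == "segmentation":
        model = segmentation.deeplabv3_resnet50(pretrained=True)
        model.classifier = DeepLabHead(2048, num_classes)  # 2048 = ResNet-50 output channels
    else:
        raise ValueError(f"Unsupported task: {task}")
    return model


# Example usage: every data scientist gets the same entry point.
classifier = build_model("classification", num_classes=10)
```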
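
The testing step can be standardized in the same spirit. Below is a minimal pytest-style sketch of a smoke test and an output data test for a classification model; the hypothetical `model_factory.build_model` import, the input shapes, and the class count are assumptions made purely for illustration.

```python
# Minimal pytest-style sketch of a smoke test and an output data test.
# model_factory.build_model is the hypothetical factory from the
# model-building sketch; shapes and class counts are illustrative.
import pytest
import torch

from model_factory import build_model  # hypothetical module


@pytest.fixture(scope="module")
def model():
    return build_model("classification", num_classes=10).eval()


def test_smoke_forward_pass(model):
    """Smoke test: a forward pass on a dummy batch should not raise."""
    dummy = torch.randn(2, 3, 224, 224)
    with torch.no_grad():
        assert model(dummy) is not None


def test_output_shape_and_values(model):
    """Output data test: logits have the expected shape and are finite."""
    dummy = torch.randn(2, 3, 224, 224)
    with torch.no_grad():
        logits = model(dummy)
    assert logits.shape == (2, 10)
    assert torch.isfinite(logits).all()
```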
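
For packaging, a thin FastAPI wrapper is one common way to standardize serving, and it is also a natural place for the access-control check. In the sketch below, the endpoint path, the header name, the hard-coded key, and the hypothetical `model_factory` import are all illustrative assumptions; the resulting app can then be run with uvicorn and dockerized in the usual way.

```python
# Minimal FastAPI sketch for serving a classification model, including a
# simple API-key check. Paths, header names, the hard-coded key, and the
# model_factory import are illustrative assumptions.
import io

import torch
from fastapi import Depends, FastAPI, File, Header, HTTPException, UploadFile
from PIL import Image
from torchvision import transforms

from model_factory import build_model  # hypothetical module

app = FastAPI()
model = build_model("classification", num_classes=10).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])


def verify_api_key(x_api_key: str = Header(...)):
    # In practice the expected key would come from a secret store.
    if x_api_key != "change-me":
        raise HTTPException(status_code=401, detail="Unauthorized")


@app.post("/predict", dependencies=[Depends(verify_api_key)])
async def predict(file: UploadFile = File(...)):
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return {"predicted_class": int(logits.argmax(dim=1).item())}
```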
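
Finally, for the monitoring step, one simple way to catch class distribution drift is to compare the class mix of recent production predictions against the training-time mix. The sketch below uses the Jensen-Shannon distance from SciPy; the sample label lists and the alert threshold are illustrative only.

```python
# Minimal sketch of class-distribution drift monitoring: compare the mix
# of predicted classes in production against the training-time mix.
# The sample labels and the alert threshold are illustrative.
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon


def class_distribution(labels, num_classes):
    """Normalized histogram of class labels."""
    counts = Counter(labels)
    total = max(len(labels), 1)
    return np.array([counts.get(c, 0) / total for c in range(num_classes)])


def distribution_drift(reference_labels, production_labels, num_classes):
    """Jensen-Shannon distance between reference and production class mixes."""
    ref = class_distribution(reference_labels, num_classes)
    prod = class_distribution(production_labels, num_classes)
    return float(jensenshannon(ref, prod))


if __name__ == "__main__":
    reference = [0] * 700 + [1] * 300      # training-time class mix
    production = [0] * 400 + [1] * 600     # recent production predictions
    drift = distribution_drift(reference, production, num_classes=2)
    if drift > 0.1:                        # illustrative alert threshold
        print(f"Class distribution drift detected: {drift:.3f}")
```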

We have packaged all the components in a standardized way so that the end-user only has to interact with configuration files to train and deploy a model.
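
As an illustration of that configuration-only interface, here is a hypothetical sketch: a small YAML config describing the task, data, training, and deployment target, and a tiny Python entry point that parses it. Every key, path, and value shown is an assumption about how such a config might look, not the actual schema of our platform.

```python
# Hypothetical sketch of a config-driven entry point: the end-user edits
# only the YAML below. Every key, path, and value is illustrative, not
# the actual schema of the platform.
import yaml

CONFIG = """
task: classification          # classification | detection | segmentation | ocr
num_classes: 10
data:
  train_dir: /data/train      # illustrative paths
  val_dir: /data/val
training:
  epochs: 20
  batch_size: 32
deployment:
  target: kubernetes          # kubernetes | serverless | on-device
"""

if __name__ == "__main__":
    cfg = yaml.safe_load(CONFIG)
    print(
        f"Would train a {cfg['task']} model with {cfg['num_classes']} classes "
        f"for {cfg['training']['epochs']} epochs and deploy to "
        f"{cfg['deployment']['target']}."
    )
```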

Below is the architecture diagram of what we are going to cover in the workshop:

Open Source Components

Join us for this talk, as we show you our version of a computer vision platform, which we have open-sourced, to help get more models into production.

Article by Kishore Ayyadevara and Yeshwanth Reddy, authors of Modern Computer Vision with PyTorch, and Nilav Ghosh, Computer Vision Expert.
