Why is Adversarial Robustness Critical for Machine Learning? 

As machine learning (ML) gets adopted in every field and every possible use case, a threat lurks underneath. In technology, security doesn’t become a primary concern until adoption reaches a tipping point, as it did with consumer software three decades ago and with the internet over two. It’s no different for ML models.

A scammer, hacker, or spy can trick machine learning models into giving wrong predictions or revealing private information in order to commit fraud, sabotage, or espionage. For instance:

  • a crime syndicate could find a way to fool a biometric authentication system and thereby steal money from the banks that use it
  • a competitor of Tesla could trick its self-driving cars into veering into the wrong lane to sully the company’s reputation
  • a hacker could extract protected details from healthcare models
  • a social media user could bypass content moderation systems to promote dangerous misinformation

Researchers have known for over 15 years that models are vulnerable to many kinds of security risks, yet ML adoption is still so relatively new that this tipping point hasn’t been reached. Proof of this is that there are no entries for adversarial attacks on machine learning among the over 150,000 records in the NIST National Vulnerability Database!

However, we shouldn’t wait, as we did with prior technological revolutions, until security threats start eroding user trust. Instead, ML practitioners ought to embed trustworthiness in the models they build, regardless of risk perception.

It’s not just a data science problem. It’s a security and engineering problem. And as in those fields, one way to solve it is to stress-test models against any credible threat and then address vulnerabilities accordingly. Would structural engineers for a skyscraper in Tokyo skip the seismic performance testing? Would plane manufacturers avoid fatigue-testing the wings? Would a financial transfer system forgo penetration testing? Likewise, it’s irresponsible to release AI systems into production without adversarial testing.

By doing this, we can ensure robustness not only against adversaries but also against changes in the environment. After all, in production, ML models operate outside of their training environment, so they will encounter data unlike any they have seen before. Therefore, if we stress-test them against perturbations, they will be robust to those perturbations regardless of whether they come from the environment or an adversary! A minimal sketch of such a stress test follows.
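To make that concrete, here is a minimal stress-test sketch in PyTorch. It compares a classifier’s accuracy on clean inputs, randomly perturbed inputs, and worst-case adversarially perturbed inputs (a one-step fast gradient sign attack) at the same perturbation budget. The `model`, data `loader`, and `eps` budget are placeholder assumptions for illustration, not code from the workshop.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step evasion attack: nudge each input feature in the
    direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def stress_test(model, loader, eps=0.03):
    """Compare accuracy under no, random, and adversarial perturbation,
    all bounded by the same L-infinity budget `eps`."""
    model.eval()
    clean, noisy, adv = [], [], []
    for x, y in loader:
        clean.append(accuracy(model, x, y))
        # Environmental shift: random sign noise at the same budget
        x_rand = (x + eps * torch.randn_like(x).sign()).clamp(0.0, 1.0)
        noisy.append(accuracy(model, x_rand, y))
        # Adversarial shift: worst-case one-step perturbation (FGSM)
        adv.append(accuracy(model, fgsm(model, x, y, eps), y))
    print(f"clean: {sum(clean)/len(clean):.3f} | "
          f"random noise: {sum(noisy)/len(noisy):.3f} | "
          f"adversarial: {sum(adv)/len(adv):.3f}")
```

A large gap between the random-noise and adversarial columns is the telltale sign that a model is brittle against attackers even when it handles ordinary environmental noise well.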

Adversarial robustness seems technically challenging, and it can be. Thankfully, AI researchers have been building tools to protect machine learning models from attacks. In my Open Data Science Conference East 2022 workshop, Adversarial Robustness: How to Make Artificial Intelligence Models Attack-proof!, I will demonstrate how to leverage these tools to protect machine learning models against the most common type of adversarial attack: evasion attacks, with two defense methods and one certification method. Join the fun!
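For a taste of what such tooling looks like, below is a hedged sketch using IBM’s open-source Adversarial Robustness Toolbox (ART), one widely used library for this purpose. It mounts a fast gradient evasion attack and then applies one common defense, adversarial training. The `model`, `loss_fn`, `optimizer`, data arrays, and input shape are assumptions for illustration; the workshop’s specific defense and certification methods are not detailed here.

```python
import numpy as np
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer

# Wrap an (assumed) PyTorch model so ART can attack and defend it;
# input_shape and nb_classes here assume an MNIST-like classifier
classifier = PyTorchClassifier(
    model=model, loss=loss_fn, optimizer=optimizer,
    input_shape=(1, 28, 28), nb_classes=10, clip_values=(0.0, 1.0),
)

# Evasion attack: craft inputs the model misclassifies at test time
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
print("accuracy on adversarial examples:",
      np.mean(classifier.predict(x_adv).argmax(axis=1) == y_test))

# One common defense, adversarial training: retrain on a mix of
# clean and attacked examples (`ratio` controls the mix)
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=5)
```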

About the author/ODSC East 2022 speaker on Adversarial Robustness:

Serg Masís is a Data Scientist in agriculture with a lengthy background in entrepreneurship and web/app development, and the author of the bestselling book “Interpretable Machine Learning with Python.” He is passionate about machine learning interpretability, responsible AI, behavioral economics, and causal inference.
