Looking Back on the O’Reilly Artificial Intelligence Conference

At the start of O’Reilly’s Artificial Intelligence Conference in New York this year, Intel’s Gadi Singer made a point that resonated through the conference: “Machine learning and deep learning are being put to work now.” They’re no longer experimental; they’re running in key business applications. The real questions now are what we’re going to get out of them: Will we be able to use them effectively? Will we find appropriate uses for them?

Now that AI is moving out of the laboratory and into offices and homes, a number of questions are more important than ever. What kinds of tools will make it easier to build AI and ML systems? How will we make AI safe for humans? And what kinds of systems will augment human capabilities, rather than replace them? In short, as Aleksander Madry said in his talk, we are now at AI 1.0. How do we get to AI 2.0?

Madry emphasized the importance of making AI ready for us: secure, reliable, ethical, and understandable. It’s easy to see the shortcomings of AI now. Madry showed how easy it was to turn a pig into a jet by adding noise, or to become a movie star by changing your glasses. Getting to the next step won’t be easy: training models will probably become more difficult, and those models may be more complex. We might need even more training data than we do now, and a lack of training data is already one of the biggest barriers to widespread use of AI. But the work it takes to get to AI 2.0 will benefit us. We’ll never have AI systems that don’t make mistakes, but the mistakes will be fewer, and they’ll be more like the mistakes humans make rather than mistakes that are nonsensical. No more flying pigs. And that commonality might make it easier for those systems to work alongside us.
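
Madry’s pig-to-jet demonstration is an adversarial example: a small, carefully chosen perturbation that flips a classifier’s prediction. Here is a minimal sketch of the fast gradient sign method in PyTorch, assuming a pretrained, differentiable classifier and a correctly labeled input batch; the names and the epsilon value are illustrative, not a reconstruction of Madry’s demo.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus a small adversarial perturbation (FGSM sketch).

    Assumes `model` is a differentiable classifier and `x` is a batch of
    normalized inputs with true labels `y`; epsilon controls how visible
    the added noise is.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss the most.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical usage: a tiny perturbation is often enough to flip the label.
# adv_preds = model(fgsm_perturb(model, images, labels)).argmax(dim=1)
```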

We saw many new tools for building AI systems: tools designed to make building these systems easier, allowing subject experts to play a bigger role. Danielle Dean of Microsoft showed how they built a recommendation system for machine learning pipelines; it sampled the space of possible pipelines and made recommendations about which to try. This approach drastically reduced the “trial and error” loop that characterizes a lot of AI development.
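
Dean’s talk didn’t come with code, but the core idea of sampling candidate pipelines and ranking them by validation score can be sketched with scikit-learn. The search space and scoring below are assumptions for illustration, not Microsoft’s system.

```python
import random
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# A toy search space of preprocessing steps and models.
SCALERS = [StandardScaler(), MinMaxScaler()]
MODELS = [LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)]

def sample_and_rank(X, y, n_samples=10, seed=0):
    """Randomly sample candidate pipelines and rank them by cross-validated score."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        pipe = Pipeline([("scale", rng.choice(SCALERS)), ("model", rng.choice(MODELS))])
        score = cross_val_score(pipe, X, y, cv=3).mean()   # estimators are cloned per fold
        results.append((score, pipe))
    return sorted(results, key=lambda r: r[0], reverse=True)

# Hypothetical usage with any feature matrix X and labels y:
# best_score, best_pipe = sample_and_rank(X, y)[0]
```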

Stanford’s Chris Ré demonstrated Snorkel, an open source tool for automating the process of tagging training data. An AI system has three components: a model, training data, and hardware. Advanced hardware for building AI systems is getting faster and cheaper; it’s becoming a commodity. So are models: systems like Dean’s, or like Intel’s Nauta, simplify and democratize the task of building models. Training data is the one component that stubbornly resists commoditization. Acquiring and labeling data is labor intensive. Researchers have used low-cost labor from Mechanical Turk (or grad students) to label data, or gathered pre-labeled data from online sites like Flickr. But those approaches won’t work in industry. Can we use AI to eliminate most of the work of tagging and turn it into a relatively simple programming problem? It looks like we can; if Ré is right, Snorkel and tools like it are a big step toward AI 2.0.
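
Snorkel replaces hand labeling with programmatic labeling functions whose noisy votes are combined into probabilistic labels by a generative label model. Here is a minimal sketch against the open source Snorkel labeling API (roughly the 0.9 interface, which may differ by version); the spam heuristics and toy data are invented for illustration.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_link(x):
    # Crude heuristic: messages with URLs are often spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Very short messages tend to be legitimate replies.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

# A real project would have far more rows and many more labeling functions.
df_train = pd.DataFrame({"text": [
    "free money!! claim your prize at http://spam.example",
    "thanks, see you then",
    "limited offer, click http://scam.example now",
    "sounds good to me",
]})

applier = PandasLFApplier(lfs=[lf_contains_link, lf_short_message])
L_train = applier.apply(df=df_train)              # one noisy vote per function per row

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=100, seed=0)
probs = label_model.predict_proba(L=L_train)      # probabilistic training labels
```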

We saw many glimpses of the future. Olga Troyanskaya showed how deep learning is helping to decode the most difficult parts of the human genome: the regulatory logic that controls gene expression and, hence, cell differentiation. There are many diseases we know are genetic; we just don’t know which parts of the genome are responsible. We will only be able to diagnose and treat those diseases when we understand how the language of DNA works.
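
Models of the kind Troyanskaya described typically take a window of one-hot-encoded DNA and use convolutional layers to predict regulatory signals such as binding or expression. The toy PyTorch sketch below uses an architecture and sizes that are assumptions for illustration, not her lab’s actual model.

```python
import torch
import torch.nn as nn

class RegulatoryCNN(nn.Module):
    """Toy sequence-to-signal model: one-hot DNA in, regulatory scores out."""
    def __init__(self, seq_len=1000, n_targets=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=8),   # 4 input channels: A, C, G, T
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        conv_out = 64 * ((seq_len - 8 + 1) // 4)
        self.head = nn.Linear(conv_out, n_targets)

    def forward(self, x):                      # x: (batch, 4, seq_len)
        h = self.conv(x).flatten(1)
        return torch.sigmoid(self.head(h))     # per-target regulatory scores

model = RegulatoryCNN()
scores = model(torch.randn(2, 4, 1000))        # random stand-in for one-hot sequence
```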

Martial Hebert’s lab at CMU is taking AI into the human world by building systems that can reason about intent. If we want robots that assist people, they need to understand and predict human behavior in real time. Hebert demonstrated how an AI system can help a paralyzed person perform tasks that would otherwise be impossible, but only by reasoning about intent. Without that understanding, without knowing the goal was to pick something up or to open a door, the system could only twitch uselessly. All of this reasoning has to happen in hard real time: an autonomous vehicle needs to predict whether a person will stand on the curb or run into the street, and it needs to do so with enough time to apply the brakes.
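
In its simplest form, this kind of intent prediction can be framed as classifying a short observed track into likely next actions, fast enough to act on the answer. A hedged scikit-learn sketch follows; the feature layout, class definitions, and random training data are purely illustrative, not Hebert’s system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: 0 = stays on the curb, 1 = steps into the street.
# Features: the last few positions/velocities of a tracked pedestrian, flattened.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))        # stand-in for real tracking features
y_train = rng.integers(0, 2, size=200)

clf = LogisticRegression().fit(X_train, y_train)

def predict_intent(track_features):
    """Return P(steps into the street) for one tracked pedestrian.

    A model this small keeps inference comfortably inside a hard real-time
    budget; a real system would use calibrated features from a perception stack.
    """
    return float(clf.predict_proba(track_features.reshape(1, -1))[0, 1])

p_crossing = predict_intent(rng.normal(size=8))
```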

Any conference on AI needs to recognize the extraordinary messes and problems that automation can create. Sean Gourley of Primer talked about the arms race in disinformation. In the past year, we’ve gained the ability to create realistic images of fake people, and we’ve made tremendous progress in generating realistic language. We won’t be able to handle these growing threats without the assistance of AI. Andrew Zaldivar talked about work at Google Jigsaw that tries to detect online abuse and harassment. Kurt Muehmel from Dataiku talked about progress toward ethical artificial intelligence, a goal we will only reach if we build teams that are radically inclusive. The saying goes, “given enough eyeballs, all bugs are shallow”; but that’s only true if those eyes are different eyes, looking at problems in different ways. The solution isn’t just to build better technology; it’s to make sure the people most likely to be affected by the technology are included at every step of the process.
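
At their core, abuse-detection systems like the ones Zaldivar described are text classifiers trained on labeled comments. Here is a minimal bag-of-words sketch with scikit-learn; the tiny corpus and labels are invented, and production systems are trained on vastly more data with far richer models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = abusive, 0 = benign.
comments = [
    "you are an idiot",
    "great point, thanks for sharing",
    "nobody wants you here",
    "I respectfully disagree",
]
labels = [1, 0, 1, 0]

toxicity_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
toxicity_clf.fit(comments, labels)

# Estimated probability that a new comment is abusive.
print(toxicity_clf.predict_proba(["get lost, loser"])[0][1])
```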

The conference sessions covered everything from advanced AI techniques in reinforcement learning and natural language processing to business applications, to deploying AI applications at scale. AI is quickly moving beyond the hype: it’s becoming part of the everyday working world. But as Aleksander Madry said, we need to make AI human-ready. We need to get to AI 2.0. More than anything else, O’Reilly’s AI Conference was about making that leap.

Post topics: AI & ML
Post tags: O’Reilly Radar Analysis