
Heroes of Deep Learning: Top Takeaways for Aspiring Data Scientists from Andrew Ng’s Interview Series

Introduction

Andrew Ng is one of the most recognizable personalities in the modern deep learning world. His machine learning course is often cited as the starting point for anyone looking to understand the math behind the algorithms. But even the great Andrew Ng looks up to and takes inspiration from other experts.

In this in-depth video series, he interviews some of the most eminent personalities in the world of deep learning (eight heroes, to be precise). The interviews span the length and breadth of deep learning, covering topics like backpropagation, GANs, and transfer learning, and broader questions about artificial intelligence crop up in between as well. But don't worry if these terms sound overwhelming – we have listed the key takeaways from each interview just for you.


The “heroes” Andrew Ng has interviewed are:

  • Geoffrey Hinton
  • Ian Goodfellow
  • Yoshua Bengio
  • Pieter Abbeel
  • Yuanqing Lin
  • Andrej Karpathy
  • Ruslan Salakhutdinov
  • Yann LeCun

What a stellar cast of experts! Now it’s time to dive in and look at the top takeaways from each video.

 

Geoffrey Hinton

Geoffrey Hinton is best known for his work on artificial neural networks (ANNs). His contributions are one of the main reasons behind the success of deep learning, and he is often called the “Godfather of Deep Learning” (with good reason). His research on the backpropagation algorithm brought about a drastic change in how neural networks are trained and how well they perform.
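Since backpropagation comes up repeatedly in the interview, here is a minimal sketch of the idea – gradients flowing backwards through a tiny two-layer network via the chain rule. This is purely illustrative NumPy code (the data and architecture are made up), not anything from the video:

```python
import numpy as np

# Illustrative backpropagation on a tiny 2-layer network (toy data, not from the interview).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary target

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
lr = 0.5

for epoch in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)                           # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))               # sigmoid output = P(y = 1)
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: apply the chain rule layer by layer
    dlogits = (p - y) / len(X)                         # gradient of the loss w.r.t. output pre-activation
    dW2, db2 = h.T @ dlogits, dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dz1 = dh * (1 - h ** 2)                            # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.3f}")
```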

 

Key takeaways from the video

  • Read the literature behind deep learning algorithms, but don’t get lost in it. Focusing on understanding and implementing things, rather than dedicating most of your time to theory, will help you grasp concepts more clearly
  • Keep practicing programming – it will help you reach solutions at a far quicker pace
  • Mr. Hinton also mentions some of the key topics any deep learning enthusiast should work on:
    • Capsule networks
    • Unsupervised learning algorithms and approaches

 

Ian Goodfellow

Ian Goodfellow is a rockstar in the deep learning space and is currently working as a research scientist at Google Brain. He is best known for inventing generative adversarial networks (GANs). His book, Deep Learning, covers a broad range of topics, from the mathematical and conceptual background to deep learning techniques used in industry, and is a good starting point for any deep learning enthusiast. We strongly recommend reading it; it’s freely available online!
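To make the GAN idea concrete, here is a hedged, minimal sketch of the adversarial training loop in PyTorch: a generator tries to mimic a simple target distribution while a discriminator tries to tell real samples from generated ones. The network sizes and data are illustrative placeholders, not anything Goodfellow describes in the video:

```python
import torch
import torch.nn as nn

# Illustrative GAN sketch: generator mimics samples from N(3, 1), discriminator tells real from fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0          # "real" data: samples from N(3, 1)
    fake = G(torch.randn(64, 8))             # generated samples

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator, i.e. push D(fake) towards 1
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # should approach 3.0
```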

 

Key takeaways from the video

  • Linear algebra and probability are critical subjects to master for anyone who wants to become an expert in this field
  • Work on projects that you find interesting, and always open source your code on GitHub
  • Write blogs/papers about what you are learning in this field. This will help in solidifying your understanding and might even help other people out
  • Read books and, at the same time, apply what you learn to small projects

 

Yoshua Bengio

Yoshua Bengio is a computer scientist, well known for his work on artificial neural networks and deep learning. He is the co-founder of Element AI, a Montreal-based business incubator that seeks to turn AI research into real-world business applications.

 

Key takeaways from the video

  • The idea of drawing connections between human intelligence and computers has always fascinated Yoshua Bengio. As a result, he immersed himself in books and research papers, which helped him build an extremely strong base in this field
  • Back in 1985, when Mr. Bengio was still new to deep learning, the focus was more on running experiments and building intuition, while the theory came later (what we would call applied learning these days)
  • His advice – read a lot and practice as often as possible. Those two things are key to mastering a subject like deep learning. Simply using a piece of software without understanding how it works is not enough; the trick is to mix programming with the underlying mathematical concepts

 

Pieter Abbeel

Pieter Abbeel is the Director of the UC Berkeley Robot Learning Lab. His work in reinforcement learning is among the most widely cited in the field. He has previously worked in a senior role at OpenAI.

 

Key takeaways from the video

  • Professor Abbeel has always been interested in understanding how things work and in trying to build new things. It was only later in his career that he developed a keen interest in understanding how a machine can think
  • This is as good a time as any to get into AI. According to Mr. Abbeel, getting started in the field is not difficult because there are a number of resources available out there, most of them free of cost
  • Self-study and online courses are a decent way to get started. Along with that, try to implement what you learn, because only reading articles and watching videos will take you so far

 

Yuanqing Lin

Yuanqing Lin is the Director of the Institute of Deep Learning at Baidu. He has a background in mathematics and physics, and holds a Ph.D. in machine learning. A word of caution – the English in the video might be a little hard to follow, as it is not Mr. Lin’s first language.

 

Key takeaways from the video

  • Building algorithms from scratch and learning new things every day is something every data scientist should aspire to
  • At present, there is a really good community of researchers, a number of open source frameworks, and publicly available benchmarks out there. For the newbies, he suggests learning from open-source deep learning frameworks and resources

 

Andrej Karpathy

Andrej Karpathy is the Director of Artificial Intelligence and Autopilot Vision at Tesla. Like Pieter Abbeel, Andrej previously worked at OpenAI, but as a research scientist. He is widely considered and cited as a leading expert in the field of computer vision, especially image recognition (though, of course, his expertise covers many other areas of deep learning).

This is one of the most intriguing videos in the series!

 

Key takeaways from the video

  • Andrej discusses why he established a human benchmark, or baseline, for the ImageNet computer vision challenge. He felt the need to see how machine learning algorithms would fare against humans on such computer vision tasks. The key takeaway here is that there should always be a human benchmark for any task we are trying to solve with machine learning or deep learning
  • He is particularly proud of his course, CS231n (a deep dive into the details of deep learning architectures), through which he aims to share his knowledge of deep learning, and computer vision in particular, with emerging and aspiring data scientists
  • The emergence of pretrained networks and their applications across a variety of domains is something Andrej is excited about (see the short fine-tuning sketch after this list)
  • He believes that the field of AI will eventually split into two categories: Applied AI and Artificial General Intelligence (AGI)
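
As a rough illustration of the pretrained-network idea, here is a minimal fine-tuning sketch using a ResNet-18 from torchvision (recent versions). The dataset, class count, and dummy batch are placeholders for illustration only:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained backbone and train only a new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():          # freeze the pretrained layers
    param.requires_grad = False

num_classes = 5                           # hypothetical number of classes in your own task
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch (replace with a real DataLoader)
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss on dummy batch: {loss.item():.3f}")
```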

 

Ruslan Salakhutdinov

Ruslan Salakhutdinov is the Director of AI Research at Apple and is known as a co-developer of Bayesian Program Learning. His areas of specialization include probabilistic graphical models, large-scale optimization, and, of course, deep learning. Here’s a fun fact – his doctoral adviser was none other than Geoffrey Hinton!

Key takeaways from the interview

  • Boltzmann machines and deep Boltzmann machines carry a lot of untapped potential. Additionally, we haven’t really figured out how to make unsupervised, generative, and semi-supervised modeling work properly (a minimal Boltzmann machine sketch follows this list)
  • Do not be afraid to try new things in the field of deep learning
  • One should code each concept from scratch to truly understand and learn it
  • He also tackles the question of doing a PhD versus joining a company. He is slightly more in favor of a PhD, or research in academia, as it gives more freedom to work on a variety of problems (something the industry tends to throttle)
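
To give a flavor of the Boltzmann machine family mentioned above, here is a hedged sketch of a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1) on toy binary data. Everything here (data, sizes, learning rate) is a made-up illustration, not code from the interview:

```python
import numpy as np

# Restricted Boltzmann Machine trained with CD-1 on toy binary patterns (illustrative only).
rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

# Toy binary data: two repeating patterns
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(200):
    # Positive phase: hidden probabilities given the data
    h_prob = sigmoid(data @ W + b_h)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)

    # Negative phase: reconstruct visibles, then hidden probabilities again
    v_recon = sigmoid(h_sample @ W.T + b_v)
    h_recon = sigmoid(v_recon @ W + b_h)

    # CD-1 updates: difference between data-driven and model-driven statistics
    W   += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    b_v += lr * (data - v_recon).mean(axis=0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print("reconstruction error:", np.mean((data - recon) ** 2))
```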

 

Yann LeCun

Yann LeCun is widely regarded as the founding father of convolutional neural networks. He is currently the Chief AI Scientist and a VP at Facebook. He is a professor, researcher, and R&D manager with academic and industry experience in AI, machine learning, deep learning, computer vision, intelligent data analysis, data mining, data compression, digital library systems, and robotics. And that’s just scratching the surface of what this expert is capable of.

Key takeaways from the video

  • The landscape of working with neural networks has changed completely since the 1980s. Resources that simply did not exist then are in abundance nowadays
  • He also touches upon the interesting topic of corporate research. He believes more freedom should be given to researchers working in the corporate sector, and that all data science and deep learning practitioners should take time to do research and give back to the community
  • Given the easy availability of open-source tools like TensorFlow, PyTorch, and Keras, people should start working and experimenting with deep learning rather than getting mired in theoretical concepts (a minimal example follows this list)
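
To give a rough sense of how low the barrier to entry is, here is a minimal Keras sketch that defines and trains a small network in a handful of lines. The data is random and purely illustrative:

```python
import numpy as np
from tensorflow import keras

# Toy dataset: random features with a simple binary label.
X = np.random.rand(500, 10)
y = (X.sum(axis=1) > 5).astype(int)

# A small fully connected network, defined and trained in a few lines.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training data
```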

 

End Notes

This is easily one of the most fascinating interview series on YouTube concerning deep learning. There is SO MUCH to learn from each of these eight heroes. If you haven’t seen these videos before, we’re glad you stopped by, because this will feel like hitting the jackpot.

Andrew Ng is a wonderful interviewer, and watching him converse with other experts is a treat. Grab your pen and notebook, because there’s a whole host of things for you to learn.

avcontentteam

07 May 2019
