The Best Machine Learning Research of Summer 2019

Academic institutions, AI labs, and research departments of other organizations are constantly generating novel insights into data science, whether it’s machine learning, deep learning, NLP, or other disciplines. Summer 2019 generated some interesting machine learning research, and here are a few of our top picks.

[Related Article: The Best Machine Learning Research of June 2019]

Machine Learning and Behavioral Economics for Personalized Choice Architecture

Emir Hrnjic (NUS Business School) and Nikodem Tomczak (also of the National University of Singapore) recently built a machine learning system informed by behavioral economics theory to create better individualized “nudges.” Within the field of personalized choice architecture, this technology could change the way consumers are reminded about products and procedures. The paper outlines applications to industries ranging from e-commerce to personal health and more.

TensorDIMM: A Practical Near-Memory Processing Architecture for Embeddings and Tensor Operations in Deep Learning

Youngeun Kwon, Yunjae Lee, and Minsoo Rhu (all of the KAIST School of Electrical Engineering) discuss “the memory capacity and bandwidth challenges of embedding layers and the associated tensor operations” in their recent paper. They also present a new hardware/software co-design centered on a DIMM module built specifically for deep learning tensor operations.
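To see why embedding layers are memory-bound rather than compute-bound, consider the gather-and-reduce access pattern that near-memory designs like TensorDIMM target. The NumPy sketch below is purely illustrative (the table size and batch shape are made-up numbers, and this is not the paper's hardware design):

```python
# Illustrative sketch: the embedding "gather-reduce" pattern that
# near-memory hardware targets. Each lookup touches scattered rows of a
# large table, so throughput is bound by DRAM bandwidth, not FLOPs.
import numpy as np

NUM_ROWS, DIM = 1_000_000, 64  # scaled down; production tables reach tens of GB
rng = np.random.default_rng(0)
table = rng.random((NUM_ROWS, DIM), dtype=np.float32)

def embedding_bag(table, indices):
    """Gather sparse feature rows and reduce them to one dense vector."""
    return table[indices].sum(axis=0)  # random-access gather + cheap reduce

# A single recommendation query may hit dozens of scattered rows.
ids = rng.integers(0, NUM_ROWS, size=40)
pooled = embedding_bag(table, ids)     # almost no compute, lots of memory traffic
```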

Bayesian Volumetric Autoregressive Generative Models for Better Semisupervised Learning

This research paper by Guilherme Pombo, Robert Gray, John Ashburner, Parashkev Nachev, and Tom Varsavsky (all from the Institute of Neurology at UCL; Varsavsky is also with the School of Biomedical Engineering and Imaging Sciences at King's College London) challenges many of the problems seen in current deep generative models. It does this in two ways: first, the authors “extend PixelCNN to work with volumetric brain magnetic resonance imaging data.” Second, they “show that reformulating this model to approximate a deep Gaussian process yields a measure of uncertainty that improves the performance of semi-supervised learning, in particular classification performance in settings where the proportion of labelled data is low.”
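The deep Gaussian process reformulation they describe is in the spirit of Monte Carlo dropout, where the spread across repeated stochastic forward passes serves as an uncertainty estimate. A minimal sketch, assuming PyTorch; the toy classifier below merely stands in for the paper's volumetric PixelCNN:

```python
# Minimal sketch, assuming PyTorch: keeping dropout active at test time
# approximates a deep Gaussian process, so the spread across repeated
# stochastic forward passes acts as a per-input uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                       # keep dropout stochastic at inference
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and spread

x = torch.randn(1, 128)                 # stand-in for a flattened image patch
mean, uncertainty = mc_dropout_predict(model, x)
```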

An Approximate Bayesian Approach to Surprise-Based Learning

This paper by Vasiliki Liakoni, Alireza Modirshanechi, Wulfram Gerstner, and Johanni Brea (all of EPFL in Lausanne, Switzerland) suggests a Bayesian approach to surprise-based learning that “allows agents to adapt quickly in non-stationary stochastic environments.” They support their claims with empirical results and theoretical insights, and point to potential uses in reinforcement learning in “non-stationary environments and in the analysis of animal and human behavior.”
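The core intuition, that surprising observations should speed up forgetting of old beliefs, can be sketched with a surprise-modulated learning rate. The toy update below only illustrates that trade-off and is not the authors' exact algorithm; the constants and the modulation formula are assumptions:

```python
# Illustrative sketch (not the authors' exact update rule): a running mean
# estimate whose learning rate is boosted by "surprise," so the agent
# discounts old data quickly after the environment changes.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # current belief: observations ~ N(mu, sigma^2)
base_lr, m = 0.05, 5.0      # assumed baseline rate and surprise sensitivity

def surprise(x, mu, sigma):
    """Negative log-likelihood (up to a constant) under the current belief."""
    return 0.5 * ((x - mu) / sigma) ** 2

# Non-stationary environment: the true mean jumps from 0 to 5 halfway through.
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
for x in data:
    s = surprise(x, mu, sigma)
    lr = base_lr + (1 - base_lr) * s / (s + m)  # surprising -> lr approaches 1
    mu += lr * (x - mu)     # high surprise yields fast adaptation to the jump
```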

Making AI Forget You: Data Deletion in Machine Learning

With all the conversations surrounding the accessibility (and misuse) of personal data, this paper by Antonio Ginart, Melody Guan, Gregory Valiant, and James Zou (all of Stanford University) offers a way for machine learning systems to efficiently delete an individual's data, should they be required to. Without such methods, the model would have to be completely retrained from scratch to remove someone's data, rather than simply deleting the relevant records.
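The paper's concrete algorithms are deletion-efficient variants of k-means; as a simpler illustration of the general design principle (not the authors' method), the sketch below shards the training data so that deleting one record only retrains the single sub-model that saw it:

```python
# Illustrative sketch (not the paper's k-means algorithms): shard the
# training data so deleting one record only retrains the sub-model that
# actually saw it, instead of retraining everything from scratch.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedModel:
    def __init__(self, X, y, n_shards=10):
        self.X, self.y = X, y
        # Fixed round-robin assignment: record i goes to shard i % n_shards.
        self.shards = [list(range(i, len(X), n_shards)) for i in range(n_shards)]
        self.models = [self._fit(idx) for idx in self.shards]

    def _fit(self, idx):
        return LogisticRegression().fit(self.X[idx], self.y[idx])

    def delete(self, record_id):
        """Remove one record, then retrain only its shard (1/n of the work)."""
        for s, idx in enumerate(self.shards):
            if record_id in idx:
                idx.remove(record_id)
                self.models[s] = self._fit(idx)
                return

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        return np.round(votes.mean(axis=0))   # simple majority vote

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)                 # toy labels
model = ShardedModel(X, y)
model.delete(42)                              # retrains one shard, not all ten
```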

Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions

In a recent paper, Yao Qin, Garrison Cottrell (both of UC San Diego), Nicholas Frosst, Sara Sabour, Colin Raffel, and Geoffrey Hinton (all of Google Brain) examine how adversarial examples make us wonder whether neural networks rely on the same visual features as people. They explain how most attempts to prevent these attacks have been defeated, and instead propose to “detect adversarial examples based on class-conditional reconstructions of the input.” They use Capsule Networks in this paper, but designed the approach to work with standard ConvNets as well.
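The detection rule itself is simple to sketch: reconstruct the input conditioned on the predicted class, then flag inputs whose reconstruction error is unusually large. The `classify` and `reconstruct` methods below are hypothetical stand-ins, not the paper's released code:

```python
# Minimal sketch of the detection rule, assuming a hypothetical model API
# (`classify` and `reconstruct` are stand-ins, not the paper's code):
# decode the input from the winning class's capsule and flag it when the
# reconstruction error is unusually large.
import torch

def detect_adversarial(model, x, threshold):
    """Return (predicted class, adversarial flag) per example."""
    logits = model.classify(x)
    cls = logits.argmax(dim=-1)
    x_hat = model.reconstruct(x, cls)             # class-conditional decode
    err = ((x - x_hat) ** 2).flatten(1).mean(1)   # per-example squared error
    return cls, err > threshold                   # clean inputs reconstruct well

# `threshold` would be calibrated on clean validation data, e.g. the 95th
# percentile of reconstruction errors over correctly classified examples.
```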

[Related Article: What Are a Few AI Research Labs on the West Coast?]
