The Best Machine Learning Research of September 2019

Every month brings its own wave of exciting research, and September was as busy a month as ever for developments in machine learning. To help sort through everything, we've compiled our five favorite machine learning research papers of the month; check them out below.

[Related Article: The Best Open Source Research at DeepMind in 2019 So Far]

Better AI through Logical Scaffolding

This paper, by Nikos Arechiga, Jonathan DeCastro, Soonho Kong (of the Toyota Research Institute in Los Altos, CA and Cambridge, MA), and Karen Leung (of Stanford University), explains the purpose and importance of what they call "logical scaffolds," which are used to improve software that relies on AI. The authors examine existing runtime monitors for perception systems and show how logical scaffolds may be useful for improving AI programs even beyond those perception systems. This line of work could help push AI systems past what they have been able to do on their own so far.
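
To make the idea of runtime monitoring concrete, here is a minimal, hypothetical sketch of a monitor that checks a simple consistency property over a perception system's outputs. The `Detection` structure, the property being checked, and the tolerance are invented for illustration; they are not the logical scaffolds proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single perception output for one frame (hypothetical structure)."""
    track_id: int
    distance_m: float   # estimated distance to the detected object
    speed_mps: float    # estimated relative speed (negative = closing)

def monitor_consistency(prev: Detection, curr: Detection, dt: float,
                        tolerance_m: float = 2.0) -> bool:
    """Check a simple temporal-consistency property: the change in estimated
    distance between frames should roughly agree with the estimated relative
    speed. A violation flags a frame where the perception output is
    internally inconsistent."""
    expected = prev.distance_m + prev.speed_mps * dt
    return abs(curr.distance_m - expected) <= tolerance_m

# Example: a track whose distance jumps far more than its speed explains.
prev = Detection(track_id=7, distance_m=30.0, speed_mps=-2.0)
curr = Detection(track_id=7, distance_m=12.0, speed_mps=-2.0)
if not monitor_consistency(prev, curr, dt=0.1):
    print("Monitor alarm: perception output violates the consistency property")
```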

Clustering Uncertain Data via Representative Possible Worlds with Consistency Learning

A recent paper by Xianchao Zhang (of Dalian University of Technology, China), Han Liu, Xiaotong Zhang, Qimai Li, and Xiao-Ming Wu (all from the Hong Kong Polytechnic University) discusses the options available for clustering uncertain data, which arises from sources such as randomness in data generation or collection, or even privacy concerns. Possible world-based algorithms are the usual choice, but the team argues that they fall short because they treat all possible worlds equally, even though some worlds can have a negative effect on the result, and because they do not make good use of the consistency that exists among possible worlds. The team introduces a representative possible world-based consistent clustering algorithm for this type of uncertain data, with results that outperform other state-of-the-art algorithms.
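
For readers new to the area, here is a rough sketch of the generic possible world-based pipeline the paper improves upon: sample several possible worlds from the uncertain objects, cluster each world, and combine the results through a co-association step. The Gaussian uncertainty model and the simple consensus step are assumptions for illustration; the paper's representative-world selection and consistency learning go well beyond this.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical uncertain dataset: each object is a Gaussian (mean, std).
means = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
stds = np.full_like(means, 0.3)
n, k, n_worlds = len(means), 2, 10

# Sample possible worlds and cluster each one independently.
coassoc = np.zeros((n, n))
for _ in range(n_worlds):
    world = rng.normal(means, stds)                  # one possible realization
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(world)
    coassoc += (labels[:, None] == labels[None, :])  # count co-clustered pairs

# Consensus step: cluster objects by how consistently they co-occur across
# worlds (a crude stand-in for the paper's consistency learning).
final = KMeans(n_clusters=k, n_init=10).fit_predict(coassoc / n_worlds)
print(final)
```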

Alleviating Privacy Attacks via Causal Learning

This paper, by a team at Microsoft Research consisting of Shruti Tople, Amit Sharma, and Aditya Nori, proposes causal learning, in which "a model learns the causal relationship between the input features and the outcome," as a way to reduce the potential for privacy breaches. This can be particularly important for machine learning applications in healthcare and other settings where the data is highly sensitive.
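
As a loose illustration of why causal features can help, the sketch below trains one model on all features (including a spuriously correlated one) and one on the assumed causal feature only, then compares their train/test accuracy gaps, a common proxy for vulnerability to membership inference. The synthetic data and the gap-as-proxy measurement are assumptions made for illustration, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, spurious_strength):
    """Outcome depends only on the causal feature; the spurious feature is
    correlated with the outcome to a degree that differs between splits."""
    causal = rng.normal(size=(n, 1))
    y = (causal[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    spurious = spurious_strength * (2 * y[:, None] - 1) + rng.normal(size=(n, 1))
    return np.hstack([causal, spurious]), y

X_tr, y_tr = make_data(500, spurious_strength=1.5)   # strong spurious correlation
X_te, y_te = make_data(500, spurious_strength=0.0)   # correlation breaks at test time

for name, cols in [("all features (associational)", [0, 1]),
                   ("causal feature only", [0])]:
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    gap = clf.score(X_tr[:, cols], y_tr) - clf.score(X_te[:, cols], y_te)
    print(f"{name}: train/test accuracy gap = {gap:.3f}")
```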

MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis

In this paper, Margherita Rosnati and Vincent Fortuin (both of the Department of Computer Science, ETH Zürich, Switzerland) contribute to the research effort of detecting sepsis automatically using an open-access medical data set. They propose "MGP-AttTCN: a joint multitask Gaussian Process and attention-based deep learning model" to predict the occurrence of sepsis early. With all the discussion surrounding the black box problem, they have also made their model interpretable, even as it outperforms state-of-the-art models.
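
The full MGP-AttTCN model is well beyond a snippet, but its first stage, smoothing irregularly sampled clinical measurements onto a regular time grid with a Gaussian process, can be sketched roughly as follows. The kernel, noise level, and fake heart-rate readings are assumptions for illustration only.

```python
import numpy as np

def rbf(a, b, lengthscale=2.0):
    """Squared-exponential kernel between two sets of time points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Hypothetical irregularly sampled heart-rate measurements (time in hours).
t_obs = np.array([0.3, 1.1, 2.7, 6.0, 6.4, 9.5])
y_obs = np.array([82., 88., 95., 110., 108., 97.])
noise = 1.0

# GP posterior mean on a regular hourly grid: the kind of evenly spaced
# latent representation a downstream attention/TCN model could consume.
t_grid = np.arange(0.0, 10.0, 1.0)
K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
mean = rbf(t_grid, t_obs) @ np.linalg.solve(K, y_obs - y_obs.mean()) + y_obs.mean()
print(np.round(mean, 1))
```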

Learning in Confusion: Batch Active Learning with Noisy Oracle

The team of Gaurav Gupta (at USC), Anit Kumar Sahu, and Wan-Yi Lin (both at the Bosch Center for AI) recently released this paper, which discusses options for addressing the issues that arise in active learning when the oracle and the data are noisy. Their main contribution is a batch active learning approach, with specific requirements on how each batch is selected, designed to avoid training new models off of faulty earlier ones. The requirements are to "(i) select important samples from the available pool for the current model, and (ii) select a diverse batch to avoid repetitive samples," and they show that this method outperforms other active learning options.
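
The two quoted requirements translate naturally into a two-step batch selection: score pool samples by uncertainty, then pick a diverse subset of the most uncertain ones. The sketch below uses predictive entropy for importance and k-means for diversity; it illustrates the general recipe rather than the paper's exact algorithm or its treatment of the noisy oracle.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
labeled = rng.choice(len(X), size=30, replace=False)
pool = np.setdiff1d(np.arange(len(X)), labeled)

# Current model trained on the small labeled set.
model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

# (i) Importance: predictive entropy on the unlabeled pool.
probs = model.predict_proba(X[pool])
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
candidates = pool[np.argsort(entropy)[-100:]]   # most uncertain candidates

# (ii) Diversity: cluster the candidates and take the sample closest to each
# cluster center, yielding a non-repetitive batch to send to the oracle.
batch_size = 10
km = KMeans(n_clusters=batch_size, n_init=10).fit(X[candidates])
batch = []
for c in range(batch_size):
    members = candidates[km.labels_ == c]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    batch.append(members[np.argmin(dists)])
print("Query these indices from the (possibly noisy) oracle:", batch)
```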

Conclusion on Machine Learning Research

[Related Article: The Best Machine Learning Research of Summer 2019]

As we can see, there are some really exciting things happening in machine learning research, all of which push what we currently define as state-of-the-art. From better privacy protections to better training methods, it's only a matter of time before we see the developments that build on this work.
