
The Best Machine Learning Research of June 2019

Machine learning and the data science industry are always changing. To keep you updated on the most recent discoveries, we've compiled the 5 most exciting machine learning research pieces that expand what we thought we knew about machine learning and the industries to which it relates.

[Related Article: The Best Machine Learning Research of 2019 So Far]

Transfer of Machine Learning Fairness across Domains

Fairness in machine learning has been a major topic of discussion since the technology's early days, and a paper by Candice Schumann, Xuezhi Wang, Alex Beutel, Jilin Chen, Hai Qian, and Ed H. Chi offers theoretical models for ensuring fairness when a single machine learning model is applied across different domains. They frame the issue as a set of "domain adaptation problems: how can we use what we have learned in a source domain to debias in a new target domain, without directly debiasing on the target domain as if it is a completely new problem?" The paper also offers "a modeling approach to transfer to data-sparse target domains… [and] empirical results validating the theory and showing that these modeling approaches can improve fairness metrics with less data."
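
To make the setting a little more concrete, here is a minimal sketch (our own illustration, not the authors' method) of the kind of group fairness metric one might track when moving a model from a data-rich source domain to a data-sparse target domain:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) from a classifier
    group:  binary group-membership indicator (0/1)
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: compare the gap on a large source domain vs. a small target domain.
rng = np.random.default_rng(0)
source_pred, source_group = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
target_pred, target_group = rng.integers(0, 2, 100), rng.integers(0, 2, 100)

print("source gap:", demographic_parity_gap(source_pred, source_group))
print("target gap:", demographic_parity_gap(target_pred, target_group))
```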

Calibrated Model-Based Deep Reinforcement Learning

In a recent paper, Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, and Stefano Ermon explore "which uncertainties are needed for model-based reinforcement learning" and argue that "good uncertainties must be calibrated." Their research suggests that "calibration can improve the performance of model-based reinforcement learning with minimal computational and implementation overhead."
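
As a rough illustration of what calibration means here (a toy sketch, not the authors' algorithm), a probabilistic model is calibrated when its nominal confidence intervals match their empirical coverage:

```python
import numpy as np
from scipy.stats import norm

def empirical_coverage(mu, sigma, y_true, level):
    """Fraction of observations falling inside the central `level` interval
    of a Gaussian predictive distribution N(mu, sigma^2)."""
    z = norm.ppf(0.5 + level / 2)            # interval half-width in std deviations
    inside = np.abs(y_true - mu) <= z * sigma
    return inside.mean()

# A calibrated model's empirical coverage matches the nominal level.
rng = np.random.default_rng(0)
mu = rng.normal(size=5000)
sigma = np.full(5000, 1.0)
y_true = mu + rng.normal(scale=1.5, size=5000)   # this toy model is overconfident

for level in (0.5, 0.8, 0.95):
    print(f"nominal {level:.2f} -> empirical {empirical_coverage(mu, sigma, y_true, level):.2f}")
```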

Software Engineering Practices for Machine Learning

In recent years, the possibilities for machine learning have grown dramatically; in this paper, Peter Kriens and Tim Verbelen explore some of those possibilities in software engineering. They offer an "overview of current techniques to manage complex software, and how this applies to [machine learning] models."

A Review on Deep Learning in Medical Image Reconstruction

Medical imaging is an important component of clinics all over the world, and medical image reconstruction is one of the most important steps in the imaging process. In a paper surveying the developments of deep learning in medical image reconstruction, as well as its open challenges, Haimiao Zhang and Bin Dong offer the "unrolling dynamics viewpoint."
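
As a loose illustration of the unrolling idea (a toy sketch in which hand-crafted soft-thresholding stands in for a learned prior, not the authors' formulation), an iterative reconstruction algorithm can be "unrolled" into a fixed number of steps, each of which would become a trainable layer in a learned reconstruction network:

```python
import numpy as np

def unrolled_reconstruction(y, A, n_iters=10, step=0.1, threshold=0.05):
    """Reconstruct x from measurements y = A x + noise by unrolling a fixed
    number of gradient steps on the data-fidelity term ||A x - y||^2,
    interleaved with a soft-thresholding step standing in for a learned prior."""
    x = A.T @ y                                    # simple initialization
    for _ in range(n_iters):                       # each iteration = one "layer"
        grad = A.T @ (A @ x - y)                   # gradient of the data-fidelity term
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)  # denoising/prior step
    return x

# Tiny synthetic example: recover a sparse signal from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[rng.choice(80, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = unrolled_reconstruction(y, A)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```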

A Theoretical Connection Between Statistical Physics and Reinforcement Learning

In a recent paper, Jad Rahme and Ryan P. Adams observe that "sequential decision making in the presence of uncertainty and stochastic dynamics gives rise to distributions over state/action trajectories in reinforcement learning (RL) and optimal control problems." Building on this, they "explore a different dimension to this relationship, examining reinforcement learning using the tools and abstractions of statistical physics."
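
One familiar bridge between the two fields, shown here only as a hedged illustration rather than the paper's specific construction, is the maximum-entropy view of RL: the soft value function is a log-partition function (a free energy), and the corresponding policy is a Boltzmann distribution over actions.

```python
import numpy as np

def soft_value(q_values, beta):
    """Soft (maximum-entropy) state value: (1/beta) * log sum_a exp(beta * Q(s, a)).
    Up to sign, this is the free energy of a system with energies -Q(s, a)
    at inverse temperature beta."""
    q = np.asarray(q_values, dtype=float)
    m = q.max()                                   # stabilize the log-sum-exp
    return m + np.log(np.exp(beta * (q - m)).sum()) / beta

def boltzmann_policy(q_values, beta):
    """Softmax (Boltzmann) distribution over actions at inverse temperature beta."""
    q = np.asarray(q_values, dtype=float)
    w = np.exp(beta * (q - q.max()))
    return w / w.sum()

# As beta grows, the soft value approaches max_a Q(s, a) and the policy becomes greedy.
q = [1.0, 2.0, 0.5]
for beta in (0.1, 1.0, 10.0):
    print(beta, soft_value(q, beta), boltzmann_policy(q, beta))
```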

[Related Article: The Most Exciting Natural Language Processing Research of 2019 So Far]

Conclusion

Machine learning research, and the researchers who contribute to it, are always pushing the boundaries of where machine learning can go next. From improved fairness to medical imagery, there will always be a next step. What do you think it will be?
