In this work, we show the need for hard negative examples and provide a method to generate them. We propose to augment the negative sampler in NCE with an adversarially learned adaptive sampler that finds harder negative examples.
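To make the contrast concrete, here is a minimal sketch of the sampling side of this idea: a baseline uniform NCE negative sampler next to an adaptive one that draws negatives in proportion to the current model's scores, so plausible-looking (hard) negatives are drawn more often. This is a hypothetical simplification for illustration, not the paper's adversarial formulation, in which the sampler is itself a learned network trained against the main model.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_negatives(vocab_size, k):
    """Baseline NCE negative sampler: uniform over the vocabulary."""
    return rng.integers(0, vocab_size, size=k)

def adaptive_negatives(scores, k, temperature=1.0):
    """Adaptive 'hard' negative sampler (illustrative simplification).

    Samples negatives in proportion to the current model's scores, so
    candidates the model currently finds plausible -- the hard
    negatives -- are over-represented relative to uniform sampling.

    scores: (vocab_size,) unnormalized scores from the current model.
    """
    logits = scores / temperature
    logits = logits - logits.max()        # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), size=k, replace=True, p=probs)
```

In the adversarial version described above, the sampler's distribution is not a fixed softmax over model scores but is learned to maximize the main model's loss; the snippet only shows why score-proportional sampling concentrates on hard negatives.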
We took Prof. Balaraman Ravindran, professor of Reinforcement Learning at the Indian Institute of Technology Madras and head of the school's Robert Bosch Centre for Data Science and Artificial Intelligence, to a noisy Edmonton arcade. In between Pac-Man battles, we discussed why games are a crucial way to learn how to solve much bigger, more complex problems, the pros and cons of scalar rewards, and the natural link between RL and deep learning.
A Binarized Representation Entropy (BRE) regularizer to diversify learning signals in Generative Adversarial Networks
Popular methods for stabilizing GAN training, such as gradient penalty and spectral normalization, essentially control the magnitude of the learning signal for G. Our ICLR 2018 paper proposes a complementary approach that encourages diversity in the learning signal for G.
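The flavour of the regularizer can be sketched as follows. This is a minimal NumPy illustration in the spirit of BRE, not the paper's exact formulation: it binarizes an intermediate layer of D with a soft sign and penalizes (a) units that are always on or always off across the batch and (b) pairs of samples whose sign patterns coincide, so that different inputs receive diverse learning signals.

```python
import numpy as np

def bre_penalty(h):
    """Sketch of a binarized-representation-entropy-style penalty.

    h: (batch, d) pre-activations from an intermediate layer of D.
    Uses a soft sign s = h / (|h| + eps) as a differentiable surrogate
    for the binary activation pattern. Two terms, in the spirit of the
    BRE regularizer (simplified for illustration):
      - marginal term: each unit should fire for roughly half the
        batch, so the batch-mean of s should be near 0;
      - correlation term: different samples should have different
        sign patterns, so pairwise inner products of the sign
        vectors should be near 0.
    """
    eps = 1e-8
    s = h / (np.abs(h) + eps)                       # soft sign, in (-1, 1)
    batch, d = s.shape
    marginal = np.mean(np.mean(s, axis=0) ** 2)     # per-unit balance
    gram = s @ s.T / d                              # pairwise sign overlap
    off_diag = gram - np.diag(np.diag(gram))
    correlation = np.sum(np.abs(off_diag)) / (batch * (batch - 1))
    return marginal + correlation
```

A batch whose activation patterns collapse to a single sign vector maximizes both terms, while a batch of mutually orthogonal, balanced sign patterns drives the penalty toward zero, which is the diversity the regularizer rewards.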
Prof. White details some of her recent work in representation learning, the reason why reinforcement learning often gets (erroneously) lumped in with robots, and the value of learning representations incrementally.
Our research team attended the ICLR conference in Vancouver with a paper on Improving GAN Training via Binarized Representation Entropy (BRE) Regularization. Of course, while we were there we also took advantage of some of the interesting talks happening at the event. Here are a few trends and themes that stood out to us.