A Binarized Representation Entropy (BRE) regularizer to diversify learning signals in Generative Adversarial Networks
Popular methods for stabilizing GAN training, such as gradient penalty and spectral normalization, essentially control the magnitude of the learning signal for G. Our ICLR 2018 paper proposes a complementary approach: encouraging diversity in the learning signal for G.
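To make the contrast concrete: the BRE regularizer encourages distinct inputs to produce distinct sign patterns in the discriminator's hidden activations. The sketch below is a rough illustration of that idea only, not the paper's implementation; the function name `bre_penalty`, the soft-sign smoothing constant `eps`, and the exact weighting of the two terms are all assumptions made for this example.

```python
import numpy as np

def bre_penalty(h, eps=1e-3):
    """Illustrative BRE-style penalty on a batch of discriminator
    hidden activations h of shape (N, d).

    Soft-binarize each activation to roughly +/-1, then penalize
    (a) units whose sign is the same for most of the batch, and
    (b) high average pairwise correlation between samples' sign
        patterns (i.e. samples mapping to near-identical codes).
    """
    s = h / (np.abs(h) + eps)      # soft sign, values in (-1, 1)
    N, d = s.shape
    # Term (a): each unit should be +1 for about half the batch.
    me = np.mean(np.mean(s, axis=0) ** 2)
    # Term (b): distinct samples should get near-orthogonal codes.
    c = s @ s.T / d                # (N, N) pairwise correlations
    off_diag = c[~np.eye(N, dtype=bool)]
    ac = np.mean(np.abs(off_diag))
    return me + ac
```

A batch of diverse activations yields a small penalty, while a batch of near-identical activations yields a large one, which is the property that keeps the discriminator's learning signal from collapsing to a single direction.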
Prof. White details some of her recent work in representation learning, why reinforcement learning often gets (erroneously) lumped in with robotics, and the value of learning representations incrementally.
Our research team attended the ICLR conference in Vancouver with a paper on Improving GAN Training via Binarized Representation Entropy (BRE) Regularization. Of course, while we were there we also took advantage of some of the interesting talks happening at the event. Here are a few trends and themes that stood out to us.