In this work, we show the need for hard negative examples and provide a method to generate them. We propose to augment the negative sampler in noise-contrastive estimation (NCE) with an adversarially learned adaptive sampler that finds harder negative examples.
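The core idea can be illustrated with a toy sketch: instead of drawing NCE negatives from a fixed noise distribution, an adaptive categorical sampler is nudged (REINFORCE-style) toward negatives the model currently scores high, i.e. "hard" negatives. This is a minimal illustration under assumed toy choices (dot-product scores, a vocabulary-sized sampler, synthetic positive pairs), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names and sizes are illustrative):
# score(x, y) = dot product of an input embedding and a context embedding.
V, D = 50, 8                                 # vocabulary size, embedding dim
E = rng.normal(scale=0.1, size=(V, D))       # input embeddings
C = rng.normal(scale=0.1, size=(V, D))       # context embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Adaptive sampler: a categorical distribution over the vocabulary,
# adversarially nudged toward negatives the model scores high,
# instead of a fixed unigram/uniform noise distribution.
logits_neg = np.zeros(V)

def sample_negative():
    p = np.exp(logits_neg - logits_neg.max())
    p /= p.sum()
    return rng.choice(V, p=p), p

lr, lr_adv = 0.5, 0.1
pairs = [(i, (i + 1) % V) for i in range(V)]  # synthetic positive pairs

for epoch in range(200):
    for x, y_pos in pairs:
        y_neg, p = sample_negative()
        if y_neg == y_pos:                   # skip accidental true pairs
            continue
        s_pos = E[x] @ C[y_pos]
        s_neg = E[x] @ C[y_neg]
        # NCE-style logistic updates: push positives up, negatives down.
        g_pos = 1.0 - sigmoid(s_pos)
        g_neg = -sigmoid(s_neg)
        Ex = E[x].copy()
        E[x] += lr * (g_pos * C[y_pos] + g_neg * C[y_neg])
        C[y_pos] += lr * g_pos * Ex
        C[y_neg] += lr * g_neg * Ex
        # Adversarial update (REINFORCE-style): reward the sampler for
        # proposing negatives the model failed to push down.
        reward = sigmoid(s_neg)              # high if the negative scored high
        grad = -p                            # d log p(y_neg) / d logits
        grad[y_neg] += 1.0
        logits_neg += lr_adv * reward * grad

# After training, true pairs should outscore random negatives on average.
correct = sum(E[x] @ C[y] > E[x] @ C[rng.integers(V)] for x, y in pairs)
print(correct, "of", len(pairs), "positive pairs beat a random negative")
```

As training progresses, the sampler concentrates its mass on the negatives the model handles worst, so the gradient signal stays informative instead of being spent on easy negatives.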
A Binarized Representation Entropy (BRE) regularizer to diversify learning signals in Generative Adversarial Networks
Popular methods for stabilizing GAN training, such as gradient penalty and spectral normalization, essentially control the magnitude of the learning signal the generator G receives. Our ICLR 2018 paper proposes a complementary approach: encouraging diversity in the learning signal for G.
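The flavour of a BRE-style penalty can be sketched as follows. This is an illustrative sketch only (the exact functional form, the soft-sign relaxation, and the function name `bre_regularizer` are assumptions here, not the paper's precise formulation): binarize the discriminator's hidden activations, then penalize both units that are always on or always off and pairs of inputs whose binary codes are too similar, so different inputs yield distinct learning signals.

```python
import numpy as np

def bre_regularizer(H, eps=1e-3):
    """Illustrative BRE-style penalty on a batch of discriminator hidden
    activations H with shape (batch, d). Two terms, conveying the idea:
      - marginal term: each unit should be +1 about as often as -1,
      - pairwise term: binarized codes of different inputs should be
        nearly orthogonal, so each input gets a distinct signal.
    """
    n, d = H.shape
    S = H / (np.abs(H) + eps)            # soft sign, values in (-1, 1)
    marginal = np.mean(S.mean(axis=0) ** 2)
    Csim = (S @ S.T) / d                 # pairwise cosine-like similarities
    off_diag = Csim[~np.eye(n, dtype=bool)]
    pairwise = np.mean(np.abs(off_diag))
    return marginal + pairwise

# A batch of identical activations (all codes equal) is penalized far more
# heavily than a batch of random, diverse activations.
rng = np.random.default_rng(0)
r_same = bre_regularizer(np.ones((4, 8)))
r_rand = bre_regularizer(rng.normal(size=(4, 8)))
print(r_same, ">", r_rand)
```

In practice such a penalty would be added, with a small weight, to the discriminator loss during GAN training; the soft-sign relaxation keeps it differentiable.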
Our research team attended the ICLR conference in Vancouver with a paper on Improving GAN Training via Binarized Representation Entropy (BRE) Regularization. While we were there, we also took in some of the interesting talks at the event. Here are a few trends and themes that stood out to us.
Building off its recent investments in the Canadian artificial intelligence (AI) ecosystem, Borealis AI today announced it is expanding its network of labs across Canada into Vancouver. The new research centre will focus on computer vision, a subfield of machine learning that trains computers to see, process and understand the visual world. It is expected to open in the fall of this year.
Canada’s “Big Three” AI cities (Toronto, Montreal, Edmonton) have dominated headlines for the past year. But with the country's reputation for academic excellence in machine learning research, it was only a matter of time before new technical hubs emerged.