Blog | February 21, 2018

A Binarized Representation Entropy (BRE) regularizer to diversify learning signals in Generative Adversarial Networks

Popular methods for stabilizing GAN training, such as the gradient penalty and spectral normalization, essentially control the magnitude of the learning signal that the discriminator passes to the generator G. Our ICLR 2018 paper proposes a complementary approach that instead encourages diversity in the learning signal for G.
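
To make the idea concrete, here is a minimal, illustrative sketch (in PyTorch) of a BRE-style regularizer computed on a batch of intermediate discriminator activations. The function name `bre_regularizer`, the soft-sign epsilon, and the exact form of the two terms shown (a per-unit marginal term plus a pairwise decorrelation term) are our illustrative choices for this post rather than the paper's precise formulation; please refer to the paper for the exact regularizer and where it is applied.

```python
import torch

def bre_regularizer(h, eps=1e-3):
    """Sketch of a BRE-style regularizer on discriminator activations h of shape (N, d).

    Two terms encourage diverse binarized activation patterns across the batch:
      - a marginal term pushing each unit's average (soft) sign toward zero,
      - a pairwise term pushing different samples' sign patterns toward near-orthogonality.
    """
    # Soft sign keeps the term differentiable; eps is an illustrative smoothing constant.
    s = h / (h.abs() + eps)                       # (N, d), values in (-1, 1)

    # Marginal term: each unit should fire positive/negative about equally often.
    me = (s.mean(dim=0) ** 2).mean()

    # Pairwise term: average absolute similarity between distinct samples' sign patterns.
    n, d = s.shape
    gram = s @ s.t() / d                          # (N, N) normalized inner products
    off_diag = gram - torch.diag(torch.diag(gram))
    ac = off_diag.abs().sum() / (n * (n - 1))

    return me + ac

# Illustrative usage (weight 0.1 and the choice of layer/batch are assumptions):
# h = intermediate activations of D on a mixed real/fake minibatch, shape (N, d)
# d_loss = d_loss + 0.1 * bre_regularizer(h)
```

In this sketch, the regularizer is added to the discriminator's loss with a small weight so that its binarized hidden representations stay diverse across the batch, which in turn diversifies the gradients G receives.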

Our Key Takeaways from ICLR 2018

Our research team attended the ICLR conference in Vancouver with a paper on Improving GAN Training via Binarized Representation Entropy (BRE) Regularization. While we were there, we also took in some of the interesting talks at the event. Here are a few trends and themes that stood out to us.

Borealis AI expands lab network into Vancouver

Building on its recent investments in the Canadian artificial intelligence (AI) ecosystem, Borealis AI today announced it is expanding its network of labs across Canada into Vancouver. The new research centre will focus on computer vision, a subfield of machine learning that trains computers to see, process and understand the visual world. It is expected to open in the fall of this year.
