A Binarized Representation Entropy (BRE) regularizer to diversify learning signals in Generative Adversarial Networks
Popular methods for stabilizing GAN training, such as gradient penalty and spectral normalization, essentially control the magnitude of the learning signal for G. Our ICLR 2018 paper proposes a complementary approach: encouraging diversity in the learning signal for G.
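As a rough illustration of the idea, the sketch below computes a BRE-style penalty on a batch of the discriminator's intermediate activations: it pushes each hidden unit to binarize to +1/-1 equally often across the batch, and pushes the binarized codes of distinct samples to be decorrelated, so that D partitions the input space finely and sends diverse signals to G. This is a simplified numpy sketch under our own assumptions (the function name `bre_penalty`, the soft-sign surrogate, and the equal weighting of the two terms are illustrative, not the paper's exact formulation):

```python
import numpy as np

def bre_penalty(h, eps=1e-3):
    """BRE-style penalty on a batch of hidden activations (illustrative sketch).

    h: (n, d) array of pre-activations from an intermediate layer of D.
    Uses a soft sign s = h / (|h| + eps) as a differentiable stand-in
    for the hard binarization sign(h).
    """
    n, d = h.shape
    s = h / (np.abs(h) + eps)          # soft binarization into [-1, 1]

    # Marginal term: each unit should output +1 and -1 about equally often,
    # so the per-unit batch mean of s should be near 0.
    me = np.mean(np.mean(s, axis=0) ** 2)

    # Pairwise term: binarized codes of distinct samples should be
    # decorrelated, i.e. |s_i . s_j| / d small for i != j.
    c = (s @ s.T) / d                  # (n, n) pairwise correlations
    off_diag = c[~np.eye(n, dtype=bool)]
    ac = np.mean(np.abs(off_diag))

    return me + ac
```

In training, a penalty like this would be added (with a small weight) to the discriminator's loss, so that minimizing it diversifies the activation patterns D exposes to G.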
Our research team attended the ICLR conference in Vancouver with a paper on Improving GAN Training via Binarized Representation Entropy (BRE) Regularization. While we were there, we also attended some of the interesting talks happening at the event. Here are a few trends and themes that stood out to us.
Building off its recent investments in the Canadian artificial intelligence (AI) ecosystem, Borealis AI today announced it is expanding its network of labs across Canada into Vancouver. The new research centre will focus on computer vision, a subfield of machine learning that trains computers to see, process and understand the visual world. It is expected to open in the fall of this year.
Canada’s “Big Three” AI cities (Toronto, Montreal, Edmonton) have dominated headlines for the past year. But with our country’s reputation for academic excellence in machine learning research, it was only a matter of time before a new crop of technical towns emerged.
Continuing its commitment to academic excellence in fundamental research in artificial intelligence (AI), Borealis AI is launching the Borealis AI Graduate Fellowship Program, which will offer financial support to domestic and international students wishing to pursue graduate-level work in the fields of machine learning or artificial intelligence at a Canadian university.
The machine learning frameworks we’ve used at Borealis AI have varied according to individual preference. But as our applied team grows, we’re finding that a preference-based system has shortcomings that have led to inefficiencies and delays in our research projects.