13 May 2019
Generative Autoencoders Beyond VAEs: (Sliced) Wasserstein Autoencoders
In this post we look at Wasserstein and Sliced Wasserstein Autoencoders as compelling alternatives to Variational Autoencoders and maybe even GANs! Specifically, we will compare the ability of VAEs, WAEs, and SWAEs to model 2D Gaussian Mixture Models of varying complexity, both qualitatively (by visualizing the distribution of samples drawn from trained models) and quantitatively (by comparing the sampled and ground-truth distributions using the Anderson-Darling statistic).
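The "sliced" idea itself is simple. Below is a minimal sketch (illustrative only, not the post's code; it assumes NumPy and equal-size sample sets) of estimating a sliced Wasserstein distance between two sets of 2D samples by projecting them onto random 1D directions and averaging the 1D Wasserstein distances of the projections:

```python
# Minimal sketch (illustrative, not the post's code): estimate the sliced
# Wasserstein distance between two sets of 2D samples by projecting them
# onto random directions and averaging 1D Wasserstein-2 distances.
import numpy as np

def sliced_wasserstein(x, y, num_projections=50, seed=0):
    """x, y: (n, 2) arrays of samples; returns an SW_2 estimate."""
    rng = np.random.default_rng(seed)
    # Random unit directions on the circle.
    theta = rng.uniform(0.0, 2.0 * np.pi, size=num_projections)
    directions = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (P, 2)
    total = 0.0
    for d in directions:
        # 1D projections of both sample sets.
        px, py = np.sort(x @ d), np.sort(y @ d)
        # 1D Wasserstein-2 between empirical distributions: pair sorted
        # samples (assumes x and y contain the same number of points).
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / num_projections)

# Toy usage: two Gaussian blobs with different means.
x = np.random.default_rng(1).normal([0.0, 0.0], 1.0, size=(1000, 2))
y = np.random.default_rng(2).normal([2.0, 0.0], 1.0, size=(1000, 2))
print(sliced_wasserstein(x, y))
```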
09 July 2017
What does it mean to reason in AI?
Reasoning in AI agents is a fairly complex, frequently occurring, but rarely (directly) addressed concept, with many flavors and interesting connections to psychology, logic, and pattern recognition. In this post, we will aim to get a handle on reasoning.
21 April 2017
What does 2D data reveal about deep nets?
Simple 2D synthetic data with well-defined decision boundaries can tell us a lot about how neural network architectural choices affect learning. In this post I will visualize the prediction surfaces of neural networks with different activations, depths, widths, normalization schemes, and skip connections.
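As a taste of the setup, here is a minimal sketch (not the post's code; it assumes scikit-learn and matplotlib rather than whatever framework the post actually uses) of visualizing the prediction surface of a small MLP on 2D synthetic data by evaluating it on a dense grid:

```python
# Minimal sketch: train a small MLP on 2D synthetic data and visualize its
# prediction surface by evaluating class probabilities on a dense grid.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Synthetic 2D data with a well-defined non-linear decision boundary.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# A small fully connected net; swap hidden_layer_sizes / activation
# to see how depth, width, and the non-linearity change the surface.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), activation="relu",
                    max_iter=2000, random_state=0).fit(X, y)

# Dense grid over the input plane.
xs = np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300)
ys = np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300)
xx, yy = np.meshgrid(xs, ys)
grid = np.c_[xx.ravel(), yy.ravel()]

# Predicted probability of class 1 at every grid point = prediction surface.
zz = clf.predict_proba(grid)[:, 1].reshape(xx.shape)

plt.contourf(xx, yy, zz, levels=20, cmap="RdBu", alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap="RdBu", edgecolors="k", s=15)
plt.title("Prediction surface of a small MLP on 2D data")
plt.show()
```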
03 July 2016
CVPR16 Highlights @ Caesars Palace, Vegas
Highlights of papers from CVPR 2016. Topics range from oldies like object detection and 3D parsing to relatively new kids on the block like image captioning and VQA.