- The latest super-resolution paper: http://arxiv.org/pdf/1609.04802.pdf Over the past few years, convnet-based super-resolution has become remarkably good.
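A building block that shows up in this line of super-resolution work is the sub-pixel ("pixel shuffle") upsampling layer. Here is a minimal numpy sketch of just the channel-to-space rearrangement — my own illustration of the idea, not code from the paper:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    Each group of r^2 channels is scattered into an r x r block of
    output pixels -- the sub-pixel upsampling trick used by several
    recent super-resolution convnets.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

# a 4-channel 2x2 feature map upsampled 2x into a single 4x4 channel
feat = np.arange(16, dtype=float).reshape(4, 2, 2)
out = pixel_shuffle(feat, 2)
print(out.shape)  # (1, 4, 4)
```

The point of doing the upsampling this way is that all the convolutions can run at the low resolution, which is cheap, and the rearrangement itself is free.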
- The latest in WaveNet-like research is a raw waveform-based speech recognizer from Facebook: http://arxiv.org/pdf/1609.03193v2.pdf What’s interesting is that the authors don’t cite WaveNet. Independent discovery?
- Here’s a very nice intuitive/geometric description of various gradient descent algorithms. Definitely recommended for beginners.
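To make the geometric picture concrete, here is a tiny numpy comparison of plain gradient descent against the momentum variant on an elongated quadratic bowl — my own toy example, not taken from the linked description:

```python
import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * (w1^2 + 10 * w2^2), an elongated bowl
    return np.array([1.0, 10.0]) * w

def gradient_descent(w, lr=0.05, steps=100):
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def momentum(w, lr=0.05, mu=0.9, steps=100):
    # velocity accumulates past gradients, smoothing the zig-zag
    # along the steep axis and accelerating along the shallow one
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w)
        w = w + v
    return w

w0 = np.array([1.0, 1.0])
print(gradient_descent(w0), momentum(w0))  # both near the minimum at (0, 0)
```

The geometric intuition is exactly what you see if you trace the iterates: vanilla descent oscillates across the narrow valley, while momentum damps the oscillation and rolls along the valley floor.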
- A recent paper uses convnets to infer facial geometry from single photos. What’s more interesting to me than the facial-recognition angle is (1) that this is an end-to-end method, and (2) that the training data is generated synthetically: they take realistic 3D facial geometries, render them, and train on the results. With near-photo-realistic rendering of 3D geometry getting faster every day, approaches that train on generated visual data are becoming really popular. See, for example, this work on training self-driving cars using car-simulator video games.
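The appeal of synthetic data is that the renderer hands you exact ground truth for free. A deliberately tiny numpy sketch of the pattern (mine, far simpler than anything in the paper): "render" images with known parameters, then fit a model that recovers those parameters:

```python
import numpy as np

def render(x, y, size=16):
    """'Render' a bright 3x3 square at integer position (x, y)."""
    img = np.zeros((size, size))
    img[y:y + 3, x:x + 3] = 1.0
    return img

# synthetic dataset: every renderable position, with exact labels for free
positions = [(x, y) for x in range(14) for y in range(14)]
X = np.stack([render(x, y).ravel() for x, y in positions])
T = np.array(positions, dtype=float)

# a linear least-squares "model" trained purely on rendered data
W, *_ = np.linalg.lstsq(X, T, rcond=None)

pred = render(5, 7).ravel() @ W
print(np.round(pred, 3))  # recovers the position (5, 7)
```

Swap the toy renderer for a photo-realistic one and the linear model for a convnet, and you have the general recipe these papers follow.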
- A paper on efficiently extending translation-invariant convolutions to other invariances (e.g. rotation invariance). I’m not really sure about the novelty of this work: I already used similar methods in my own thesis two years ago, and even then I thought they were too obvious to publish. However, the paper does offer some proofs that might be interesting to examine.
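For concreteness, one cheap way to get (90-degree) rotation invariance — a sketch of the general trick, not necessarily the paper’s exact construction — is to correlate with rotated copies of the filter and max-pool over the copies:

```python
import numpy as np

def correlate_valid(img, f):
    """Plain 'valid'-mode 2D cross-correlation."""
    fh, fw = f.shape
    h, w = img.shape
    out = np.empty((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + fh, j:j + fw] * f)
    return out

def rot_invariant_response(img, f):
    """Max over space and over the four 90-degree rotations of the
    filter: the result is unchanged when the (square) input is
    rotated by any multiple of 90 degrees."""
    return max(correlate_valid(img, np.rot90(f, k)).max() for k in range(4))

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))
f = rng.standard_normal((3, 3))

r1 = rot_invariant_response(img, f)
r2 = rot_invariant_response(np.rot90(img), f)
print(np.isclose(r1, r2))  # True: the response survives rotating the input
```

The cost is only a constant factor (the number of rotations), which is presumably the kind of efficiency argument the paper formalizes.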
- Another interesting piece of work in the area of semi-supervised learning is this paper. In it, the authors pre-train on texture and shape cues using unlabelled data, and then use a smaller amount of labelled data to train the final classifier.
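The general pretrain-then-finetune pattern can be sketched in a few lines of numpy — a generic stand-in using PCA as the unsupervised stage, not the paper’s actual texture/shape cues:

```python
import numpy as np

rng = np.random.default_rng(0)

def blobs(n):
    """Two well-separated 20-d Gaussian blobs; labels are 0/1."""
    a = rng.normal(+2.0, 1.0, (n, 20))
    b = rng.normal(-2.0, 1.0, (n, 20))
    return np.concatenate([a, b]), np.repeat([0, 1], n)

# 1. unsupervised pre-training: learn a feature from unlabelled data only
X_unlab, _ = blobs(500)                  # labels deliberately ignored
mean = X_unlab.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlab - mean, full_matrices=False)

def project(X):
    # 1-d feature along the top principal component
    return (X - mean) @ Vt[0]

# 2. supervised stage: nearest-centroid on just 3 labelled points per class
X_lab, y_lab = blobs(3)
c0 = project(X_lab[y_lab == 0]).mean()
c1 = project(X_lab[y_lab == 1]).mean()

# 3. evaluate on fresh data
X_test, y_test = blobs(200)
z = project(X_test)
pred = (np.abs(z - c1) < np.abs(z - c0)).astype(int)
print((pred == y_test).mean())  # high accuracy from only 6 labels
```

The unlabelled data does the heavy lifting of finding a good representation; the few labels are only needed to name the clusters — which is the whole bet behind this style of semi-supervised learning.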