Posts
Deep Learning Book Notes Ch10
RNNs are specialized architectures for sequential data. The key idea is parameter sharing, which has two benefits: the model can handle sequence lengths not found in the training set, and statistical strength is shared across time steps (the same features don’t have to be learned again at every position).
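A minimal sketch (not from the notes) of what parameter sharing means in practice: the same weight matrices are reused at every time step, so one fixed set of parameters works for sequences of any length. The names `W_hh` and `W_xh` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, feat = 4, 3
W_hh = rng.normal(size=(hidden, hidden)) * 0.1  # shared hidden-to-hidden weights
W_xh = rng.normal(size=(hidden, feat)) * 0.1    # shared input-to-hidden weights

def rnn_forward(xs):
    """Run the shared-weight recurrent cell over a sequence of any length."""
    h = np.zeros(hidden)
    for x in xs:  # the SAME W_hh and W_xh are applied at every step
        h = np.tanh(W_hh @ h + W_xh @ x)
    return h

# The same parameters process sequences of lengths never seen before:
short = rnn_forward(rng.normal(size=(5, feat)))
long_ = rnn_forward(rng.normal(size=(12, feat)))
print(short.shape, long_.shape)  # both (4,)
```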
Advanced Robotics - Spring 2019 Notes
8/2/2019
Linear Least Squares
Neuro-Symbolic Program Synthesis - Parisotto et al Notes
Code2Inv - Si et al Notes
Learning to represent programs as graphs - Allamanis et al Notes
Thinking Functionally in Haskell Notes 1
Notes from the book Thinking Functionally in Haskell.
Netflix Architecture
This article is mostly adapted from here. However, the original is laboriously long and light on substance. This is, I hope, a condensed version.
GitHub Pages Tutorials
I used the following tutorials to create this site: