# Note for CS224W - Machine Learning with Graphs

Some notes for the Machine Learning with Graphs class taught by Prof. Jure Leskovec at Stanford University in the Fall semester of 2019. Basics, Structure of Graphs...

# Note 18 of Deep Learning: Confronting the Partition Function

# Note 17 of Deep Learning: Monte Carlo Methods

Las Vegas algorithms and Monte Carlo algorithms are two broad categories of randomized algorithms. Las Vegas algorithms always return a precisely correct answer (or...

# Note 16 of Deep Learning: Structured Probabilistic Models for Deep Learning

A structured probabilistic model is a way of describing a probability distribution, using a graph to express which random variables in the probability distribution...

# Note 14 of Deep Learning: Autoencoders

An autoencoder is a neural network trained to copy its input to its output. It can be seen as two parts: an encoder $h =...

# Note 13 of Deep Learning: Linear Factor Models

A linear factor model is defined by using a stochastic, linear decoder function that generates $x$ by adding noise to a linear transformation of...

# Note 11 of Deep Learning: Practical Methodology

Practical design process: determine goals, i.e., what error metric to use and the corresponding target value, both of which should be driven by the problem...

# Notes of COMP551 Applied Machine Learning

Lectures 2 & 3: Linear Regression. The i.i.d. assumption: the examples, $x_i$, in the training set are independently and identically distributed. Independently: each $x_i$ is...

# Compositional Language Understanding with Text-based Relational Reasoning

Study of Reasoning: Inductive Logic Programming, Relational Reasoning, Propositional Satisfiability (SAT solvers). Proposal: CLUTRR. In the dataset, the...

# Note 10 of Deep Learning: Recurrent and Recursive Neural Network

A recurrent neural network generates the output of time step $t$ not only according to $x^{(t)}$, but also according to the previous history, represented...

# Relational Inductive Biases, Deep Learning, and Graph Networks

This is a note on the paper "Relational inductive biases, deep learning, and graph networks". Introduction. Combinatorial generalization: construct new inferences,...

# Note 9 of Deep Learning: Convolutional Neural Network

Convolution Operation. Suppose we want to locate a spaceship at time $t$, whose position can be described by $x(t)$, but sometimes our monitor may have some noise. To obtain a...

# Note 8 of Deep Learning: Optimization

Optimization techniques for neural network training. Challenges in Neural Network Optimization. Local Minima: a convex optimization problem can be seen as finding a local minimum. Any...

# Note 7 of Deep Learning: Regularization

Parameter Norm Penalties. A loss function with a parameter norm penalty typically has the form $\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha \Omega(\theta)$, where...

# Note 3 of PRML: Linear Models for Regression

Linear models: functions that are linear in the adjustable parameters (rather than in the input variables; linearity in the inputs is just the simplest case). From linear regression to linear models...