Energy-Based Model
An energy-based model defines a probability distribution through the softmax function:
\[p(x)=\frac{\exp(-E(x))}{\sum\limits_x \exp(-E(x))}\]
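For intuition, here is a minimal numpy sketch for a toy model with a small discrete state space; the energy values are arbitrary and only for illustration:

```python
import numpy as np

E = np.array([1.0, 0.5, 2.0, 0.1])   # E(x) for each of 4 possible states x (made up)

p = np.exp(-E) / np.sum(np.exp(-E))  # p(x) = exp(-E(x)) / sum_x exp(-E(x))
print(p, p.sum())                    # probabilities sum to 1
```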
When there are hidden units, we marginalize them out:
\[P(x)=\sum\limits_h P(x,h)=\frac{1}{\sum\limits_{x,h}\exp(-E(x,h))}\sum\limits_h \exp(-E(x,h))\]
Now, we define the free energy function:
\[F(x)=-\log \sum\limits_h \exp(-E(x,h))\]
so that,
\[\sum\limits_h \exp(-E(x,h))=\exp(-F(x))\]
Now, we can rewrite the probability distribution in a simpler form:
\[P(x)=\frac{\exp(-F(x))}{\sum\limits_x \exp(-F(x))}\]
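As a sanity check, here is a toy sketch (with an arbitrary joint energy table $E(x,h)$, using scipy's `logsumexp` for numerical stability) showing that the free-energy form of $P(x)$ matches marginalizing the joint softmax:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
E = rng.random((4, 3))                    # E[x, h], arbitrary energies

F = -logsumexp(-E, axis=1)                # free energy F(x) = -log sum_h exp(-E(x,h))
P = np.exp(-F) / np.sum(np.exp(-F))       # P(x) via free energies

# sanity check: marginalizing the joint softmax gives the same P(x)
joint = np.exp(-E) / np.sum(np.exp(-E))
assert np.allclose(P, joint.sum(axis=1))
```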
Then, we define the overall cost function as the average negative log-likelihood over the training set $D$:
\[\mathcal{L}(\theta,D)=-\frac{1}{N}\sum\limits_{x^{(i)} \in D} \log p(x^{(i)})\]
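Continuing the toy sketch above (this snippet assumes the `P` computed there), the cost is just the average negative log-likelihood of the observed states; `D` is a hypothetical list of observed state indices:

```python
D = np.array([0, 2, 2, 1])      # hypothetical indices of training states x^(i)
L = -np.mean(np.log(P[D]))      # L(theta, D)
```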
First, we write $-\log P(x)$ in terms of the free energy and take its partial derivative with respect to $\theta$:
\[-\log P(x)=F(x) + \log\left(\sum\limits_x \exp(-F(x))\right)\]
\[-\frac{\partial \log P(x)}{\partial \theta}=\frac{\partial F(x)}{\partial \theta}-\sum\limits_{\hat x} p(\hat x)\frac{\partial F(\hat x)}{\partial \theta}\]
Note that the gradient contains two terms, known as the positive phase and the negative phase. The first term increases the probability of the training data, and the second term decreases the probability of samples generated by the model.
It is difficult to compute this gradient analytically, because the expectation $E_P\left[\frac{\partial F(x)}{\partial \theta}\right]$ cannot be evaluated over all configurations, so we estimate it by sampling.
We would like the elements $\tilde x$ of a sample set $\mathcal{N}$ to be drawn according to $P(\tilde x)$; the elements of $\mathcal{N}$ are called negative particles.
The gradient can then be approximated as:
\[-\frac{\partial \log p(x)}{\partial \theta}\approx \frac{\partial F(x)}{\partial \theta} - \frac{1}{|\mathcal{N}|} \sum\limits_{\tilde x \in \mathcal{N}}\frac{\partial F(\tilde x)}{\partial \theta}\]
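As a sketch of how this estimate looks in code: here `free_energy_grad` is a hypothetical function returning $\partial F(x)/\partial \theta$ for a given state, and the negative particles are assumed to come from some sampler not shown here (e.g. Gibbs sampling in an RBM):

```python
import numpy as np

def nll_grad_estimate(x, particles, free_energy_grad):
    """Approximate -d log p(x)/d theta using a set of negative particles."""
    positive_phase = free_energy_grad(x)                # dF(x)/dtheta for a training example
    negative_phase = np.mean(                           # average dF/dtheta over particles
        [free_energy_grad(x_tilde) for x_tilde in particles], axis=0)
    return positive_phase - negative_phase
```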
RBM
The energy function $E(v,h)$ of an RBM is defined as:
\[E(v,h)=-b'v-c'h-h'Wv\]
where
- $W$ represents the weights connecting the hidden and visible units.
- $b, c$ are the bias terms of the visible and hidden layers, respectively.
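Below is a minimal numpy sketch of these quantities, with randomly initialized parameters and arbitrary dimensions. For binary hidden units, summing $h$ out of $\exp(-E(v,h))$ gives the closed form $F(v) = -b'v - \sum_j \log(1+\exp(c_j + W_j v))$, which is why the free-energy formulation above is practical for RBMs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4                             # arbitrary sizes
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # weights
b = np.zeros(n_visible)                                # visible biases
c = np.zeros(n_hidden)                                 # hidden biases

def energy(v, h):
    """E(v, h) = -b'v - c'h - h'Wv"""
    return -b @ v - c @ h - h @ W @ v

def free_energy(v):
    """F(v) for binary h: -b'v - sum_j log(1 + exp(c_j + W_j v))."""
    return -b @ v - np.sum(np.logaddexp(0.0, c + W @ v))

v = rng.integers(0, 2, size=n_visible)                 # a random binary visible vector
print(free_energy(v))
```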