This is a learning note for this series of videos.
Paper Link: https://arxiv.org/abs/2011.13456
1. Why do we use an SDE to describe the diffusion process?
We want to perturb the data with multiple noise scales (why?). The idea is to use an SDE to provide an infinite number of noise scales that change continuously.
The SDE (which is continuous) can be used for theoretical analysis. In practice, we discretize the SDE for numerical computation.
Goal: construct a diffusion process $\{\bold{x}(t)\}_{t=0}^T$ indexed by a continuous time variable $t \in [0, T]$, such that $\bold{x}(0) \sim p_0$, for which we have a dataset of i.i.d. samples, and $\bold{x}(T) \sim p_T$, for which we have a tractable form to generate samples efficiently. This diffusion process can be modeled as the solution to an Itô SDE:
$$d \bold{x} = \bold{f}(\bold{x}, t)\, dt + g(t)\, d \bold{w}$$
$\bold{w}$: a Brownian motion (Wiener process) whose increments are Gaussian, with variance that grows linearly with the time increment:
$$\bold{w}(t+\Delta t) - \bold{w}(t) \sim \mathcal{N}(\bold{0}, \Delta t\, \bold{I}), \qquad d \bold{w} = \sqrt{\Delta t}\, \epsilon \quad \text{where } \epsilon \sim \mathcal{N}(\bold{0}, \bold{I})$$
$\bold{f}(\cdot, t): \mathbb{R}^d \rightarrow \mathbb{R}^d$ is a vector-valued function called the drift coefficient of $\bold{x}(t)$, and $g(\cdot): \mathbb{R} \rightarrow \mathbb{R}$ is a scalar function known as the diffusion coefficient of $\bold{x}(t)$.
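As a concrete illustration, below is a minimal Euler–Maruyama simulation of the forward SDE; the drift $\bold{f}(\bold{x},t) = -\frac{1}{2}\bold{x}$ and diffusion $g(t) = 1$ are toy choices for the example, not taken from the paper.

```python
import numpy as np

def euler_maruyama_forward(x0, f, g, T=1.0, n_steps=1000, seed=0):
    """Simulate dx = f(x, t) dt + g(t) dw with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    for i in range(n_steps):
        t = i * dt
        dw = np.sqrt(dt) * rng.standard_normal(x.shape)   # dw ~ N(0, dt I)
        x = x + f(x, t) * dt + g(t) * dw
    return x

# Toy drift and diffusion (assumptions for illustration only)
x_T = euler_maruyama_forward(np.ones(3), f=lambda x, t: -0.5 * x, g=lambda t: 1.0)
print(x_T)
```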
Starting from samples of $\bold{x}(T) \sim p_T$ and reversing the process, we can obtain samples $\bold{x}(0) \sim p_0$. It is proved that the reverse of a diffusion process is also a diffusion process, running backward in time and given by the reverse-time SDE:
$$d\bold{x} = [\bold{f}(\bold{x}, t) - g(t)^2 \nabla_{\bold{x}} \log p_t(\bold{x})]\, dt + g(t)\, d \bar{\bold{w}}$$
where $\bar{\bold{w}}$ is a standard Wiener process when time flows backwards from $T$ to $0$, and $dt$ is an infinitesimal negative timestep. Once the score of each marginal distribution, $\nabla_{\bold{x}} \log p_t(\bold{x})$, is known for all $t$, we can derive the above reverse diffusion process and simulate it to sample from $p_0$.
2. How to derive the reverse-time SDE?
Forward SDE:
$$d\bold{x} = \bold{f}(\bold{x}, t)\, dt + g(t)\, d \bold{w}$$
Reverse SDE:
$$d\bold{x} = [\bold{f}(\bold{x}, t) - g(t)^2 \nabla_{\bold{x}} \log p_t(\bold{x})]\, dt + g(t)\, d \bar{\bold{w}}$$
Proof 1:
Important assumption: the diffusion coefficient is $g(t)$ rather than $g(\bold{x}, t)$.
Characteristics of the Brownian motion $\{ w(t), t \geq 0 \}$:
- Gaussian increments: $w(t) - w(s) \sim \mathcal{N}(0, t-s)$; in particular $w(t) - w(0) \sim \mathcal{N}(0, t)$.
- Independent increments: if $0 \leq u \leq s \leq t$, then $w(t) - w(s)$ and $w(s) - w(u)$ are independent.
- Path continuity: $w(t)$ is a continuous function of $t$.
Discretization of the forward SDE gives:
$$\bold{x}_{t + \Delta t} - \bold{x}_t = \bold{f}(\bold{x}_t, t) \Delta t + g(t) \sqrt{\Delta t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(\bold{0}, \bold{I})$$
$$\Leftrightarrow \bold{x}_{t + \Delta t} = \bold{x}_t + \bold{f}(\bold{x}_t, t) \Delta t + g(t) \sqrt{\Delta t}\, \epsilon$$
This indicates that:
$$\begin{align*} p(\bold{x}_{t + \Delta t} | \bold{x}_t) &= \mathcal{N}(\bold{x}_{t + \Delta t} \mid \bold{x}_t + \bold{f}(\bold{x}_t, t) \Delta t,\; g(t)^2 \Delta t \cdot \bold{I}) \\ & \propto \exp \left( - \frac{||\bold{x}_{t + \Delta t} - \bold{x}_t - \bold{f}(\bold{x}_t, t) \Delta t||^2}{2 g(t)^2 \Delta t} \right)\end{align*}$$
According to Bayes' theorem:
$$\begin{align*} p(\bold{x}_t | \bold{x}_{t+\Delta t}) &= \frac{p(\bold{x}_{t + \Delta t} | \bold{x}_t)\, p(\bold{x}_t)}{p(\bold{x}_{t + \Delta t})} = p(\bold{x}_{t + \Delta t} | \bold{x}_t) \exp \left( \log p(\bold{x}_t) - \log p(\bold{x}_{t+\Delta t}) \right) \\ &\propto \exp \left( -\frac{||\bold{x}_{t + \Delta t} - \bold{x}_t - \bold{f}(\bold{x}_t, t) \Delta t||^2}{2 g(t)^2 \Delta t} + \log p(\bold{x}_t) - \log p(\bold{x}_{t+\Delta t})\right) \end{align*}$$
Note: the first-order Taylor expansion of $f(x)$ at $x_0$ is $f(x) \approx f(x_0) + f'(x_0)(x - x_0)$.
Note: $\log p(\bold{x}_t)$ is actually a function of both $\bold{x}_t$ and $t$; therefore, when expanding, derivatives with respect to both should be considered.
By Taylor expansion, we have
$$\log p(\bold{x}_{t + \Delta t}) \approx \log p(\bold{x}_t) + (\bold{x}_{t + \Delta t} - \bold{x}_t) \cdot \nabla_{\bold{x}_t} \log p(\bold{x}_t) + \Delta t \frac{\partial}{\partial t} \log p(\bold{x}_t)$$
Plugging this into $p(\bold{x}_t | \bold{x}_{t + \Delta t})$:
$$p(\bold{x}_t | \bold{x}_{t + \Delta t}) \propto \exp \left( -\frac{||\bold{x}_{t + \Delta t} - \bold{x}_t - \bold{f}(\bold{x}_t, t) \Delta t||^2}{2 g(t)^2 \Delta t} - (\bold{x}_{t + \Delta t} - \bold{x}_t) \cdot \nabla_{\bold{x}_t} \log p(\bold{x}_t) - \Delta t \frac{\partial}{\partial t} \log p(\bold{x}_t) \right)$$
Completing the square in the exponent:
$$\begin{align*} & -\frac{||\bold{x}_{t + \Delta t} - \bold{x}_t - \bold{f}(\bold{x}_t, t) \Delta t||^2}{2 g(t)^2 \Delta t} - (\bold{x}_{t + \Delta t} - \bold{x}_t) \cdot \nabla_{\bold{x}_t} \log p(\bold{x}_t) \\ &= -\frac{1}{2 g(t)^2 \Delta t} \left( ||\bold{x}_{t+\Delta t} - \bold{x}_t||^2 - 2(\bold{x}_{t+\Delta t} - \bold{x}_t) \cdot \bold{f}(\bold{x}_t, t) \Delta t + ||\bold{f}(\bold{x}_t, t)||^2 \Delta t^2 + 2 g(t)^2 \Delta t\, (\bold{x}_{t + \Delta t} - \bold{x}_t) \cdot \nabla_{\bold{x}_t} \log p(\bold{x}_t)\right) \\ &= -\frac{1}{2 g(t)^2 \Delta t} \left( ||\bold{x}_{t+\Delta t} - \bold{x}_t||^2 - 2(\bold{x}_{t + \Delta t} - \bold{x}_t) \cdot [\bold{f}(\bold{x}_t, t) - g(t)^2 \nabla_{\bold{x}_t} \log p(\bold{x}_t)] \Delta t + C(\Delta t) \right) \\ &= -\frac{1}{2 g(t)^2 \Delta t}\left( ||\bold{x}_{t+\Delta t} - \bold{x}_t - [\bold{f}(\bold{x}_t, t) - g(t)^2 \nabla_{\bold{x}_t} \log p(\bold{x}_t)] \Delta t||^2 + C(\Delta t) \right)\end{align*}$$
Here $C(\Delta t)$ collects terms of order $\Delta t^2$ and higher (no constant or first-order term), so after dividing by $\Delta t$ it vanishes as $\Delta t \rightarrow 0$; the $\Delta t \frac{\partial}{\partial t} \log p(\bold{x}_t)$ term is also $O(\Delta t)$ and vanishes in the same limit. Therefore,
$$\begin{align*} p(\bold{x}_t | \bold{x}_{t + \Delta t}) & \propto \exp \left( - \frac{||\bold{x}_{t+\Delta t} - \bold{x}_t - [\bold{f}(\bold{x}_t, t) - g(t)^2 \nabla_{\bold{x}_t} \log p(\bold{x}_t)] \Delta t||^2}{2g(t)^2 \Delta t} \right) \\ &\approx \exp \left( - \frac{||\bold{x}_{t} - \bold{x}_{t + \Delta t} - [\bold{f}(\bold{x}_{t+\Delta t}, t + \Delta t) - g(t+\Delta t)^2 \nabla_{\bold{x}_{t+\Delta t}} \log p(\bold{x}_{t+\Delta t})] (-\Delta t)||^2}{2g(t + \Delta t)^2 \Delta t} \right)\end{align*}$$
The second line replaces $(\bold{x}_t, t)$ by $(\bold{x}_{t+\Delta t}, t+\Delta t)$, which is valid up to higher-order terms as $\Delta t \rightarrow 0$. Previously, we derived
$$p(\bold{x}_{t + \Delta t} | \bold{x}_t) \propto \exp \left( - \frac{||\bold{x}_{t + \Delta t} - \bold{x}_t - \bold{f}(\bold{x}_t, t) \Delta t||^2}{2 g(t)^2 \Delta t} \right)$$
from $d\bold{x} = \bold{f}(\bold{x}, t)\, dt + g(t)\, d\bold{w}$. Comparing the two expressions, the reverse conditional has the same Gaussian form but with drift $\bold{f} - g^2 \nabla_{\bold{x}} \log p$ and time running backwards, so the corresponding reverse-time SDE is:
$$d \bold{x} = [\bold{f}(\bold{x}, t) - g(t)^2 \nabla_{\bold{x}} \log p_t(\bold{x})]\, dt + g(t)\, d \bar{\bold{w}}$$
3. Core settings of two diffusion models: SMLD and DDPM
3.1 Denoising Score Matching with Langevin Dynamics (SMLD)
Let the perturbation kernel be $p_{\sigma} (\tilde{\bold{x}} | \bold{x}) \coloneqq \mathcal{N}(\tilde{\bold{x}}; \bold{x}, \sigma^2 \bold{I})$, and $p_{\sigma}(\tilde{\bold{x}}) \coloneqq \int p_{data}(\bold{x})\, p_{\sigma}(\tilde{\bold{x}} | \bold{x})\, d\bold{x}$, where $p_{data}(\bold{x})$ denotes the data distribution. Consider a sequence of positive noise scales $\sigma_{\min} = \sigma_1 < \sigma_2 < \dots < \sigma_N = \sigma_{\max}$. Usually $\sigma_{\min}$ is small enough that $p_{\sigma_{\min}}(\bold{x}) \approx p_{data}(\bold{x})$, and $\sigma_{\max}$ is large enough that $p_{\sigma_{\max}}(\bold{x}) \approx \mathcal{N}(\bold{x}; \bold{0}, \sigma_{\max}^2 \bold{I})$.
Overall step (derived from the single step): $p(\bold{x}_t | \bold{x}_0) = \mathcal{N}(\bold{x}_t; \bold{x}_0, \sigma_t^2 \bold{I})$
Single step (by design): $p(\bold{x}_t | \bold{x}_{t-1}) = \mathcal{N}(\bold{x}_t; \bold{x}_{t-1}, (\sigma_t^2 - \sigma_{t-1}^2) \bold{I})$
Previous work proposes to train a Noise Conditional Score Network $\bold{s}_{\theta}(\bold{x}, \sigma)$ with a weighted sum of denoising score matching objectives:
$$\theta^* = \argmin_{\theta} \sum_{i=1}^N \sigma_i^2\, \mathbb{E}_{p_{data}(\bold{x})} \mathbb{E}_{p_{\sigma_i}(\tilde{\bold{x}}|\bold{x})}\left[ ||\bold{s}_{\theta}(\tilde{\bold{x}}, \sigma_i) - \nabla_{\tilde{\bold{x}}} \log p_{\sigma_i}(\tilde{\bold{x}}|\bold{x})||^2_2 \right]$$
Here, $\nabla_{\tilde{\bold{x}}} \log p_{\sigma_i}(\tilde{\bold{x}}|\bold{x}) = - \frac{\tilde{\bold{x}} - \bold{x}}{\sigma_i^2}$. Given sufficient data and model capacity, the optimal score-based model $\bold{s}_{\theta^*}(\bold{x}, \sigma)$ matches $\nabla_{\bold{x}} \log p_{\sigma}(\bold{x})$ almost everywhere for $\sigma \in \{ \sigma_i \}_{i=1}^N$.
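A minimal sketch of this objective for one mini-batch, written in PyTorch; the network interface `score_net(x_tilde, sigma)` and the random choice of one $\sigma_i$ per sample (a common stochastic approximation of the sum over $i$) are assumptions for illustration.

```python
import torch

def smld_dsm_loss(score_net, x, sigmas):
    """Weighted denoising score matching: one random sigma_i per sample."""
    idx = torch.randint(0, len(sigmas), (x.shape[0],), device=x.device)
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))      # broadcastable to x
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise                               # sample from p_sigma_i(x_tilde | x)
    target = -(x_tilde - x) / sigma ** 2                      # grad_x_tilde log p_sigma_i(x_tilde | x)
    pred = score_net(x_tilde, sigmas[idx])
    per_sample = ((pred - target) ** 2).reshape(x.shape[0], -1).sum(dim=1)
    return (sigmas[idx] ** 2 * per_sample).mean()             # sigma_i^2 weighting
```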
For sampling, the work uses $M$ steps of Langevin MCMC to get a sample for each $p_{\sigma_i}(\bold{x})$ sequentially:
$$\bold{x}_i^m = \bold{x}_i^{m-1} + \epsilon_i \bold{s}_{\theta^*}(\bold{x}_i^{m-1}, \sigma_i) + \sqrt{2 \epsilon_i}\, \bold{z}_i^m, \qquad m=1,2,\dots,M,$$
where $\epsilon_i > 0$ is the step size and $\bold{z}_i^m$ is standard normal. The above process is repeated for $i = N, N-1, \dots, 1$ in turn, with $\bold{x}_N^0 \sim \mathcal{N}(\bold{x} \mid \bold{0}, \sigma_{\max}^2 \bold{I})$ and $\bold{x}_i^0 = \bold{x}_{i+1}^M$ when $i < N$.
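A sketch of this annealed Langevin sampling loop; the step-size rule $\epsilon_i = \epsilon \cdot \sigma_i^2 / \sigma_{\min}^2$ is a common heuristic and, like the names below, is an assumption rather than something fixed by the text above.

```python
import torch

@torch.no_grad()
def annealed_langevin_sampling(score_net, sigmas, shape, M=10, eps=2e-5):
    """M Langevin MCMC steps per noise level, from sigma_N (largest) down to sigma_1."""
    x = sigmas[-1] * torch.randn(shape)                 # x_N^0 ~ N(0, sigma_max^2 I)
    for sigma in sigmas.flip(0):                        # i = N, N-1, ..., 1
        step = eps * (sigma / sigmas[0]) ** 2           # step-size heuristic (assumption)
        for _ in range(M):
            z = torch.randn_like(x)
            x = x + step * score_net(x, sigma) + torch.sqrt(2 * step) * z
    return x
```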
Note: $i$ indexes the noise levels and $M$ is the number of denoising steps at each noise level. Therefore, to generate one sample, we need to run the score function $N \times M$ times:
$$\bold{x}_N^0 \rightarrow \bold{x}_N^1 \rightarrow \dots \rightarrow \bold{x}_N^M \rightarrow \bold{x}_{N-1}^1 \rightarrow \dots \rightarrow \bold{x}_{N-1}^M \rightarrow \dots \rightarrow \bold{x}_{1}^M$$
3.2 Denoising Diffusion Probabilistic Models (DDPM)
Consider a sequence of positive noise scales $0 < \beta_1, \beta_2, \dots, \beta_N < 1$. For each training data point $\bold{x}_0 \sim p_{data}(\bold{x})$, construct a discrete Markov chain $\{\bold{x}_0, \bold{x}_1, \dots, \bold{x}_N\}$. The single step and overall step of this chain are:
Single step (by design): $p(\bold{x}_i | \bold{x}_{i-1}) = \mathcal{N} (\bold{x}_i; \sqrt{1 - \beta_i}\, \bold{x}_{i-1}, \beta_i \bold{I})$
Overall step (derived): $p_{\alpha_i}(\bold{x}_i | \bold{x}_0) = \mathcal{N} (\bold{x}_i ; \sqrt{\alpha_i}\, \bold{x}_0, (1 - \alpha_i) \bold{I})$
Here $\alpha_i \coloneqq \prod_{j=1}^i (1 - \beta_j)$.
Proof of overall step:
Since $\bold{x}_i = \sqrt{1 - \beta_i}\,\bold{x}_{i-1} + \sqrt{\beta_i}\, \bold{z}_i$ and $\bold{x}_{i+1} = \sqrt{1 - \beta_{i+1}}\, \bold{x}_i + \sqrt{\beta_{i+1}}\, \bold{z}_{i+1}$, we have:
$$\begin{align*} \bold{x}_{i+1} &= \sqrt{1 - \beta_{i+1}} (\sqrt{1 - \beta_i}\,\bold{x}_{i-1} + \sqrt{\beta_i}\, \bold{z}_i) + \sqrt{\beta_{i+1}}\, \bold{z}_{i+1} \\ &= \sqrt{(1 - \beta_{i+1})(1 - \beta_i)}\,\bold{x}_{i-1} + \sqrt{(1 - \beta_{i+1})\beta_i}\, \bold{z}_i + \sqrt{\beta_{i+1}}\, \bold{z}_{i+1} \\ &= \sqrt{(1 - \beta_{i+1})(1 - \beta_i)}\,\bold{x}_{i-1} + \sqrt {\beta_i + \beta_{i+1} - \beta_i \beta_{i+1}}\, \bold{z} \qquad (\bold{z} \sim \mathcal{N}(\bold{0}, \bold{I})) \\ &= \sqrt{(1 - \beta_{i+1})(1 - \beta_i)}\,\bold{x}_{i-1} + \sqrt{1 - (1 - \beta_{i+1})(1 - \beta_i)}\, \bold{z}\end{align*}$$
The third line uses the fact that the sum of two independent zero-mean Gaussians is Gaussian with variance equal to the sum of the variances. By induction, we can conclude that:
$$\begin{align*} \bold{x}_i &= \sqrt{\prod_{j=1}^i (1 - \beta_j)}\, \bold{x}_0 + \sqrt{1 - \prod_{j=1}^i (1 - \beta_j)}\, \bold{z} \\ &= \sqrt{\alpha_i}\, \bold{x}_0 + \sqrt{1 - \alpha_i}\, \bold{z} \qquad (\bold{z} \sim \mathcal{N} (\bold{0}, \bold{I}))\end{align*}$$
Therefore, $p_{\alpha_i}(\bold{x}_i | \bold{x}_0) = \mathcal{N} (\bold{x}_i ; \sqrt{\alpha_i}\, \bold{x}_0, (1 - \alpha_i) \bold{I})$.
End of proof.
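A quick numerical sanity check of this closed form, using an arbitrary toy $\beta$ schedule (an assumption for illustration): simulate the chain step by step and compare the empirical mean and variance against $\sqrt{\alpha_i}\,\bold{x}_0$ and $1 - \alpha_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 100)          # toy schedule (assumption)
alphas = np.cumprod(1.0 - betas)

x0, n_samples = 2.0, 200_000                  # scalar data point for simplicity
x = np.full(n_samples, x0)
for beta in betas:                            # x_i = sqrt(1 - beta_i) x_{i-1} + sqrt(beta_i) z_i
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(n_samples)

print("mean:", x.mean(), "vs", np.sqrt(alphas[-1]) * x0)
print("var: ", x.var(), "vs", 1.0 - alphas[-1])
```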
The noise scales are prescribed such that $\bold{x}_N$ is approximately distributed according to $\mathcal{N}(\bold{0}, \bold{I})$.
The denoising distribution derived from Bayes' formula is:
$$p_{\theta}(\bold{x}_{i-1} | \bold{x}_i) = \mathcal{N} \left(\bold{x}_{i-1}; \frac{1}{\sqrt{1 - \beta_i}}(\bold{x}_i + \beta_i \bold{s}_{\theta}(\bold{x}_i, i)),\; \beta_i \bold{I}\right)$$
and it can be trained with a re-weighted variant of the evidence lower bound (ELBO):
$$\theta^* = \argmin_{\theta} \sum_{i=1}^N (1 - \alpha_i)\, \mathbb{E}_{p_{data}(\bold{x})} \mathbb{E}_{p_{\alpha_i}(\tilde{\bold{x}} | \bold{x})} \left[||\bold{s}_{\theta}(\tilde{\bold{x}} , i) - \nabla_{\tilde{\bold{x}}}\log p_{\alpha_i}(\tilde{\bold{x}} | \bold{x})||^2_2\right]$$
After solving the above optimization we get the optimal model $\bold{s}_{\theta^*} (\bold{x}_i, i)$; samples can then be generated by starting from $\bold{x}_N \sim \mathcal{N}(\bold{0}, \bold{I})$ and following the estimated reverse Markov chain:
$$\bold{x}_{i-1} = \frac{1}{\sqrt{1 - \beta_i}} (\bold{x}_i + \beta_i \bold{s}_{\theta^*}(\bold{x}_i, i)) + \sqrt{\beta_i}\, \bold{z}_i, \qquad i=N, N-1, \dots, 1.$$
NOTE: the weights of the $i$-th summand in the SMLD and DDPM loss functions, namely $\sigma_i^2$ and $(1 - \alpha_i)$, are related to the corresponding perturbation kernels in the same functional form: $\sigma_i^2 \propto 1 / \mathbb{E} [||\nabla_{\tilde{\bold{x}}} \log p_{\sigma_i}(\tilde{\bold{x}}| \bold{x})||^2_2]$ and $(1 - \alpha_i) \propto 1 / \mathbb{E} [||\nabla_{\tilde{\bold{x}}} \log p_{\alpha_i}(\tilde{\bold{x}}| \bold{x})||^2_2]$.
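A sketch of the reverse Markov chain above as a sampling loop, assuming a trained model `score_net(x, i)` and a `betas` array indexed so that `betas[i-1]` is $\beta_i$ (all names are placeholders); dropping the noise at the final step is a common convention, not something stated above.

```python
import torch

@torch.no_grad()
def ddpm_ancestral_sampling(score_net, betas, shape):
    """x_{i-1} = (x_i + beta_i * s(x_i, i)) / sqrt(1 - beta_i) + sqrt(beta_i) * z_i."""
    x = torch.randn(shape)                    # x_N ~ N(0, I)
    N = len(betas)
    for i in range(N, 0, -1):                 # i = N, N-1, ..., 1
        beta = betas[i - 1]
        z = torch.randn_like(x) if i > 1 else torch.zeros_like(x)
        x = (x + beta * score_net(x, i)) / (1.0 - beta) ** 0.5 + beta ** 0.5 * z
    return x
```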
4. VE and VP SDEs: Deriving SDEs from Conditional Distributions
The noise perturbations used in SMLD and DDPM can be regarded as discretizations of two different SDEs.
VE Model
When using a total of $N$ noise scales, each perturbation kernel $p_{\sigma_i}(\bold{x} | \bold{x}_0)$ of SMLD corresponds to the distribution of $\bold{x}_i$ in the following Markov chain:
$$\bold{x}_i = \bold{x}_{i-1} + \sqrt{\sigma_i^2 - \sigma_{i-1}^2}\, \bold{z}_{i-1}, \qquad i=1,\dots,N,$$
where $\bold{z}_{i-1} \sim \mathcal{N}(\bold{0}, \bold{I})$, and we have introduced $\sigma_0 = 0$ to simplify the notation.
(Recall that the single-step perturbation of SMLD is $p(\bold{x}_i | \bold{x}_{i-1}) = \mathcal{N}(\bold{x}_i; \bold{x}_{i-1}, (\sigma_i^2 - \sigma_{i-1}^2) \bold{I})$.)
When $N \rightarrow \infty$, $\{ \sigma_i \}_{i=1}^N$ becomes a function $\sigma(t)$, $\bold{z}_i$ becomes $\bold{z}(t)$, and the Markov chain $\{ \bold{x}_i \}_{i=1}^N$ becomes a continuous stochastic process $\{ \bold{x}(t) \}_{t=0}^1$, where we have used a continuous time variable $t \in [0, 1]$. The process $\{ \bold{x} (t) \}_{t=0}^1$ is given by the following SDE:
$$d \bold{x} = \sqrt{\frac{d[\sigma^2(t)]}{dt}}\, d\bold{w}$$
VP Model
Similarly for DDPM, the perturbation kernels are $p(\bold{x}_i | \bold{x}_{i-1}) = \mathcal{N} (\bold{x}_i; \sqrt{1 - \beta_i}\, \bold{x}_{i-1}, \beta_i \bold{I})$. Then the discrete Markov chain is:
$$\bold{x}_i = \sqrt{1 - \beta_i}\, \bold{x}_{i-1} + \sqrt{\beta_i}\, \bold{z}_{i-1}, \qquad i=1,\dots,N.$$
As $N \rightarrow \infty$, the Markov chain converges to the following SDE:
$$d \bold{x} = - \frac{1}{2} \beta(t)\, \bold{x}\, dt + \sqrt{\beta(t)}\, d \bold{w}$$
Proof for SMLD (VE):
$$\bold{x}_i = \bold{x}_{i-1} + \sqrt{\sigma_i^2 - \sigma_{i-1}^2}\, \bold{z}_{i-1}, \qquad i=1,\dots,N.$$
Define some notation first: $\bold{x}(\frac{i}{N}) = \bold{x}_i$, $\sigma(\frac{i}{N}) = \sigma_i$, and $\bold{z}(\frac{i}{N}) = \bold{z}_i$ for $i=1,\dots,N$. We can rewrite the Markov chain as follows, with $\Delta t = \frac{1}{N}$ and $t \in \{ 0, \frac{1}{N}, \dots, \frac{N-1}{N} \}$:
$$\bold{x}(t + \Delta t) = \bold{x}(t) + \sqrt{\sigma^2(t + \Delta t) - \sigma^2 (t)}\, \bold{z}(t) \approx \bold{x}(t) + \sqrt{\frac{d[\sigma^2(t)]}{dt}\Delta t}\, \bold{z}(t)$$
The approximation follows from the definition of the derivative: $\frac{d[\sigma^2(t)]}{dt} = \lim_{\Delta t \rightarrow 0} \frac{\sigma^2(t + \Delta t) - \sigma^2 (t)}{\Delta t}$.
As $\Delta t \rightarrow 0$, $\sqrt{\Delta t}\, \bold{z}(t)$ becomes $d \bold{w}$, where $\bold{w}$ is a Wiener process. This is because $\bold{w}(t + \Delta t) - \bold{w}(t) \sim \mathcal{N}(\bold{0}, \Delta t\, \bold{I})$.
End of Proof.
Proof for DDPM (VP):
$$\bold{x}_i = \sqrt{1 - \beta_i}\, \bold{x}_{i-1} + \sqrt{\beta_i}\, \bold{z}_{i-1}, \qquad i=1,\dots,N.$$
Define an auxiliary set of noise scales $\{ \bar{\beta_i} = N \beta_i \}_{i=1}^N$ and rewrite the Markov chain as below:
$$\bold{x}_i = \sqrt{1 - \frac{\bar{\beta_i}}{N}}\, \bold{x}_{i-1} + \sqrt{\frac{\bar{\beta_i}}{N}}\, \bold{z}_{i-1}, \qquad i=1,\dots, N$$
In the limit of $N \rightarrow \infty$, $\{ \bar{\beta_i}\}_{i=1}^N$ becomes a function $\beta(t)$ indexed by $t \in [0, 1]$. Let $\beta(\frac{i}{N}) = \bar{\beta_i}$, $\bold{x}(\frac{i}{N}) = \bold{x}_i$, and $\bold{z}(\frac{i}{N}) = \bold{z}_i$. We can rewrite the above Markov chain as follows, with $\Delta t = \frac{1}{N}$ and $t \in \{ 0, \frac{1}{N}, \dots, \frac{N-1}{N} \}$:
$$\begin{align*} \bold{x}(t + \Delta t) &= \sqrt{1 - \beta(t + \Delta t) \Delta t}\, \bold{x}(t) + \sqrt{\beta(t + \Delta t) \Delta t}\, \bold{z}(t) \\ &\approx \bold{x}(t) - \frac{1}{2} \beta(t + \Delta t) \Delta t\, \bold{x}(t) + \sqrt{\beta(t + \Delta t) \Delta t}\, \bold{z}(t) \\ &\approx \bold{x}(t) - \frac{1}{2} \beta(t) \Delta t\, \bold{x}(t) + \sqrt{\beta(t) \Delta t}\, \bold{z}(t)\end{align*}$$
The first approximation comes from the Taylor expansion $\sqrt{1 - x} = 1 - \frac{x}{2} - \frac{x^2}{8} - \dots \approx 1 - \frac{x}{2}$; the second uses $\beta(t + \Delta t) \approx \beta(t)$ as $\Delta t \rightarrow 0$.
Therefore, in the limit of $\Delta t \rightarrow 0$, the Markov chain converges to the following VP SDE:
$$d \bold{x} = - \frac{1}{2} \beta(t)\, \bold{x}\, dt + \sqrt{\beta(t)}\, d \bold{w}$$
5. How to solve the SDE?
Idea: solve for $\mathbb{E}[\bold{x}_t]$ and $\text{Var}(\bold{x}_t)$; then, under the Gaussian assumption, we know that $p_{0t}(\bold{x}_t | \bold{x}_0) = \mathcal{N}(\cdot, \cdot)$ is fully determined.
Theorem 5.1 (simplified from Equations (5.50) and (5.51) in Applied Stochastic Differential Equations)
If the SDE takes the form:
$$d \bold{x} = \bold{f}(\bold{x}, t)\, dt + g(t)\, d \bold{w}$$
then the expectation $\bold{m}(t)$ and covariance matrix $\bold{P}(t)$ of $\bold{x}(t)$ satisfy:
$$\frac{d \bold{m}(t)}{dt} = \mathbb{E} [\bold{f}(\bold{x}, t)]$$
$$\frac{d \bold{P}(t)}{dt} = \mathbb{E}[\bold{f}(\bold{x}, t) (\bold{x} - \bold{m}(t))^T] + \mathbb{E}[ (\bold{x} - \bold{m}(t))\bold{f}^T(\bold{x}, t)] + g^2(t)\, \bold{I}$$
Solution to the VP SDE:
$$\text{VP model}: \quad d \bold{x} = -\frac{1}{2} \beta(t)\, \bold{x}\, dt + \sqrt{\beta(t)}\, d\bold{w}$$
By Theorem 5.1, we have
$$\begin{align*} \frac{d \bold{m}}{dt} &= \mathbb{E}\left[-\frac{1}{2} \beta(t) \bold{x}\right] = -\frac{1}{2} \beta(t)\, \mathbb{E}[\bold{x}(t)] = -\frac{1}{2} \beta(t)\, \bold{m} \\ \Rightarrow \frac{d\bold{m}}{\bold{m}} &= -\frac{1}{2} \beta(t)\, dt \\ \ln \bold{m}(t) - \ln \bold{m}(0) & = -\frac{1}{2} \int_0^t \beta(s)\, ds \\ \Rightarrow \bold{m}(t) &= e^{\ln \bold{m}(0) - \frac{1}{2} \int_0^t \beta(s) ds} = \bold{m}(0)\, e^{- \frac{1}{2} \int_0^t \beta(s) ds}\end{align*}$$
Therefore,
$$\mathbb{E}[\bold{x} (t) \mid \bold{x}(0)] = \bold{x}(0)\, e^{- \frac{1}{2} \int_0^t \beta(s) ds}$$
For the covariance matrix $P(t)$:
$$\begin{align*} \frac{dP(t)}{dt} &= \mathbb{E} \left[-\frac{1}{2} \beta(t)\bold{x}(t) (\bold{x}(t) - \bold{m}(t))^T\right] + \mathbb{E} \left[-\frac{1}{2} \beta(t) (\bold{x}(t) - \bold{m}(t)) \bold{x}(t)^T\right] + \beta(t)\, \bold{I} \\ &= -\beta(t)\,\mathbb{E}[\bold{x}(t) (\bold{x}(t) - \bold{m}(t))^T] + \beta(t)\, \bold{I} \end{align*}$$
Since $\mathbb{E}[\bold{x}(t) - \bold{m}(t)] = \bold{0}$, we have:
$$\begin{align*} \mathbb{E} [\bold{x}(t) (\bold{x}(t) - \bold{m}(t))^T] &= \mathbb{E} [\bold{x}(t) (\bold{x}(t) - \bold{m}(t))^T] - \bold{0} \\ &= \mathbb{E} [\bold{x}(t) (\bold{x}(t) - \bold{m}(t))^T] - \bold{m}(t)\, \mathbb{E} [(\bold{x}(t) - \bold{m}(t))^T] \\ &= \mathbb{E} [(\bold{x}(t) - \bold{m}(t))(\bold{x}(t) - \bold{m}(t))^T] \\ &= P(t)\end{align*}$$
$$\begin{align*} \Rightarrow \frac{d P(t)}{dt} &= -\beta(t) P(t) + \beta(t)\, \bold{I} = \beta(t) (\bold{I} - P(t)) \\ \frac{d P(t)}{\bold{I} - P(t)} &= \beta(t)\, dt \\ -\ln (\bold{I} - P(t)) + \ln (\bold{I} - P(0)) &= \int_0^t \beta(s)\, ds \\ \bold{I} - P(t) &= \exp\left\{\ln(\bold{I} - P(0)) -\int_0^t \beta(s)ds \right\} \\ P(t) &= \bold{I} - (\bold{I} - P(0))\, e^{-\int_0^t \beta(s)ds} \end{align*}$$
(The second block solves the matrix ODE componentwise; since $P(0) = \bold{0}$, $P(t)$ stays proportional to $\bold{I}$, so this is valid.) Since $\text{Cov}(\bold{x}_0 \mid \bold{x}_0) = \bold{0}$, we have:
$$\text{Cov}(\bold{x}_t \mid \bold{x}_0) = \bold{I} - \bold{I}\, e^{-\int_0^t \beta(s)ds}$$
Therefore, the solution to the VP SDE is:
$$p_{0t}(\bold{x}(t) | \bold{x}(0)) = \mathcal{N} \left(\bold{x}(t);\; \bold{x}(0)\, e^{- \frac{1}{2} \int_0^t \beta(s) ds},\; \bold{I} - \bold{I}\, e^{-\int_0^t \beta(s)ds}\right) \qquad (\text{VP SDE})$$
Solution to the VE SDE:
$$\text{VE model}: \quad d\bold{x} = \sqrt{\frac{d[\sigma^2(t)]}{dt}}\, d\bold{w} \qquad (\bold{f}(\bold{x}, t) = \bold{0})$$
By Theorem 5.1:
$$\frac{d\bold{m}}{dt} = \bold{0} \;\Rightarrow\; \bold{m}(t) = C = \bold{m}(0) = \bold{x}_0$$
$$\begin{align*}\frac{d P(t)}{dt} &= \mathbb{E} \left[\frac{d[\sigma^2(t)]}{dt}\right] \bold{I} = \frac{d[\sigma^2(t)]}{dt}\, \bold{I} \\ dP(t) &= d[\sigma^2(t)]\, \bold{I} \\ P(t) - P(0) &= [\sigma^2(t) - \sigma^2(0)]\, \bold{I} \\ P(t) &= [\sigma^2(t) - \sigma^2(0)]\, \bold{I} \qquad (\text{using } P(0) = \bold{0})\end{align*}$$
Therefore, the solution to the VE SDE is:
$$p_{0t}(\bold{x}(t) | \bold{x}(0)) = \mathcal{N} \left(\bold{x}(t);\; \bold{x}(0),\; [\sigma^2(t) - \sigma^2(0)]\, \bold{I}\right) \qquad (\text{VE SDE})$$
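Both closed forms let us sample $\bold{x}(t)$ given $\bold{x}(0)$ directly, without simulating the SDE. A small sketch, where `beta` and `sigma` are callables for $\beta(t)$ and $\sigma(t)$; the particular schedules in the example are assumptions (they anticipate the choices discussed in Section 7).

```python
import numpy as np
from scipy.integrate import quad

def vp_kernel_sample(x0, t, beta, rng):
    """x(t) ~ N(x0 * exp(-1/2 int beta), (1 - exp(-int beta)) I)."""
    B = quad(beta, 0.0, t)[0]                       # \int_0^t beta(s) ds
    return np.exp(-0.5 * B) * x0 + np.sqrt(1.0 - np.exp(-B)) * rng.standard_normal(x0.shape)

def ve_kernel_sample(x0, t, sigma, rng):
    """x(t) ~ N(x0, (sigma(t)^2 - sigma(0)^2) I)."""
    std = np.sqrt(sigma(t) ** 2 - sigma(0.0) ** 2)
    return x0 + std * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0 = np.zeros(4)
xt_vp = vp_kernel_sample(x0, 0.5, beta=lambda s: 0.1 + s * (20.0 - 0.1), rng=rng)      # linear beta (assumption)
xt_ve = ve_kernel_sample(x0, 0.5, sigma=lambda s: 0.01 * (50.0 / 0.01) ** s, rng=rng)  # geometric sigma (assumption)
```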
6. Derive the mean and variance of the perturbation kernel from the sub-VP SDE
$$d \bold{x} = -\frac{1}{2} \beta(t)\, \bold{x}\, dt + \sqrt{\beta(t) (1 - e^{-2 \int_0^t \beta(s) ds})}\, d\bold{w}$$
Why the sub-VP SDE:
- It performs well on likelihoods.
- Its variance is bounded above by that of the VP SDE at every time.
Since the VE, VP, and sub-VP SDEs all have affine drift coefficients, their perturbation kernels $p_{0t} (\bold{x}(t) | \bold{x}(0))$ are all Gaussian and can be computed in closed form. This makes training with the score-matching loss efficient:
$$\theta^* = \argmin_{\theta} \mathbb{E}_t \left\{ \lambda(t)\, \mathbb{E}_{\bold{x}(0)} \mathbb{E}_{\bold{x}(t) | \bold{x}(0)} \left[||\bold{s}_{\theta}(\bold{x}(t), t) - \nabla_{\bold{x}(t)} \log p_{0t}(\bold{x}(t) | \bold{x}(0))||^2_2\right] \right\}$$
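A sketch of one training step of this continuous-time objective for the VP SDE, using the closed-form kernel from Section 5; the linear $\beta(t)$, the weighting $\lambda(t) = 1 - e^{-\int_0^t \beta(s) ds}$ (the kernel variance), and the `score_net(x, t)` interface are illustrative assumptions.

```python
import torch

def vp_dsm_loss(score_net, x0, beta_min=0.1, beta_max=20.0):
    """One stochastic estimate of the continuous-time DSM loss under the VP SDE."""
    t = torch.rand(x0.shape[0], device=x0.device)                 # t ~ Uniform(0, 1)
    B = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2       # int_0^t beta(s) ds (linear schedule)
    Bx = B.view(-1, *([1] * (x0.dim() - 1)))                      # reshape for broadcasting
    eps = torch.randn_like(x0)
    std = torch.sqrt(1.0 - torch.exp(-Bx))
    xt = torch.exp(-0.5 * Bx) * x0 + std * eps                    # sample from p_0t(x(t) | x(0))
    target = -eps / std                                           # grad log p_0t(x(t) | x(0))
    pred = score_net(xt, t)
    per_sample = ((pred - target) ** 2).reshape(x0.shape[0], -1).sum(dim=1)
    weight = 1.0 - torch.exp(-B)                                  # lambda(t): kernel variance (assumption)
    return (weight * per_sample).mean()
```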
Solution to the sub-VP SDE:
Corollary 6.1:
Given an ODE of the form $y'(x) + p(x) y(x) = f(x)$, the solution is given by:
$$y(x) = \frac{1}{\mu(x)} \left( \int f(\xi) \mu(\xi)\, d\xi + C \right)$$
where $\mu(x) = \exp \left( \int^x p(\xi)\, d\xi \right)$.
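A quick numerical check of this integrating-factor formula on a toy example ($p(x) = 2$, $f(x) = \sin x$, $y(0) = 1$ are arbitrary choices), comparing scipy's ODE solver against the closed form.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

p = lambda x: 2.0            # toy p(x)
f = lambda x: np.sin(x)      # toy f(x)
y0 = 1.0

def y_closed_form(x):
    """y(x) = (1/mu(x)) (int_0^x f(xi) mu(xi) dxi + C), with mu(x) = exp(int_0^x p)."""
    mu = lambda s: np.exp(quad(p, 0.0, s)[0])
    integral = quad(lambda s: f(s) * mu(s), 0.0, x)[0]
    C = y0 * mu(0.0)         # fixes the initial condition y(0) = y0
    return (integral + C) / mu(x)

sol = solve_ivp(lambda x, y: f(x) - p(x) * y, (0.0, 3.0), [y0], dense_output=True)
print(y_closed_form(3.0), sol.sol(3.0)[0])   # both values should agree closely
```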
Similarly to the VP SDE, we have:
$$\mathbb{E} [\bold{x}(t) | \bold{x}(0)] = \bold{x}(0)\, e^{-\frac{1}{2} \int_0^t \beta(s) ds}$$
By Theorem 5.1:
$$\begin{align*} \frac{d P(t)}{dt} &= -\beta(t) P(t) + \beta(t) \left(1 - \exp \left\{ -2 \int_0^t \beta(s) ds \right\}\right) \\ P'(t) + \beta(t) P(t) &= \beta(t) \left(1 - \exp \left\{ -2 \int_0^t \beta(s) ds \right\}\right)\end{align*}$$
By Corollary 6.1:
$$\begin{align*} P(t) &= \bold{I} \cdot \exp\left\{ -\int_0^t \beta(s) ds \right\} \cdot \left( \int_0^t \beta(s) \left[ 1 - \exp\left\{ -2 \int_0^s \beta(\xi) d\xi \right\} \right] \exp \left\{ \int_0^s \beta(\xi) d\xi \right\} ds + C \right) \\ &= \bold{I} \cdot \exp\left\{ -\int_0^t \beta(s) ds \right\} \cdot \left( \int_0^t \beta(s) \exp \left\{ \int_0^s \beta(\xi) d\xi \right\} ds - \int_0^t \beta(s) \exp \left\{ -\int_0^s \beta(\xi) d\xi \right\} ds + C \right)\end{align*}$$
Denote $\textcircled{1} = \int_0^t \beta(s) \exp \{ \int_0^s \beta(\xi) d\xi \} ds$ and $\textcircled{2} = \int_0^t \beta(s) \exp \{ -\int_0^s \beta(\xi) d\xi \} ds$, and solve them separately.
$$\begin{align*} \textcircled{1} &= \int_0^t \beta(s) \exp \left\{ \int_0^s \beta(\xi) d\xi \right\} ds = \left. \exp \left\{ \int_0^s \beta(\xi) d\xi \right\} \right|_{s=0}^{s=t} = \exp \left\{ \int_0^t \beta(s) ds \right\} - 1 \end{align*}$$
$$\begin{align*} \textcircled{2} &= \int_0^t \beta(s) \exp \left\{ -\int_0^s \beta(\xi) d\xi \right\} ds = \left. - \exp \left\{ -\int_0^s \beta(\xi) d\xi \right\} \right|_{s=0}^{s=t} = 1 - \exp \left\{ -\int_0^t \beta(s) ds \right\}\end{align*}$$
Therefore:
$$\begin{align*} P(t) &= \bold{I} \cdot \exp\left\{ -\int_0^t \beta(s) ds \right\} \cdot \left[ \exp \left\{ \int_0^t \beta(s) ds \right\} + \exp \left\{ - \int_0^t \beta(s) ds \right\} + C \right] \\ &= \bold{I} \cdot \left[ 1 + \exp \left\{ -2\int_0^t \beta(s) ds \right\} + \exp \left\{ -\int_0^t \beta(s) ds \right\} \cdot C\right]\end{align*}$$
(the constant $-2$ from $\textcircled{1} - \textcircled{2}$ has been absorbed into the arbitrary constant $C$). Plugging in $t=0$, we have $P(0) = \bold{I} (2 + C) \Rightarrow C \bold{I} = P(0) - 2\bold{I}$. Then:
$$P(t) = \bold{I} + e^{-2 \int_0^t \beta(s) ds}\, \bold{I} + e^{- \int_0^t \beta(s) ds}\, (P(0) - 2 \bold{I})$$
Note: if $\lim_{t \rightarrow \infty} \int_0^t \beta(s) ds = \infty$, then $\lim_{t \rightarrow \infty} P(t) = \bold{I}$. This justifies the use of sub-VP SDEs for score-based generative modeling, since they can perturb any data distribution to a standard Gaussian under suitable conditions.
Since $P(0) = \bold{0}$, we have:
$$P(t) = \bold{I} + e^{-2 \int_0^t \beta(s) ds}\, \bold{I} - 2 e^{- \int_0^t \beta(s) ds}\, \bold{I} = \left[1 - e^{- \int_0^t \beta(s) ds}\right]^2 \bold{I}$$
Therefore, the solution to the sub-VP SDE is:
$$p_{0t}(\bold{x}(t) | \bold{x}(0)) = \mathcal{N} \left(\bold{x}(t);\; \bold{x}(0)\, e^{-\frac{1}{2} \int_0^t \beta(s) ds},\; \left[1 - e^{- \int_0^t \beta(s) ds}\right]^2 \bold{I}\right) \qquad \text{(sub-VP SDE)}$$
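The sub-VP variance $[1 - e^{-\int_0^t \beta(s) ds}]^2$ is always at most the VP variance $1 - e^{-\int_0^t \beta(s) ds}$, since the latter lies in $[0, 1]$. A tiny check, with a linear $\beta(t)$ chosen only for illustration:

```python
import numpy as np

beta_min, beta_max = 0.1, 20.0                            # illustrative values (cf. Section 7)
t = np.linspace(0.0, 1.0, 11)
B = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2   # int_0^t beta(s) ds for a linear schedule

var_vp = 1.0 - np.exp(-B)
var_subvp = (1.0 - np.exp(-B)) ** 2

assert np.all(var_subvp <= var_vp + 1e-12)                # sub-VP variance never exceeds VP variance
print(np.round(var_vp, 3))
print(np.round(var_subvp, 3))
```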
7. How to choose the noise scales
7.1 SMLD (VE SDE)
In SMLD, the noise scales $\{ \sigma_i \}_{i=1}^N$ are typically a geometric sequence where $\sigma_{\text{min}} = 0.01$ and $\sigma_{\text{max}}$ is chosen according to Technique 1 in Song & Ermon (2020).
Technique 1: choose $\sigma_{\text{max}}$ to be as large as the maximum Euclidean distance between all pairs of training data points. (Usually, SMLD models normalize image inputs to the range $[0,1]$.)
Since $\{ \sigma_i \}_{i=1}^N$ is a geometric sequence, we have $\sigma(\frac{i}{N}) = \sigma_i = \sigma_{\text{min}} (\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}})^{\frac{i-1}{N-1}}$ for $i = 1, \dots, N$. When $N \rightarrow \infty$, $\sigma(t) = \sigma_{\text{min}} (\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}})^t$ for $t \in (0, 1]$.
Then the corresponding VE SDE is:
$$\begin{align*} d \bold{x} &= \sqrt{\frac{d[\sigma^2(t)]}{dt}}\, d\bold{w} \\ &= \sigma_{\text{min}} \left(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}\right)^t \sqrt{2 \ln \frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}}\, d\bold{w}\end{align*}$$
The perturbation kernel follows from the VE SDE solution in Section 5:
$$p_{0t} (\bold{x}(t) | \bold{x}(0)) = \mathcal{N} \left( \bold{x}(t); \bold{x}(0),\; \sigma^2_{\text{min}} \left(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}\right)^{2t} \bold{I} \right), \quad t \in (0, 1]$$
7.2 DDPM (VP SDE)
In DDPM, $\{ \beta_i \}_{i=1}^N$ is typically an arithmetic sequence where $\beta_i = \frac{\bar{\beta}_{\text{min}}}{N} + \frac{i-1}{N(N-1)}(\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}})$ for $i=1,\dots,N$. As $N \rightarrow \infty$, $\beta(t) = \bar{\beta}_{\text{min}} + t (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}})$ for $t \in [0, 1]$. This corresponds to the following instantiation of the VP SDE:
$$d \bold{x} = -\frac{1}{2} (\bar{\beta}_{\text{min}} + t (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}}))\, \bold{x}\, dt + \sqrt{\bar{\beta}_{\text{min}} + t (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}})}\, d \bold{w}, \quad t\in[0,1],$$
where $\bold{x}(0) \sim p_{\text{data}}(\bold{x})$. Note: $\bar{\beta}_{\text{min}} = 0.1$ and $\bar{\beta}_{\text{max}} = 20$ are set to match previous work.
Then the perturbation kernel becomes:
$$p_{0t}(\bold{x}(t) | \bold{x}(0)) = \mathcal{N} \left( \bold{x}(t);\; e^{-\frac{1}{4}t^2 (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}})- \frac{1}{2} t \bar{\beta}_{\text{min}}}\,\bold{x}(0),\; \bold{I} - \bold{I}\, e^{-\frac{1}{2} t^2 (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}}) -t \bar{\beta}_{\text{min}}} \right), \quad t\in[0,1]$$
7.3 sub-VP SDE
For the sub-VP SDE, the same $\beta(t)$ is used as for the VP SDE. This leads to the following perturbation kernel:
$$p_{0t}(\bold{x}(t) | \bold{x}(0)) = \mathcal{N} \left( \bold{x}(t);\; e^{-\frac{1}{4}t^2 (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}})- \frac{1}{2} t \bar{\beta}_{\text{min}}}\,\bold{x}(0),\; \left[1- e^{-\frac{1}{2} t^2 (\bar{\beta}_{\text{max}} - \bar{\beta}_{\text{min}}) -t \bar{\beta}_{\text{min}}}\right]^2 \bold{I} \right), \quad t\in[0,1]$$
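A small helper collecting the mean coefficients and standard deviations of the three perturbation kernels above; $\sigma_{\max}$ is data-dependent (Technique 1), so the default value below is only a placeholder.

```python
import numpy as np

def ve_kernel(t, sigma_min=0.01, sigma_max=50.0):
    """VE: mean coefficient 1, std sigma_min * (sigma_max / sigma_min)^t (sigma_max is a placeholder)."""
    return 1.0, sigma_min * (sigma_max / sigma_min) ** t

def vp_kernel(t, beta_min=0.1, beta_max=20.0):
    """VP: mean coeff exp(-1/4 t^2 (bmax - bmin) - 1/2 t bmin), var 1 - exp(-1/2 t^2 (bmax - bmin) - t bmin)."""
    log_coeff = -0.25 * t ** 2 * (beta_max - beta_min) - 0.5 * t * beta_min
    return np.exp(log_coeff), np.sqrt(1.0 - np.exp(2.0 * log_coeff))

def subvp_kernel(t, beta_min=0.1, beta_max=20.0):
    """sub-VP: same mean coefficient as VP, std 1 - exp(-1/2 t^2 (bmax - bmin) - t bmin)."""
    mean_coeff, _ = vp_kernel(t, beta_min, beta_max)
    return mean_coeff, 1.0 - np.exp(-0.5 * t ** 2 * (beta_max - beta_min) - t * beta_min)

for t in (0.1, 0.5, 1.0):
    print(t, ve_kernel(t), vp_kernel(t), subvp_kernel(t))
```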
8. Two ways of sampling from SDEs: ODE and reverse-SDE
Solution 1: convert the SDE to an ODE.
Solution 2: convert the SDE to a reverse-time SDE.
8.1 Reverse Diffusion Sampling
Given a forward SDE:
$$d \bold{x} = \bold{f}(\bold{x}, t)\, dt + \bold{G}(t)\, d\bold{w}$$
the following iteration rule is a discretization of it:
$$\bold{x}_{i+1} = \bold{x}_i + \bold{f}_i(\bold{x}_i) + \bold{G}_i \bold{z}_i, \qquad i=0,1,\dots,N-1,$$
where $\bold{z}_i \sim \mathcal{N}(\bold{0}, \bold{I})$. The time discretization schedule is absorbed into the notation $\bold{f}_i$ and $\bold{G}_i$.
Then we want to discretize the reverse-time SDE:
$$d\bold{x} = [\bold{f}(\bold{x}, t) - \bold{G}(t) \bold{G}^T(t) \nabla_{\bold{x}}\log p_t(\bold{x})]\, dt + \bold{G}(t)\, d \bold{\bar{w}}$$
With a similar functional form, the discretization is given by:
$$\bold{x}_i = \bold{x}_{i+1} - \bold{f}_{i+1}(\bold{x}_{i+1}) + \bold{G}_{i+1}\bold{G}^T_{i+1}\, \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \bold{G}_{i+1} \bold{z}_{i+1}$$
where the trained score-based model $\bold{s}_{\theta^*}(\bold{x}_i, i)$ is conditioned on the iteration number $i$.
Note: the discretization follows from $d \bold{x} = \bold{x}_i - \bold{x}_{i+1}$, $dt = \Delta t = -1$, and $d \bold{\bar{w}} = \bold{w}_{t} - \bold{w}_{t + \Delta t} = \sqrt{|\Delta t|}\, \bold{z} \sim \mathcal{N}(\bold{0}, |\Delta t|\, \bold{I})$.
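A sketch of this reverse-diffusion (predictor-only) discretization for the VE/SMLD case, where $\bold{f} = \bold{0}$ and $\bold{G}\bold{G}^T = (\sigma_i^2 - \sigma_{i-1}^2)\bold{I}$ following Section 4; the exact indexing convention and the `score_net(x, sigma)` interface are assumptions.

```python
import torch

@torch.no_grad()
def reverse_diffusion_ve(score_net, sigmas, shape):
    """Predictor-only reverse diffusion for the VE discretization (f = 0)."""
    sig = torch.cat([torch.zeros(1), sigmas])         # prepend sigma_0 = 0 as in Section 4
    x = sig[-1] * torch.randn(shape)                  # x_N ~ N(0, sigma_max^2 I)
    for i in range(len(sig) - 1, 0, -1):              # i = N, ..., 1
        g2 = sig[i] ** 2 - sig[i - 1] ** 2            # G G^T at this step
        z = torch.randn_like(x)
        x = x + g2 * score_net(x, sig[i]) + g2.sqrt() * z
    return x
```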
8.2 Probability Flow ODE
Consider an SDE in the general form:
$$d \bold{x} = \bold{f}(\bold{x}, t)\, dt + \bold{G}(\bold{x}, t)\, d\bold{w}$$
where $\bold{f}(\bold{x}, t): \mathbb{R}^d \rightarrow \mathbb{R}^d$ and $\bold{G}(\bold{x}, t): \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$. The corresponding probability flow ODE is:
$$d\bold{x} = \left\{ \bold{f}(\bold{x}, t) - \frac{1}{2}\nabla \cdot [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] - \frac{1}{2}\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T \nabla_{\bold{x}} \log p_t(\bold{x})\right\} dt$$
where we define $\nabla \cdot \bold{F}(\bold{x}) \coloneqq (\nabla \cdot \bold{f}^1(\bold{x}), \nabla \cdot \bold{f}^2 (\bold{x}), \dots, \nabla \cdot \bold{f}^d (\bold{x}))^T: \mathbb{R}^d \rightarrow \mathbb{R}^d$ for a matrix-valued function $\bold{F} (\bold{x}) \coloneqq (\bold{f}^1(\bold{x}), \bold{f}^2(\bold{x}), \dots, \bold{f}^d(\bold{x}))^T: \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$.
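For the VP SDE, $\bold{G}$ does not depend on $\bold{x}$, so the divergence term vanishes and the probability flow ODE reduces to $d\bold{x} = [-\frac{1}{2}\beta(t)\bold{x} - \frac{1}{2}\beta(t)\nabla_{\bold{x}}\log p_t(\bold{x})]\,dt$. A sketch of sampling by integrating this ODE backwards in time with a black-box solver; the linear $\beta(t)$ and the stand-in score function are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = lambda t: 0.1 + t * (20.0 - 0.1)      # linear beta(t), an illustrative choice (cf. Section 7)

def probability_flow_ode_sample(score_fn, dim, seed=0):
    """Integrate dx/dt = -1/2 beta(t) (x + score(x, t)) from t = 1 down to t ~ 0."""
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(dim)            # x(1) is approximately N(0, I) under the VP SDE

    def ode_func(t, x):
        return -0.5 * beta(t) * (x + score_fn(x, t))

    sol = solve_ivp(ode_func, t_span=(1.0, 1e-3), y0=x1, rtol=1e-5, atol=1e-5)
    return sol.y[:, -1]

# With the exact score of N(0, I) as a stand-in model, samples stay standard normal:
x0 = probability_flow_ode_sample(lambda x, t: -x, dim=4)
print(x0)
```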
Recap on vector differential calculus:
Let $u: \mathbb{R}^d \rightarrow \mathbb{R}$ be a scalar-valued function that takes a $d$-dimensional vector as input. Gradient of a scalar-valued function: $\nabla u = (\partial_1 u, \dots, \partial_d u)^T$.
Let $\bold{u}: \mathbb{R}^d \rightarrow \mathbb{R}^d$, $\bold{u}(\bold{x}) = (u_1(\bold{x}), \dots, u_d(\bold{x}))^T$, be a vector-valued function that takes a $d$-dimensional vector as input and outputs a $d$-dimensional vector. Divergence of a vector-valued function: $\nabla \cdot \bold{u} = \sum_{i=1}^d \partial_i u_i$.
Let $\bold{U}(\bold{x}): \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$ be a matrix-valued function that takes a $d$-dimensional vector as input and outputs a $d \times d$ matrix. Divergence of a matrix-valued function (applied row-wise):
$$\bold{U}(\bold{x}) = \begin{bmatrix} u_{11}(\bold{x}) & \dots & u_{1d}(\bold{x}) \\ \vdots & \ddots & \vdots \\ u_{d1}(\bold{x}) & \dots & u_{dd}(\bold{x})\end{bmatrix} = \begin{bmatrix} \bold{u}_1(\bold{x}) \\ \vdots \\ \bold{u}_d(\bold{x})\end{bmatrix}, \qquad \nabla \cdot \bold{U}(\bold{x}) = \begin{bmatrix} \nabla \cdot \bold{u}_1(\bold{x}) \\ \vdots \\ \nabla \cdot \bold{u}_d(\bold{x}) \end{bmatrix} = \begin{bmatrix} \sum_{j=1}^d \partial_j u_{1j}(\bold{x}) \\ \vdots \\ \sum_{j=1}^d \partial_j u_{dj}(\bold{x}) \end{bmatrix}$$
Derivation of the Probability Flow ODE:
Let $p_t(\bold{x}(t))$ denote the marginal density of $\bold{x}(t)$. According to Kolmogorov's forward equation (the Fokker-Planck equation), we have:
$$\frac{\partial p_t(\bold{x})}{\partial t} = - \sum_{i=1}^d \frac{\partial}{\partial x_i}[f_i(\bold{x}, t) p_t(\bold{x})] + \frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d \frac{\partial^2}{\partial x_i \partial x_j} \left[\sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk}(\bold{x}, t) p_t(\bold{x})\right].$$
This equation can be rewritten as:
$$\frac{\partial p_t(\bold{x})}{\partial t} = - \sum_{i=1}^d \frac{\partial}{\partial x_i}[f_i(\bold{x}, t) p_t(\bold{x})] + \frac{1}{2} \sum_{i=1}^d \frac{\partial}{\partial x_i} \left[ \sum_{j=1}^d \frac{\partial}{\partial x_j} \left[ \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk} (\bold{x}, t)p_t(\bold{x}) \right] \right]$$
Note that:
$$\begin{align*} &\sum_{j=1}^d \frac{\partial}{\partial x_j} \left[ \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk} (\bold{x}, t)p_t(\bold{x}) \right] = \sum_{j=1}^d \frac{\partial}{\partial x_j} \left[ p_t(\bold{x}) \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk} (\bold{x}, t) \right] \\ =& \sum_{j=1}^d \frac{\partial}{\partial x_j} \left[ \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk} (\bold{x}, t) \right] p_t(\bold{x}) + \sum_{j=1}^d \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk}(\bold{x}, t)\, p_t(\bold{x}) \frac{\partial}{\partial x_j} \log p_t(\bold{x}) \end{align*}$$
Explanation 1: $f(x) \frac{d \log f(x)}{dx} = f(x) \cdot \frac{1}{f(x)} \frac{d f(x)}{dx} = \frac{df(x)}{dx}$, which is why $\partial_j p_t(\bold{x})$ can be written as $p_t(\bold{x})\, \partial_j \log p_t(\bold{x})$.
For simplicity, let's denote \bold{F} (\bold{x}, t) = (\bold{F}_{ij}(\bold{x}, t))_{d \times d} = \bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T, where \bold{F}_{ij}(\bold{x}, t) = \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk}(\bold{x}, t). Then we can rewrite the above expression as:
\begin{align*} &\sum_{j=1}^d \frac{\partial}{\partial x_j} \left[ \sum_{k=1}^d G_{ik}(\bold{x}, t) G_{jk} (\bold{x}, t)p_t(\bold{x}) \right] \\ =& \sum_{j=1}^d \frac{\partial}{\partial x_j}\bold{F}_{ij}(\bold{x}, t)\, p_t(\bold{x}) + \sum_{j=1}^d \bold{F}_{ij}(\bold{x}, t) p_t(\bold{x}) \frac{\partial}{\partial x_j} \log p_t(\bold{x}) \\ =& p_t(\bold{x})\sum_{j=1}^d \frac{\partial}{\partial x_j}\bold{F}_{ij}(\bold{x}, t) + p_t(\bold{x})\sum_{j=1}^d \bold{F}_{ij}(\bold{x}, t) \frac{\partial}{\partial x_j} \log p_t(\bold{x}) \\ =& p_t(\bold{x}) [\nabla \cdot \bold{F} (\bold{x}, t)]_i + p_t(\bold{x}) [\bold{F} (\bold{x}, t)^T \cdot \nabla_{\bold{x}} \log p_t(\bold{x})]_i \end{align*} (the last step uses the symmetry of \bold{F} = \bold{G}\bold{G}^T). Therefore, substituting this back into the Fokker-Planck equation:
\begin{align*} \frac{\partial p_t(\bold{x})}{\partial t} &= - \sum_{i=1}^d \frac{\partial}{\partial x_i}[f_i(\bold{x}, t) p_t(\bold{x})] + \frac{1}{2} \sum_{i=1}^d \frac{\partial}{\partial x_i} \left[p_t(\bold{x}) [\nabla \cdot \bold{F} (\bold{x}, t)]_i + p_t(\bold{x}) [\bold{F} (\bold{x}, t)^T \cdot \nabla_{\bold{x}} \log p_t(\bold{x})]_i \right] \\ &= -\sum_{i=1}^d \frac{\partial}{\partial x_i} \left\{ p_t(\bold{x}) \left[ f_i(\bold{x}, t) - \frac{1}{2} [\nabla \cdot \bold{F} (\bold{x}, t)]_i - \frac{1}{2} [\bold{F} (\bold{x}, t)^T \cdot \nabla_{\bold{x}} \log p_t(\bold{x})]_i \right] \right\} \\ &= -\sum_{i=1}^d \frac{\partial}{\partial x_i}[\tilde{f}_i(\bold{x}, t)p_t(\bold{x})] \end{align*} Here:
\begin{align*} \bold{\tilde{f}}(\bold{x}, t) &= \bold{f}(\bold{x}, t) - \frac{1}{2} \nabla \cdot \bold{F}(\bold{x}, t) - \frac{1}{2} \bold{F}(\bold{x}, t)^T \nabla_{\bold{x}} \log p_t(\bold{x}) \\ &= \bold{f}(\bold{x}, t) - \frac{1}{2} \nabla \cdot [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] - \frac{1}{2} [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] \nabla_{\bold{x}} \log p_t(\bold{x}) \end{align*} Therefore, the same Fokker-Planck equation can be derived not only from d \bold{x} = \bold{f}(\bold{x}, t) dt + \bold{G}(\bold{x}, t) d\bold{w}, but also from another SDE with \bold{\tilde{G}}(\bold{x}, t) \coloneqq 0:
d \bold{x} = \bold{\tilde{f}}(\bold{x}, t) dt + \bold{\tilde{G}}(\bold{x}, t) d\bold{w} which is essentially an ODE:
d \bold{x} = \bold{\tilde{f}}(\bold{x}, t) dt = \left\{ \bold{f}(\bold{x}, t) - \frac{1}{2} \nabla \cdot [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] - \frac{1}{2} [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] \nabla_{\bold{x}} \log p_t(\bold{x}) \right\} dt Hence, we have derived the probability flow ODE corresponding to the original SDE.
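As a minimal illustration of this result (a toy example of my own, not from the paper), the sketch below integrates the probability flow ODE for a 1-D VE SDE whose score is known in closed form: if the data is x(0) \sim \mathcal{N}(0, 1) and \sigma(t) = \sigma_{\text{min}}(\sigma_{\text{max}}/\sigma_{\text{min}})^t, then p_t = \mathcal{N}(0, 1 + \sigma^2(t)) and \nabla_x \log p_t(x) = -x/(1+\sigma^2(t)). Because the probability flow is an ODE, any black-box solver can be used; here scipy's `solve_ivp` integrates from t = 1 back to t = 0. The schedule values and function names are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma_min, sigma_max = 0.01, 10.0

def sigma2(t):
    # sigma(t) = sigma_min * (sigma_max / sigma_min)**t  (VE schedule), squared
    return (sigma_min * (sigma_max / sigma_min) ** t) ** 2

def dsigma2_dt(t):
    return 2.0 * sigma2(t) * np.log(sigma_max / sigma_min)

def score(x, t):
    # Closed-form score of p_t = N(0, 1 + sigma^2(t)) for toy data p_0 = N(0, 1)
    return -x / (1.0 + sigma2(t))

def pf_ode(t, x):
    # Probability flow drift: f - 1/2 * g^2 * score, with f = 0 and g^2 = d[sigma^2]/dt
    return -0.5 * dsigma2_dt(t) * score(x, t)

# Start from the prior x(1) ~ N(0, 1 + sigma_max^2) and integrate backwards to t = 0
rng = np.random.default_rng(0)
x_T = rng.normal(0.0, np.sqrt(1.0 + sigma_max ** 2), size=1000)
sol = solve_ivp(pf_ode, t_span=(1.0, 0.0), y0=x_T, rtol=1e-5, atol=1e-5)
x_0 = sol.y[:, -1]
print("sample std at t=0:", x_0.std())  # should be close to 1 for this toy setup
```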
9. Sample from VE SDE and VP SDE Recall that the forward SDE takes the form:
d \bold{x} = \bold{f}(\bold{x}, t) dt + \bold{G}(t) d \bold{w} 9.1 Reverse Diffusion Sampling In Section 8.1, given the reverse-time SDE:
d \bold{x} = \left[ \bold{f}(\bold{x}, t) - \bold{G}(t)\bold{G}(t)^T \nabla_{\bold{x}} \log p_t(\bold{x}) \right] dt + \bold{G}(t) d\bold{\bar{w}} we derived its discretized form:
\bold{x}_i = \bold{x}_{i+1} - \bold{f}_{i+1}(\bold{x}_{i+1}) + \bold{G}_{i+1}\bold{G}_{i+1}^T \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \bold{G}_{i+1}\bold{z}_{i+1} Then, by applying this equation to the VE SDE and the VP SDE, we can obtain their samplers respectively.
VE SDE Sampler: d \bold{x} = \sqrt{\frac{d[\sigma^2(t)]}{dt}} d\bold{w} For the VE SDE, \bold{f}(\bold{x}, t) = 0 and \bold{G}(t) = \sqrt{\frac{d[\sigma^2(t)]}{dt}}, which discretizes to \bold{G}_{i+1} = \sqrt{\sigma_{i+1}^2 - \sigma_i^2}. Therefore, the VE SDE sampler is:
\begin{align*} \bold{x}_i &= \bold{x}_{i+1} + \bold{G}_{i+1}\bold{G}_{i+1}^T \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \bold{G}_{i+1}\bold{z}_{i+1} \\ &= \bold{x}_{i+1} + (\sigma_{i+1}^2 - \sigma_i^2) \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\sigma_{i+1}^2 - \sigma_i^2}\, \bold{z}_{i+1} \end{align*}
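A minimal NumPy sketch of this VE sampler, assuming a geometric noise schedule and a placeholder analytic score (the same toy Gaussian data as in the ODE example above, standing in for a trained s_{\theta^*}):

```python
import numpy as np

N = 1000
sigma_min, sigma_max = 0.01, 10.0
# Assumed geometric noise schedule sigma_1 < ... < sigma_N (a common VE choice)
sigmas = sigma_min * (sigma_max / sigma_min) ** (np.arange(1, N + 1) / N)

def score(x, sigma):
    # Placeholder for s_theta*(x, sigma): exact score of N(0, 1 + sigma^2),
    # i.e. we pretend the data distribution is a standard Gaussian.
    return -x / (1.0 + sigma ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.0, sigmas[-1], size=10_000)     # x_N ~ N(0, sigma_max^2 I)
for i in range(N - 1, 0, -1):                    # one step from level sigmas[i] down to sigmas[i-1]
    tau = sigmas[i] ** 2 - sigmas[i - 1] ** 2    # sigma_{i+1}^2 - sigma_i^2 in the text's indexing
    z = rng.standard_normal(x.shape)
    x = x + tau * score(x, sigmas[i]) + np.sqrt(tau) * z
print("std of generated samples:", x.std())      # roughly 1 for this toy setup
```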
VP SDE Sampler: d \bold{x} = -\frac{1}{2} \beta(t) \bold{x}\, dt + \sqrt{\beta(t)} d\bold{w} For the VP SDE, \bold{f}(\bold{x}, t) = -\frac{1}{2} \beta(t) \bold{x} and \bold{G}(t) = \sqrt{\beta(t)}. Therefore, the VP SDE sampler is:
\begin{align*} \bold{x}_i &= \bold{x}_{i+1} + \frac{1}{2} \beta_{i+1} \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &= \left(1 + \frac{1}{2} \beta_{i+1}\right) \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &\approx \left(2 - \sqrt{1 - \beta_{i+1}}\right) \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \end{align*}
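A corresponding sketch of the VP reverse-diffusion sampler, with an assumed linear beta schedule and a toy placeholder score (if the data is N(0, I), the VP-perturbed marginal is N(0, I) at every step, so the exact score is simply -x):

```python
import numpy as np

N = 1000
betas = np.linspace(1e-4, 0.02, N)        # assumed linear beta schedule (DDPM-style)

def score(x, i):
    # Placeholder for s_theta*(x, i): exact score of the toy N(0, I) marginals
    return -x

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)           # x_N ~ N(0, I)
for i in range(N - 1, -1, -1):            # reverse diffusion step i+1 -> i
    beta = betas[i]
    z = rng.standard_normal(x.shape) if i > 0 else 0.0   # optionally skip noise at the last step
    x = (2.0 - np.sqrt(1.0 - beta)) * x + beta * score(x, i) + np.sqrt(beta) * z
print("mean, std:", x.mean(), x.std())    # should stay close to 0 and 1
```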
9.2 Probability Flow Sampling In Section 8.2, given a general-form SDE:
d \bold{x} = \bold{f}(\bold{x}, t) dt + \bold{G}(\bold{x}, t) d\bold{w} we derived the corresponding probability flow ODE:
d \bold{x} = \left\{ \bold{f}(\bold{x}, t) - \frac{1}{2} \nabla \cdot [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] - \frac{1}{2} [\bold{G}(\bold{x}, t)\bold{G}(\bold{x}, t)^T] \nabla_{\bold{x}} \log p_t(\bold{x}) \right\} dt When \bold{G}(\bold{x}, t) = \bold{G}(t), the divergence term vanishes, and the ODE becomes:
d \bold{x} = \left\{ \bold{f}(\bold{x}, t) - \frac{1}{2} \bold{G}(t)\bold{G}(t)^T \nabla_{\bold{x}} \log p_t(\bold{x}) \right\} dt We can employ any numerical method to integrate the probability flow ODE backwards in time, substituting the score function with the trained neural network, to generate samples. One discretization takes the following form:
\bold{x}_i = \bold{x}_{i+1} - \bold{f}_{i+1} (\bold{x}_{i+1}) + \frac{1}{2} \bold{G}_{i+1} \bold{G}_{i+1}^T \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1), \qquad i=0,1,\dots,N-1 By analogy to Section 9.1, we can obtain the probability flow sampling rules for SMLD (the VE model) and DDPM (the VP model).
SMLD ODE Sampling: \bold{x}_i = \bold{x}_{i+1} + \frac{1}{2}(\sigma_{i+1}^2 - \sigma_i^2) \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) DDPM ODE Sampling: \bold{x}_i = \left(2 - \sqrt{1 - \beta_{i+1}}\right) \bold{x}_{i+1} + \frac{1}{2}\beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1)
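Compared with the samplers in Section 9.1, the noise term is dropped and the score term is halved. A sketch of the deterministic DDPM ODE sampler, reusing the same toy schedule and placeholder score as above (with exact N(0, I) data the flow is nearly the identity map, since the prior already equals the data distribution):

```python
import numpy as np

N = 1000
betas = np.linspace(1e-4, 0.02, N)        # assumed linear schedule, as before

def score(x, i):
    return -x                             # toy placeholder for s_theta*(x, i)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)           # x_N ~ N(0, I)
for i in range(N - 1, -1, -1):
    beta = betas[i]
    # DDPM probability-flow update: deterministic, score term halved, no noise added
    x = (2.0 - np.sqrt(1.0 - beta)) * x + 0.5 * beta * score(x, i)
print("std:", x.std())                    # stays close to 1 for this toy case
```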
10. Equivalence between DDPM Sampling and VP Reverse-SDE Claim: DDPM sampling is equivalent to a numerical solution of the VP reverse-SDE.
The DDPM sampling takes the form:
\bold{x}_i = \frac{1}{\sqrt{1 - \beta_{i+1}}} \left(\bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1)\right) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} Applying the first-order Taylor expansion of f(x) = (1-x)^{-\frac{1}{2}} at x=0,
\begin{align*} \bold{x}_i &= \left(1 + \frac{1}{2} \beta_{i+1} + o(\beta_{i+1})\right)\left(\bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1)\right) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &\approx \left( 1 + \frac{1}{2} \beta_{i+1} \right)\left(\bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1)\right) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &= \left( 1 + \frac{1}{2} \beta_{i+1} \right) \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \frac{1}{2}\beta_{i+1}^2 \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) +\sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &\approx \left( 1 + \frac{1}{2} \beta_{i+1} \right) \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &= \left[ 2 - \left(1 - \frac{1}{2} \beta_{i+1}\right) \right] \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\beta_{i+1}} \bold{z}_{i+1} \\ &\approx \left( 2 - \sqrt{1 - \beta_{i+1}} \right) \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) + \sqrt{\beta_{i+1}} \bold{z}_{i+1}\end{align*} which is exactly the VP SDE (reverse diffusion) sampler derived in Section 9.1.
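The two approximations used above, (1-\beta)^{-\frac{1}{2}} \approx 1 + \frac{\beta}{2} and 1 - \frac{\beta}{2} \approx \sqrt{1-\beta}, are easy to check numerically for the small \beta_i used in practice (both errors are O(\beta^2)); a quick sanity check:

```python
import numpy as np

for beta in (1e-4, 1e-3, 1e-2, 2e-2):
    lhs1, rhs1 = (1 - beta) ** -0.5, 1 + beta / 2     # (1-b)^(-1/2) vs 1 + b/2
    lhs2, rhs2 = 1 - beta / 2, np.sqrt(1 - beta)      # 1 - b/2 vs sqrt(1-b)
    print(f"beta={beta:.0e}  err1={abs(lhs1 - rhs1):.2e}  err2={abs(lhs2 - rhs2):.2e}")
```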
11. Predictor-Corrector Samplers Algorithm 1 Predictor-Corrector (PC) sampling
Require :
N N N : Number of discretization steps for the reverse-time SDE
M M M : Number of corrector steps
Initialize X N ∼ p T ( x ) \bold{X}_N \sim p_T(\bold{x}) X N ∼ p T ( x )
for i = N − 1 i = N - 1 i = N − 1 to 0 0 0 do
x i ← Predictor ( x i + 1 ) \bold{x}_i \leftarrow \text{Predictor}(\bold{x}_{i+1}) x i ← Predictor ( x i + 1 )
for j = 1 j=1 j = 1 to M M M do
x i ← Corrector ( x i ) \bold{x}_i \leftarrow \text{Corrector}(\bold{x}_i) x i ← Corrector ( x i )
return x 0 \bold{x}_0 x 0
Predictor-Corrector (PC) sampling The predictor can be any numerical solver for the reverse-time SDE with a fixed discretization strategy. The corrector can be any score-based MCMC approach.
For example, when we use the reverse diffusion SDE solver (Section 9.1) as the predictor and annealed Langevin dynamics as the corrector, we have the following two algorithms for VE and VP SDEs respectively, where { ϵ i } i = 0 N − 1 \{ \epsilon_i \}_{i=0}^{N-1} { ϵ i } i = 0 N − 1 are step sizes for Langevin dynamics.
Algorithm 2 PC sampling (VE SDE)
x N ∼ N ( 0 , σ max 2 I ) \bold{x}_N \sim \mathcal{N}(\bold{0}, \sigma_{\text{max}}^2 \bold{I}) x N ∼ N ( 0 , σ max 2 I )
for i = N − 1 i=N-1 i = N − 1 to 0 0 0 do
x i ′ ← x i + 1 + ( σ i + 1 2 − σ i 2 ) s θ ∗ ( x i + 1 , σ i + 1 ) \bold{x}_i' \leftarrow \bold{x}_{i+1} + (\sigma_{i+1}^2 - \sigma_i^2) \bold{s}_{\theta^*}(\bold{x}_{i+1}, \sigma_{i+1}) x i ′ ← x i + 1 + ( σ i + 1 2 − σ i 2 ) s θ ∗ ( x i + 1 , σ i + 1 )
z ∼ N ( 0 , I ) \bold{z} \sim \mathcal{N}(\bold{0}, \bold{I}) z ∼ N ( 0 , I )
x i ← x i ′ + σ i + 1 2 − σ i 2 z \bold{x}_i \leftarrow \bold{x}_i' + \sqrt{\sigma_{i+1}^2 - \sigma_i^2} \bold{z} x i ← x i ′ + σ i + 1 2 − σ i 2 z
for j = 1 j=1 j = 1 to M M M do
z ∼ N ( 0 , I ) \bold{z} \sim \mathcal{N}(\bold{0}, \bold{I}) z ∼ N ( 0 , I )
x i ← x i + ϵ i s θ ∗ ( x i , σ i ) + 2 ϵ i z \bold{x}_i \leftarrow \bold{x}_i + \epsilon_i \bold{s}_{\theta^*}(\bold{x}_i, \sigma_i) + \sqrt{2 \epsilon_i} \bold{z} x i ← x i + ϵ i s θ ∗ ( x i , σ i ) + 2 ϵ i z
return x 0 \bold{x}_0 x 0
Algorithm 3 PC sampling (VP SDE)
x N ∼ N ( 0 , I ) \bold{x}_N \sim \mathcal{N}(\bold{0}, \bold{I}) x N ∼ N ( 0 , I )
for i = N − 1 i = N-1 i = N − 1 to 0 0 0 do
x i ′ ← ( 2 − 1 − β i + 1 ) x i + 1 + β i + 1 s θ ∗ ( x i + 1 , i + 1 ) \bold{x}_i' \leftarrow (2-\sqrt{1-\beta_{i+1}}) \bold{x}_{i+1} + \beta_{i+1} \bold{s}_{\theta^*}(\bold{x}_{i+1}, i+1) x i ′ ← ( 2 − 1 − β i + 1 ) x i + 1 + β i + 1 s θ ∗ ( x i + 1 , i + 1 )
z ∼ N ( 0 , I ) \bold{z} \sim \mathcal{N} (\bold{0}, \bold{I}) z ∼ N ( 0 , I )
x i ← x i ′ + β i + 1 z \bold{x}_i \leftarrow \bold{x}_i' + \sqrt{\beta_{i+1}} \bold{z} x i ← x i ′ + β i + 1 z
for j = 1 j = 1 j = 1 to M M M do
z ∼ N ( 0 , I ) \bold{z} \sim \mathcal{N}(\bold{0}, \bold{I}) z ∼ N ( 0 , I )
x i ← x i + ϵ i s θ ∗ ( x i , i ) + 2 ϵ i z \bold{x}_i \leftarrow \bold{x}_i + \epsilon_i \bold{s}_{\theta^*}(\bold{x}_i, i) + \sqrt{2 \epsilon_i} \bold{z} x i ← x i + ϵ i s θ ∗ ( x i , i ) + 2 ϵ i z
return x 0 \bold{x}_0 x 0
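To make the loops above concrete, here is a minimal NumPy sketch of Algorithm 3 (PC sampling for the VP SDE). The linear beta schedule, the toy score s_{\theta^*}(\bold{x}, i) = -\bold{x} (exact for N(0, I) data), and the fixed Langevin step size are all simplifying assumptions of mine; the adaptive step-size rule appears in Algorithms 4 and 5.

```python
import numpy as np

N, M = 1000, 1                                  # predictor steps, corrector steps per level
betas = np.linspace(1e-4, 0.02, N)              # assumed linear schedule
eps = 1e-5 * np.ones(N)                         # fixed Langevin step sizes (simplification)

def score(x, i):
    return -x                                   # toy placeholder for s_theta*(x, i)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                 # x_N ~ N(0, I)
for i in range(N - 1, -1, -1):
    beta = betas[i]
    # Predictor: one reverse-diffusion step (Section 9.1)
    z = rng.standard_normal(x.shape)
    x = (2.0 - np.sqrt(1.0 - beta)) * x + beta * score(x, i) + np.sqrt(beta) * z
    # Corrector: M steps of Langevin dynamics at noise level i
    for _ in range(M):
        z = rng.standard_normal(x.shape)
        x = x + eps[i] * score(x, i) + np.sqrt(2 * eps[i]) * z
print("mean, std:", x.mean(), x.std())
```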
The corrector algorithms The corrector algorithms are given in Algorithms 4 and 5 below, where r is the "signal-to-noise" ratio. The Langevin dynamics step size \epsilon is determined using the norm of the Gaussian noise ||\bold{z}||_2, the norm of the score-based model output ||\bold{s}_{\theta^*}||_2, and the signal-to-noise ratio r.
Trick: when sampling a large batch of samples together, the norm ∣ ∣ ⋅ ∣ ∣ 2 ||\cdot||_2 ∣∣ ⋅ ∣ ∣ 2 is replaced with the average norm across the mini-batch. When the batch size is small, ∣ ∣ z ∣ ∣ 2 ||\bold{z}||_2 ∣∣ z ∣ ∣ 2 is replaced by d \sqrt{d} d , where d d d is the dimensionality of z \bold{z} z .
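In code, this step-size rule (including the small-batch trick of replacing ||\bold{z}||_2 by \sqrt{d}) could look like the following sketch; `langevin_step_size` is a hypothetical helper name, and \alpha_i is simply 1 in the VE case:

```python
import numpy as np

def langevin_step_size(g, z, r, alpha_i=1.0, small_batch=False):
    """eps = 2 * alpha_i * (r * ||z||_2 / ||g||_2)^2, with norms averaged over the batch.

    g, z: arrays of shape (batch, d) -- score estimates and Gaussian noise.
    For small batches, ||z||_2 is replaced by sqrt(d)."""
    g_norm = np.linalg.norm(g, axis=-1).mean()
    z_norm = np.sqrt(z.shape[-1]) if small_batch else np.linalg.norm(z, axis=-1).mean()
    return 2.0 * alpha_i * (r * z_norm / g_norm) ** 2

# Example usage with random placeholders and an illustrative r value:
rng = np.random.default_rng(0)
g = rng.standard_normal((64, 3072))   # batch of score estimates
z = rng.standard_normal((64, 3072))   # batch of Gaussian noise
print(langevin_step_size(g, z, r=0.16))
```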
Algorithm 4 Corrector algorithm (VE SDE).
Require : { σ i } i = 1 N , r , N , M \{ \sigma_i \}_{i=1}^N, r, N, M { σ i } i = 1 N , r , N , M
x N 0 ∼ N ( 0 , σ max 2 I ) \bold{x}_N^0 \sim \mathcal{N} (\bold{0}, \sigma_{\text{max}}^2 \bold{I}) x N 0 ∼ N ( 0 , σ max 2 I )
for i ← N i \leftarrow N i ← N to 1 1 1 do
for j ← 1 j \leftarrow 1 j ← 1 to M M M do
z ∼ N ( 0 , I ) \bold{z} \sim \mathcal{N}(\bold{0}, \bold{I}) z ∼ N ( 0 , I )
g ← s θ ∗ ( x i j − 1 , σ i ) \bold{g} \leftarrow \bold{s}_{\theta^*}(\bold{x}_i^{j-1}, \sigma_i) g ← s θ ∗ ( x i j − 1 , σ i )
ϵ ← 2 ( r ∣ ∣ z ∣ ∣ 2 / ∣ ∣ g ∣ ∣ 2 ) 2 \epsilon \leftarrow 2 (r ||\bold{z}||_2 / ||\bold{g}||_2)^2 ϵ ← 2 ( r ∣∣ z ∣ ∣ 2 /∣∣ g ∣ ∣ 2 ) 2
x i j ← x i j − 1 + ϵ g + 2 ϵ z \bold{x}_i^j \leftarrow \bold{x}_i^{j-1} + \epsilon \bold{g} + \sqrt{2 \epsilon} \bold{z} x i j ← x i j − 1 + ϵ g + 2 ϵ z
x i − 1 0 ← x i M \bold{x}_{i-1}^0 \leftarrow \bold{x}_i^M x i − 1 0 ← x i M
return x 0 0 \bold{x}_0^0 x 0 0
Algorithm 5 Corrector algorithm (VP SDE).
Require : { β i } i = 1 N , { α i } i = 1 N , r , N , M \{ \beta_i \}_{i=1}^N , \{ \alpha_i \}_{i=1}^N, r, N, M { β i } i = 1 N , { α i } i = 1 N , r , N , M
x N 0 ∼ N ( 0 , I ) \bold{x}_N^0 \sim \mathcal{N} (\bold{0}, \bold{I}) x N 0 ∼ N ( 0 , I )
for i ← N i \leftarrow N i ← N to 1 1 1 do
for j ← 1 j \leftarrow 1 j ← 1 to M M M do
z ∼ N ( 0 , I ) \bold{z} \sim \mathcal{N}(\bold{0}, \bold{I}) z ∼ N ( 0 , I )
g ← s θ ∗ ( x i j − 1 , i ) \bold{g} \leftarrow \bold{s}_{\theta^*}(\bold{x}_i^{j-1}, i) g ← s θ ∗ ( x i j − 1 , i )
ϵ ← 2 α i ( r ∣ ∣ z ∣ ∣ 2 / ∣ ∣ g ∣ ∣ 2 ) 2 \epsilon \leftarrow 2 \alpha_i (r ||\bold{z}||_2 / ||\bold{g}||_2)^2 ϵ ← 2 α i ( r ∣∣ z ∣ ∣ 2 /∣∣ g ∣ ∣ 2 ) 2
x i j ← x i j − 1 + ϵ g + 2 ϵ z \bold{x}_i^j \leftarrow \bold{x}_i^{j-1} + \epsilon \bold{g} + \sqrt{2 \epsilon} \bold{z} x i j ← x i j − 1 + ϵ g + 2 ϵ z
x i − 1 0 ← x i M \bold{x}_{i-1}^0 \leftarrow \bold{x}_i^M x i − 1 0 ← x i M
return x 0 0 \bold{x}_0^0 x 0 0
Denoising For both SMLD and DDPM models, the generated samples typically contain a small amount of residual noise that is hard for humans to detect. This is part of the reason why NCSN models trained with SMLD have performed worse than DDPM models in terms of FID: the former do not use a denoising step at the end of sampling. One can apply Tweedie's formula as a final denoising step at the end of the sampling process.
For a Gaussian variable \bold{z} \sim \mathcal{N} (\bold{z}; \mu_{z}, \Sigma_z), Tweedie's formula states that:
\mathbb{E}[\mu_z | \bold{z}] = \bold{z} + \Sigma_z \nabla_{\bold{z}} \log p(\bold{z}) This is equivalent to running a predictor step without adding the random noise.
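For example, for the VE model the final samples still carry noise at level \sigma_{\text{min}}, so one extra Tweedie step with \Sigma_z = \sigma_{\text{min}}^2 \bold{I} can be applied after the last sampling iteration. A small sketch under the same toy Gaussian-data assumption as before (the analytic `score` stands in for the trained network):

```python
import numpy as np

sigma_min = 0.01

def score(x, sigma):
    # Toy placeholder for s_theta*(x, sigma): exact score of N(0, 1 + sigma^2)
    return -x / (1.0 + sigma ** 2)

rng = np.random.default_rng(0)
# Samples at the end of sampling, still carrying sigma_min-level noise
x = rng.normal(0.0, np.sqrt(1.0 + sigma_min ** 2), size=10_000)
# Tweedie / final denoising step: x <- x + Sigma_z * score(x), with Sigma_z = sigma_min^2 I
x_denoised = x + sigma_min ** 2 * score(x, sigma_min)
print("std before:", x.std(), "after:", x_denoised.std())
```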