# How to regress distribution parameters with autodiff
This guide shows how to recover distribution parameters by combining
`random.*` sampling with `loglikelihood.*` inside an `autodiff` block.
## Step 1: Fit a normal distribution
Generate synthetic observations with `random.normal`, then recover `mu` and
`sigma` by maximizing the log-likelihood via `loglikelihood.normal`.
```envision
table NormalObs = extend.range(500)

trueMu = 3.5
trueSigma = 1.2
NormalObs.X = random.normal(trueMu into NormalObs, trueSigma)

autodiff NormalObs epochs:300 learningRate:0.05 with
  params mu auto
  params sigma in [0.001 ..] auto
  return -loglikelihood.normal(mu, sigma, NormalObs.X)

show summary "Normal fit" with
  trueMu as "True mu"
  trueSigma as "True sigma"
  mu as "Learned mu"
  sigma as "Learned sigma"
```
Example output:
| Label | Value |
|---|---|
| True mu | 3.5 |
| True sigma | 1.2 |
| Learned mu | 3.764843 |
| Learned sigma | 1.304051 |
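For readers who want to experiment outside Envision, the same maximum-likelihood recovery can be sketched in plain Python with NumPy. This is an illustrative equivalent, not the original script: the explicit gradient formulas, the starting guesses, and the hyperparameters (`lr`, `epochs`) are choices made for this sketch, and the positivity clamp stands in for the `[0.001 ..]` bound.

```python
import numpy as np

# Synthetic observations, mirroring the guide's ground truth.
rng = np.random.default_rng(42)
true_mu, true_sigma = 3.5, 1.2
x = rng.normal(true_mu, true_sigma, size=500)
n = x.size

# Gradient ascent on the normal log-likelihood, whose gradients are:
#   d/d(mu)    =  sum(x - mu) / sigma^2
#   d/d(sigma) = -n / sigma + sum((x - mu)^2) / sigma^3
mu, sigma = 0.0, 1.0     # arbitrary starting guesses
lr, epochs = 0.05, 2000  # illustrative hyperparameters

for _ in range(epochs):
    g_mu = np.sum(x - mu) / sigma**2
    g_sigma = -n / sigma + np.sum((x - mu) ** 2) / sigma**3
    mu += lr * g_mu / n          # divide by n for a scale-free step
    sigma += lr * g_sigma / n
    sigma = max(sigma, 1e-3)     # keep sigma positive, like [0.001 ..]

print(f"mu={mu:.3f} sigma={sigma:.3f}")
```

As in the Envision run, the learned values land near the true ones but not exactly on them, since the fit converges to the sample statistics of the 500 draws rather than to the generating parameters.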
## Step 2: Fit a Poisson distribution
Generate counts with `random.poisson`, then recover `lambda` with
`loglikelihood.poisson`.
```envision
table PoissonObs = extend.range(500)

trueLambda = 4.2
PoissonObs.K = random.poisson(trueLambda into PoissonObs)

autodiff PoissonObs epochs:200 learningRate:0.05 with
  params lambda in [0.001 ..] auto(2, 0.5)
  return -loglikelihood.poisson(lambda, PoissonObs.K)

show summary "Poisson fit" with
  trueLambda as "True lambda"
  lambda as "Learned lambda"
```
Example output:
| Label | Value |
|---|---|
| True lambda | 4.2 |
| Learned lambda | 4.3403 |
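The Poisson case also has a closed-form maximum-likelihood estimate, the sample mean, which makes a handy sanity check for the gradient-based fit. The Python sketch below is again an illustrative stand-in for the Envision script, with hyperparameters and the starting value (mirroring `auto(2, 0.5)`'s initial mean of 2) chosen for this example.

```python
import numpy as np

# Synthetic counts, mirroring the guide's ground truth.
rng = np.random.default_rng(0)
true_lam = 4.2
k = rng.poisson(true_lam, size=500)
n = k.size

# Gradient ascent on the Poisson log-likelihood:
#   d/d(lam) sum(k*log(lam) - lam - log(k!)) = sum(k)/lam - n
lam = 2.0                # starting value, echoing auto(2, 0.5)
lr, epochs = 0.05, 1000  # illustrative hyperparameters

for _ in range(epochs):
    g = np.sum(k) / lam - n
    lam += lr * g / n
    lam = max(lam, 1e-3)  # mirror the [0.001 ..] bound

# The closed-form MLE is the sample mean; the two should agree.
print(f"lambda={lam:.4f} sample mean={k.mean():.4f}")
```

Note that the fit converges to the sample mean of the draws, not to `true_lam` itself, which is why the guide's learned value (4.3403) differs slightly from 4.2.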
If the learned parameters land close to the true ones, the regression worked; small deviations are expected, since the fit converges to the sample statistics of the observations rather than to the generating parameters. Increase `epochs` or tune `learningRate` if convergence is slow.