Hi @keseiya, I think the easiest solution would be to implement a custom distribution that defines `.log_prob()` but not `.sample()`. Looking at the Wikipedia page, something like this should work:

```python
import torch
from torch.distributions import constraints
from torch.distributions.utils import broadcast_all

from pyro.distributions import TorchDistribution


class Ricean(TorchDistribution):
    arg_constraints = {"loc": constraints.positive,
                       "scale": constraints.positive}
    support = constraints.positive

    def __init__(self, loc, scale, *, validate_args=None):
        self.loc, self.scale = broadcast_all(loc, scale)
        super().__init__(self.loc.shape, validate_args=validate_args)

    def log_prob(self, value):
        # Rice log-density:
        #   log x - log s^2 - (x^2 + nu^2) / (2 s^2) + log I0(x * nu / s^2)
        s2 = self.scale.square()
        return (
            value.log() - s2.log()
            - 0.5 * (value.square() + self.loc.square()) / s2
            + torch.special.i0(value * self.loc / s2).log()
        )
```

To validate, you could either check against another distribution implementation (e.g. scipy's `rice`) or implement a `.sample()` method (by sampling a 2D Gaussian random variable and computing its norm) and test via `pyro.distributions.testing.gof` as in this test code.
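As a rough sketch of that scipy check, here are standalone versions of both ideas (functions rather than the class above, purely for illustration; note that scipy's `rice` takes a shape parameter `b = loc / scale`):

```python
import torch
from scipy.stats import rice


def rice_log_prob(value, loc, scale):
    # Rice log-density: log x - log s^2 - (x^2 + nu^2)/(2 s^2) + log I0(x nu / s^2)
    s2 = scale ** 2
    return (
        torch.log(value) - torch.log(s2)
        - 0.5 * (value ** 2 + loc ** 2) / s2
        + torch.log(torch.special.i0(value * loc / s2))
    )


def rice_sample(loc, scale, sample_shape=()):
    # A Rice(nu, sigma) variable is the norm of a 2D Gaussian
    # with mean (nu, 0) and isotropic scale sigma.
    shape = tuple(sample_shape) + loc.shape
    x = loc + scale * torch.randn(shape, dtype=loc.dtype)
    y = scale * torch.randn(shape, dtype=loc.dtype)
    return (x ** 2 + y ** 2).sqrt()


loc = torch.tensor(2.0, dtype=torch.float64)
scale = torch.tensor(1.5, dtype=torch.float64)
value = torch.linspace(0.1, 8.0, 50, dtype=torch.float64)

ours = rice_log_prob(value, loc, scale)
ref = torch.as_tensor(
    rice.logpdf(value.numpy(), b=(loc / scale).item(), scale=scale.item())
)
assert torch.allclose(ours, ref, atol=1e-6)
```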

If you use this in variational inference with `AutoGuide`s, I'd recommend initializing with `init_to_feasible`, which avoids the need to draw samples.