Metropolis-Hastings integration: why isn't my strategy working?


Suppose I have a function $g(x)$ that I would like to integrate,
$$\int g(x)\,dx.$$
Of course, assuming $g(x)$ goes to zero at the endpoints, no blowups, a nice function. One way I've been playing with is to use the Metropolis-Hastings algorithm to generate a list of samples $x_1, x_2, \dots, x_n$ from the distribution proportional to $g(x)$, which is missing the normalization constant
$$N = \int g(x)\,dx,$$
which I will call $p(x)$, and then computing some statistic $f(x)$ on these $x$'s:
$$\frac{1}{n}\sum_{i=0}^n f(x_i) \approx \int f(x)\,p(x)\,dx.$$

Since $p(x) = g(x)/N$, I can substitute $f(x) = U(x)/g(x)$ to cancel $g$ from the integral, resulting in an expression of the form
$$\frac{1}{N}\int \frac{U(x)}{g(x)}\, g(x)\,dx = \frac{1}{N}\int U(x)\,dx.$$
So as long as $U(x)$ integrates to $1$ over that region, I should get $1/N$, and I can take the reciprocal to get the answer I want. I could therefore take the range of my sample (to use the points most effectively), $r = x_{\max} - x_{\min}$, and let $U(x) = 1/r$ on that range. That way $U(x)$ evaluates to zero outside the region where my samples are, but integrates to $1$ over that region. So if I now take the expected value, I should get:
$$E\!\left[\frac{U(x)}{g(x)}\right] = \frac{1}{N} \approx \frac{1}{n}\sum_{i=0}^n \frac{U(x_i)}{g(x_i)}.$$
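As a concrete illustration of the scheme above (a minimal Python sketch rather than the R used later in the thread; it samples directly from $p$ with NumPy instead of running Metropolis-Hastings, and the seed is an arbitrary choice):

```python
import numpy as np

# Target: g(x) = exp(-x^2), so p(x) = g(x)/N is N(0, 1/2) and N = sqrt(pi).
rng = np.random.default_rng(0)  # arbitrary seed
n = 1_000_000
xs = rng.normal(0.0, 1.0 / np.sqrt(2.0), size=n)

# U(x) = 1/r on the sample range, so it integrates to 1 there.
r = xs.max() - xs.min()

# (1/n) sum U(x_i)/g(x_i) estimates 1/N; note 1/g(x_i) = exp(x_i^2).
est_inv_N = np.mean((1.0 / r) * np.exp(xs**2))

print(1.0 / est_inv_N, "vs true N =", np.sqrt(np.pi))
```

In practice this estimate fluctuates noticeably from run to run, which is exactly the symptom the question goes on to describe.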

I tried testing this in R for the sample function $g(x) = e^{-x^2}$. In this case I do not use Metropolis-Hastings to generate the samples, but use the actual probabilities with rnorm to generate samples (just to test). I do not quite get the results I am looking for. Basically, the full expression of what I'd be calculating is:

$$\frac{1}{n\,(x_{\max}-x_{\min})}\sum_{i=0}^n \frac{1}{e^{-x_i^2}}.$$
This should, in my theory, evaluate to $1/\sqrt{\pi}$. It gets close, but it certainly does not converge in the expected way. Am I doing something wrong?
ys = rnorm(1000000, 0, 1/sqrt(2))
r = max(ys) - min(ys)
sum(sapply(ys, function(x) 1/( r * exp(-x^2))))/length(ys)
## evaluates to 0.6019741. 1/sqrt(pi) = 0.5641896

Edit for CliffAB

The reason I use the range is just to easily define a function that is non-zero over the region where my points are, but that integrates to $1$ on the range $[-\infty, \infty]$. The full specification of the function is:

$$U(x) = \begin{cases} \dfrac{1}{x_{\max}-x_{\min}} & x_{\min} < x < x_{\max} \\ 0 & \text{otherwise.}\end{cases}$$
I did not have to use U(x) as this uniform density. I could have used some other density that integrated to 1, for example the probability density
$$P(x) = \frac{1}{\sqrt{\pi}}\, e^{-x^2}.$$
However this would have made summing the individual samples trivial, i.e.
$$\frac{1}{n}\sum_{i=0}^n \frac{P(x_i)}{g(x_i)} = \frac{1}{n}\sum_{i=0}^n \frac{e^{-x_i^2}/\sqrt{\pi}}{e^{-x_i^2}} = \frac{1}{n}\sum_{i=0}^n \frac{1}{\sqrt{\pi}} = \frac{1}{\sqrt{\pi}}.$$
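To see that cancellation numerically (a small Python check; the sample values themselves are irrelevant, since the ratio is a constant):

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed
xs = rng.normal(0.0, 1.0 / np.sqrt(2.0), size=10)

# With P(x) = exp(-x^2)/sqrt(pi), every ratio P(x_i)/g(x_i) collapses
# to the constant 1/sqrt(pi), regardless of the sample.
ratios = (np.exp(-xs**2) / np.sqrt(np.pi)) / np.exp(-xs**2)
print(ratios.mean())  # 1/sqrt(pi) ≈ 0.5641896, for any sample
```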

I could try this technique for other distributions that integrate to 1. However, I would still like to know why it doesn't work for a uniform distribution.

Mike Flynn
Only quickly looking over this, so I'm not sure exactly why you decided to use range(x). Conditional on it being valid, it's extremely inefficient! The range of a sample of that size is just about the most unstable statistic you could take.
Cliff AB
@CliffAB There's nothing particularly special about me using the range, aside from defining a uniform distribution on the interval where my points lie. See edits.
Mike Flynn
I'll look at this later on in more detail. But something to consider is that if $x$ is a set of uniform RV's on $[0,1]$, then as $n \to \infty$, range($x$) $\to 1$. But if $x$ is a set of non-degenerate normal RV's, then as $n \to \infty$, range($x$) $\to \infty$.
Cliff AB
@CliffAB you might have been right; I think the reason is that the bounds of the integral were not fixed, and so the variance of the estimator will never converge...
Mike Flynn

Answers:


This is a most interesting question, which relates to the issue of approximating a normalising constant of a density $g$ based on an MCMC output from the same density $g$. (A side remark is that the correct assumption to make is that $g$ is integrable; going to zero at infinity is not sufficient.)

In my opinion, the most relevant entry on this topic in regard to your suggestion is a paper by Gelfand and Dey (1994, JRSS B), where the authors develop a very similar approach to find

$$\int_{\mathcal{X}} g(x)\,dx$$
when generating from $p(x) \propto g(x)$. One result in this paper is that, for any probability density $\alpha(x)$ [this is equivalent to your $U(x)$] such that
$$\{x;\ \alpha(x) > 0\} \subset \{x;\ g(x) > 0\},$$
the following identity
$$\int_{\mathcal{X}} \frac{\alpha(x)}{g(x)}\, p(x)\,dx = \int_{\mathcal{X}} \frac{\alpha(x)}{N}\,dx = \frac{1}{N}$$
shows that a sample from $p$ can produce an unbiased evaluation of $1/N$ by the importance sampling estimator
$$\hat\eta = \frac{1}{n}\sum_{i=1}^n \frac{\alpha(x_i)}{g(x_i)}, \qquad x_i \overset{\text{iid}}{\sim} p(x).$$
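For instance (a hedged Python sketch, not taken from the paper: here $\alpha$ is chosen as a N$(0, 0.5^2)$ density, whose tails are lighter than those of $g(x) = e^{-x^2}$, so $\alpha/g$ stays bounded and the estimator has finite variance):

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed

# g(x) = exp(-x^2): p(x) is N(0, 1/2) and the true N is sqrt(pi).
n = 100_000
xs = rng.normal(0.0, 1.0 / np.sqrt(2.0), size=n)

# alpha: a N(0, 0.5^2) density; lighter tails than g, so alpha/g is bounded.
sd = 0.5
alpha = np.exp(-xs**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))

eta_hat = np.mean(alpha / np.exp(-xs**2))  # unbiased estimate of 1/N
print(1.0 / eta_hat, "vs true N =", np.sqrt(np.pi))
```

With a bounded ratio, the estimate settles close to $1/\sqrt{\pi}$ at this sample size.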
Obviously, the performance (convergence speed, existence of a variance, etc.) of the estimator $\hat\eta$ depends on the choice of $\alpha$ [even though its expectation does not]. In a Bayesian framework, a choice advocated by Gelfand and Dey is to take $\alpha = \pi$, the prior density. This leads to
$$\frac{\alpha(x)}{g(x)} = \frac{1}{\ell(x)}$$
where $\ell(x)$ is the likelihood function, since $g(x) = \pi(x)\,\ell(x)$. Unfortunately, the resulting estimator
$$\hat N = \frac{n}{\sum_{i=1}^n 1/\ell(x_i)}$$
is the harmonic mean estimator, also called the worst Monte Carlo estimator ever by Radford Neal, from the University of Toronto. So it does not always work out nicely. Or even hardly ever.

Your idea of using the range of your sample, $(\min(x_i), \max(x_i))$, and the uniform over that range is connected with the harmonic mean issue: this estimator does not have a variance, if only because of the $\exp\{x_i^2\}$ appearing in the numerator (I suspect this could always be the case for an unbounded support!), and it thus converges very slowly to the normalising constant. For instance, if you rerun your code several times, you get very different numerical values after $10^6$ iterations. This means you cannot even trust the magnitude of the answer.
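That instability is easy to reproduce (a Python sketch of the rerun experiment; the seeds and sample size are arbitrary choices):

```python
import numpy as np

def range_uniform_estimate(seed, n=1_000_000):
    """One run of the range-based estimator of 1/N for g(x) = exp(-x^2)."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(0.0, 1.0 / np.sqrt(2.0), size=n)
    r = xs.max() - xs.min()
    return np.mean(np.exp(xs**2)) / r  # (1/n) sum 1/(r g(x_i))

# Repeated runs scatter widely around 1/sqrt(pi) ≈ 0.564: with no finite
# variance, no single run's magnitude can be trusted.
estimates = [range_uniform_estimate(s) for s in range(5)]
print(estimates)
```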

A generic fix to this infinite variance issue is to use for $\alpha$ a more concentrated density, for instance one supported on the interquartile interval of your sample, $(q_{.25}(x_i), q_{.75}(x_i))$, because $g$ then remains lower-bounded over this interval.

When adapting your code to this new density, the approximation is much closer to 1/π:

ys = rnorm(1e6, 0, 1/sqrt(2))
r = quantile(ys,.75) - quantile(ys,.25)
yc=ys[(ys>quantile(ys,.25))&(ys<quantile(ys,.75))]
sum(sapply(yc, function(x) 1/( r * exp(-x^2))))/length(ys)
## evaluates to 0.5649015. 1/sqrt(pi) = 0.5641896
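The same interquartile trick carries over to other targets (a Python sketch under the assumption that we can sample from $p$ directly; here the target is $g(x) = e^{-|x|}$, whose true normalising constant is $N = 2$):

```python
import numpy as np

rng = np.random.default_rng(3)  # arbitrary seed

# Target: g(x) = exp(-|x|), so p(x) is Laplace(0, 1) and the true N is 2.
n = 1_000_000
xs = rng.laplace(0.0, 1.0, size=n)

# alpha: uniform on the interquartile range, where g is bounded away from 0.
q25, q75 = np.quantile(xs, [0.25, 0.75])
r = q75 - q25
inside = (xs > q25) & (xs < q75)

# (1/n) sum alpha(x_i)/g(x_i); points outside the quartiles contribute 0.
est_inv_N = np.sum(np.exp(np.abs(xs[inside])) / r) / n
print(1.0 / est_inv_N, "vs true N = 2")
```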

We discuss this method in detail in two papers, one with Darren Wraith and one with Jean-Michel Marin.

Xi'an