The paradox of iid data (at least for me)


As far as my accumulated (and scarce) knowledge of statistics permits, I understood that if $X_1, X_2, \ldots, X_n$ are iid random variables, then, as the term implies, they are independent and identically distributed.

My concern here is the former property of iid samples, which reads:

$$p(X_n \mid X_{i_1}, X_{i_2}, \ldots, X_{i_k}) = p(X_n),$$

for any collection of $i_j$'s such that $1 \le i_j < n$.

However, it is known that a set of independent samples from identical distributions provides information about the structure of the distribution, and as a result about $X_n$ in the case above, so it should not be the case that:

$$p(X_n \mid X_{i_1}, X_{i_2}, \ldots, X_{i_k}) = p(X_n).$$

I know I am the victim of a fallacy, but I don't know why. Please help me with this.

Cupitor
Do you know Bayes' rule? Have you heard of classical vs. Bayesian statistics? Priors?
Matthew Gunn
I don't follow the argument at the end of your question. Can you be more explicit?
Glen_b -Reinstate Monica
@Glen_b, what exactly don't you follow? What do you mean by the end? I am trying to say, with different lines of reasoning, that both the equality and the inequality seem plausible, which is a paradox.
Cupitor
There is no paradox here, just a failure to apply the appropriate definitions. You cannot claim to have a paradox when you ignore the meaning of the words you use! In this case, comparing the definition of independence with that of probability will reveal the error.
whuber
@whuber, I suppose you noticed the explicit "(at least for me)" in the title of my question, and also the fact that I am asking for help in finding the "fallacy" in my argument, which points to the fact that this is, in fact, not a true paradox.
Cupitor

Answers:


I think you are confusing an estimated model of a distribution with a random variable. Let's rewrite the independence assumption as follows:

$$P(X_n \mid \theta, X_{i_1}, X_{i_2}, \ldots, X_{i_k}) = P(X_n \mid \theta), \tag{1}$$

which says that if you know the underlying distribution of $X_n$ (and, for instance, can identify it by a set of parameters $\theta$), then that distribution does not change given that you have observed some samples from it.

For example, think of $X_n$ as the random variable representing the outcome of the $n$-th toss of a coin. Knowing the coin's probabilities of heads and tails (which, by the way, we assume are encoded in $\theta$) is enough to know the distribution of $X_n$. In particular, the outcome of the previous tosses does not change the probability of heads or tails for the $n$-th toss, and $(1)$ holds.

Note, however, that

$$P(\theta \mid X_n) \neq P(\theta \mid X_{i_1}, X_{i_2}, \ldots, X_{i_k}),$$

that is, the samples do carry information, but about $\theta$ (your model of the distribution), not about $X_n$ once $\theta$ is known.
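A minimal numerical sketch of this distinction, assuming a coin-flip model with a Beta(1, 1) prior over $\theta$ (the prior and the true bias below are arbitrary illustration choices): conditioned on $\theta$ the probability of heads never moves, while the posterior over $\theta$ keeps updating as flips are observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a coin with true (but unknown) head probability theta_true.
theta_true = 0.7
flips = rng.random(10) < theta_true  # ten flips, True = heads

# Equation (1): given theta, the distribution of the next flip never changes,
# no matter which earlier flips were observed.
p_next_given_theta = theta_true

# The posterior over theta, however, does change with the observations
# (Beta(1, 1) prior assumed here, so the posterior is Beta(1 + heads, 1 + tails)).
heads, tails = int(flips.sum()), int((~flips).sum())
posterior_mean = (1 + heads) / (2 + heads + tails)

print(f"P(heads | theta) = {p_next_given_theta} regardless of the past flips")
print(f"posterior mean of theta after {flips.size} flips: {posterior_mean:.3f}")
```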

Sobi
Thank you very much. Quite to the point. Quite funny that I guessed such an answer a while ago but forgot about it... So, as far as I understand, the fallacy lies in implicitly assuming "a model" which can parametrize the distribution of the random variable. Did I get it right?
Cupitor
@Cupitor: I'm glad it was useful. Yes, conditioned on the model, the independent random variables do not affect each other. But, how likely a given distribution is to have generated a sequence of outcomes changes as you see more samples from the underlying (true) distribution (regardless of the independence assumption).
Sobi

If you take a Bayesian approach and treat parameters describing the distribution of $X$ as a random variable/vector, then the observations indeed are not independent, but they would be conditionally independent given knowledge of θ; hence $P(X_n \mid X_{n-1}, \ldots, X_1, \theta) = P(X_n \mid \theta)$ would hold.

In a classical statistical approach, θ is not a random variable. Calculations are done as if we know what θ is. In some sense, you're always conditioning on θ (even if you don't know the value).

When you wrote "... provide information about the distribution structure, and as a result about $X_n$", you were implicitly adopting a Bayesian approach, but not doing it precisely. You're writing a property of IID samples that a frequentist would write, but the corresponding statement in a Bayesian setup would involve conditioning on θ.

Bayesian vs. Classical statisticians

Let $x_i$ be the result of flipping a lopsided, unfair coin. We don't know the probability the coin lands heads.

  • To the classical statistician, the frequentist, $P(x_i = H)$ is some parameter, let's call it θ. Observe that θ here is a scalar, like the number 1/3. We may not know what the number is, but it's some number! It is not random!
  • To the Bayesian statistician, θ itself is a random variable! This is extremely different!

The key idea here is that the Bayesian statistician extends the tools of probability to situations where the classical statistician doesn't. To the frequentist, θ isn't a random variable because it only has one possible value! Multiple outcomes are not possible! In the Bayesian's imagination though, multiple values of θ are possible, and the Bayesian is willing to model that uncertainty (in his own mind) using the tools of probability.

Where is this going?

Let's say we flip the coin n times. One flip does not affect the outcome of the other. The classical statistician would call these independent flips (and indeed they are). We'll have:

$$P(x_n = H \mid x_{n-1}, x_{n-2}, \ldots, x_1) = P(x_n = H) = \theta,$$

where θ is some unknown parameter. (Remember, we don't know what it is, but it's not a random variable! It's some number.)
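A quick simulation sketch of that claim (with the unknown θ arbitrarily fixed at 0.6 purely for illustration): the frequency of heads right after a run of heads matches the overall frequency, because the flips are independent once θ is fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

theta = 0.6  # the fixed but (to the experimenter) unknown parameter; chosen arbitrarily here
flips = rng.random(1_000_000) < theta  # True = heads

# Overall frequency of heads.
overall = flips.mean()

# Frequency of heads immediately after observing three heads in a row.
three_heads = flips[:-3] & flips[1:-2] & flips[2:-1]
after_run = flips[3:][three_heads].mean()

print(f"P(heads)                         ~ {overall:.3f}")
print(f"P(heads | three heads just seen) ~ {after_run:.3f}  # same, up to noise")
```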

A Bayesian deep into subjective probability would say that what matters is the probability from her perspective! If she sees 10 heads in a row, an 11th head is more likely, because 10 heads in a row leads one to believe the coin is lopsided in favor of heads.

$$P(x_{11} = H \mid x_{10} = H, x_9 = H, \ldots, x_1 = H) > P(x_1 = H)$$

What has happened here? What is different?! Updating beliefs about a latent random variable θ! If θ is treated as a random variable, the flips aren't independent anymore. But, the flips are conditionally independent given the value of θ.

$$P(x_{11} = H \mid x_{10} = H, x_9 = H, \ldots, x_1 = H, \theta) = P(x_1 = H \mid \theta) = \theta$$

Conditioning on θ in a sense connects how the Bayesian and the classical statistician model the problem. Or, to put it another way, the frequentist and the Bayesian statistician will agree if the Bayesian conditions on θ.
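As a sketch of the Bayesian update (assuming, purely for illustration, a uniform Beta(1, 1) prior over θ): after 10 heads in a row, the posterior predictive probability of an 11th head rises well above the prior predictive of 1/2, while conditional on a known θ it remains exactly θ.

```python
# Hypothetical uniform prior over theta: Beta(alpha=1, beta=1).
alpha, beta = 1.0, 1.0

# Data: 10 heads in a row.
heads, tails = 10, 0

# Prior predictive probability of heads (before seeing any flips).
prior_pred = alpha / (alpha + beta)                           # 0.5

# Beta-Binomial conjugate update: the posterior is Beta(alpha + heads, beta + tails),
# and the posterior predictive probability of heads is its mean.
post_pred = (alpha + heads) / (alpha + beta + heads + tails)  # 11/12 ~ 0.917

print(f"prior predictive     P(x1 = H)             = {prior_pred:.3f}")
print(f"posterior predictive P(x11 = H | 10 heads) = {post_pred:.3f}")
# Conditional on a known theta, P(head) is simply theta, no matter how many heads were seen.
```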

Further notes

I've tried my best to give a short intro here, but what I've done is at best quite superficial and the concepts are in some sense quite deep. If you want to take a dive into the philosophy of probability, Savage's 1954 book, The Foundations of Statistics, is a classic. Google for Bayesian vs. frequentist and a ton of stuff will come up.

Another way to think about IID draws is de Finetti's theorem and the notion of exchangeability. In a Bayesian framework, exchangeability is equivalent to independence conditional on some latent random variable (in this case, the lopsidedness of the coin).
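A small simulation sketch of that construction (drawing θ from a uniform prior, an arbitrary illustrative choice): pairs of flips generated this way are exchangeable and positively dependent marginally, yet independent once θ is (approximately) held fixed.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sequences = 200_000

# de Finetti-style construction: first draw theta from a prior (uniform here),
# then draw two flips iid given that theta.
theta = rng.random(n_sequences)
flips = rng.random((n_sequences, 2)) < theta[:, None]
x1, x2 = flips[:, 0], flips[:, 1]

print(f"P(x2 = H)          ~ {x2.mean():.3f}")
print(f"P(x2 = H | x1 = H) ~ {x2[x1].mean():.3f}  # larger: marginally dependent")

# Conditioning on theta restores independence: restrict to theta near 0.5.
near_half = np.abs(theta - 0.5) < 0.01
print(f"P(x2 = H | x1 = H, theta ~ 0.5) ~ {x2[near_half & x1].mean():.3f}  # back to ~0.5")
```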

Matthew Gunn
In essence, the Bayesian approach would treat the statement "i.i.d. random variables" not as an axiom that they must be IID, but just as a very strong prior assumption that they are so - and if even stronger evidence suggests that it's extremely unlikely that the given assumptions are true, then this "disbelief in the given conditions" will be reflected in the results.
Peteris
Thank you very much for your thorough answer. I have upvoted it, but I think Sobi's answer points out more explicitly where the problem lies, i.e., implicitly assuming the model structure (or this is as far as I understood it).
Cupitor
@Matthew Gunn: neat, thorough, and very well explained! I learned a few things from your answer, thanks!
Sobi