As far as my aggregated (and scarce) knowledge of statistics permits, I understood that if $X_1, X_2, \ldots, X_n$ are iid random variables, then, as the term implies, they are independent and identically distributed.
My concern here is the former property of iid samples, which reads:

$$p(X_n \mid X_{i_1}, X_{i_2}, \ldots, X_{i_k}) = p(X_n)$$

for any collection of $i_j$'s s.t. $i_j \neq n$.
However, it is well known that a set of independent samples from identical distributions provides information about the structure of the distribution, and as a result about $X_n$ in the case above, so it shouldn't be the case that

$$p(X_n \mid X_{i_1}, X_{i_2}, \ldots, X_{i_k}) = p(X_n).$$
I know that I am the victim of a fallacy, but I don't know why. Please help me out with this.
Answers:
I think you are confusing an estimated model of a distribution with a random variable. Let's rewrite the independence assumption as follows:

$$P(X_n \mid X_{n-1}, \ldots, X_1, \theta) = P(X_n \mid \theta) \qquad (1)$$

which says that if you know the underlying distribution of $X_n$ (and, for instance, can identify it by a set of parameters $\theta$), then the distribution does not change given the observation of previous samples.
For example, think of $X_n$ as the random variable representing the outcome of the $n$-th toss of a coin. Knowing the probability of heads and tails for the coin (which, by the way, is assumed to be encoded in $\theta$) is enough to know the distribution of $X_n$. In particular, the outcome of previous tosses does not change the probability of heads or tails for the $n$-th toss, and $(1)$ holds.
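As a quick sanity check of $(1)$, here is a minimal simulation sketch (my own illustration, not part of the original answer), assuming a coin with known $\theta = 0.5$: the empirical frequency of heads right after a head matches the overall frequency of heads.

```python
import random

random.seed(0)
theta = 0.5  # assumed known probability of heads, encoded in theta

# Simulate independent tosses with theta fixed and known.
tosses = [random.random() < theta for _ in range(100_000)]

# Unconditional P(head), estimated from all tosses.
p_head = sum(tosses) / len(tosses)

# P(head | previous toss was a head): same value, as (1) predicts.
after_head = [tosses[i] for i in range(1, len(tosses)) if tosses[i - 1]]
p_head_given_head = sum(after_head) / len(after_head)

print(f"P(head)             ~ {p_head:.3f}")
print(f"P(head | prev head) ~ {p_head_given_head:.3f}")
```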
If you take a Bayesian approach and treat the parameters describing the distribution of $X$ as a random variable/vector, then the observations indeed are not independent, but they would be conditionally independent given knowledge of $\theta$; hence

$$P(X_n \mid X_{n-1}, \ldots, X_1, \theta) = P(X_n \mid \theta)$$

would hold.
In a classical statistical approach, $\theta$ is not a random variable. Calculations are done as if we knew what $\theta$ is. In some sense, you're always conditioning on $\theta$ (even if you don't know its value).
When you wrote, "... provide information about the distribution structure, and as a result about $X_n$", you implicitly were adopting a Bayesian approach but not doing it precisely. You're writing a property of IID samples that a frequentist would write, but the corresponding statement in a Bayesian setup would involve conditioning on $\theta$.
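To make the contrast concrete, here is a hypothetical worked example (my own numbers, not from the answer): suppose $\theta$ is either $0.2$ or $0.8$ with equal prior probability. Marginally the flips are dependent; conditionally on $\theta$ they are independent.

```python
# Hypothetical two-coin mixture: theta is 0.2 or 0.8, each with prior 0.5.
thetas = [0.2, 0.8]
prior = [0.5, 0.5]

# Marginal P(x1 = H) = E[theta]
p_h = sum(p * t for p, t in zip(prior, thetas))

# Joint P(x1 = H, x2 = H) = E[theta^2]
p_hh = sum(p * t * t for p, t in zip(prior, thetas))

print(f"P(x2=H)        = {p_h:.3f}")        # 0.500
print(f"P(x2=H | x1=H) = {p_hh / p_h:.3f}") # 0.680 -> marginally dependent

# Conditioning on theta restores independence:
for t in thetas:
    print(f"P(x2=H | x1=H, theta={t}) = P(x2=H | theta={t}) = {t}")
```

Seeing a head raises the marginal probability of another head from $0.5$ to $0.68$, because it shifts belief toward the $\theta = 0.8$ coin; given $\theta$, it changes nothing.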
Bayesian vs. Classical statisticians
Let $x_i$ be the result of flipping a lopsided, unfair coin. We don't know the probability that the coin lands heads.
The key idea here is that the Bayesian statistician extends the tools of probability to situations where the classical statistician doesn't. To the frequentist, $\theta$ isn't a random variable because it only has one possible value! Multiple outcomes are not possible! In the Bayesian's imagination, though, multiple values of $\theta$ are possible, and the Bayesian is willing to model that uncertainty (in his own mind) using the tools of probability.
Where is this going?
Let's say we flip the coin $n$ times. One flip does not affect the outcome of the other. The classical statistician would call these independent flips (and indeed they are). We'll have:

$$P(x_n \mid x_{n-1}, \ldots, x_1) = P(x_n)$$
A Bayesian deep into subjective probability would say that what matters is the probability from her perspective! If she sees 10 heads in a row, an 11th head is more likely, because 10 heads in a row leads one to believe the coin is lopsided in favor of heads.
What has happened here? What is different?! Updating beliefs about a latent random variable $\theta$! If $\theta$ is treated as a random variable, the flips aren't independent anymore. But the flips are conditionally independent given the value of $\theta$.
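Here is a short sketch of that belief updating, assuming a uniform Beta(1, 1) prior on $\theta$ (my choice of prior, not stated in the answer). With a Beta prior, the posterior predictive probability of heads has a closed form, and after 10 heads in a row it rises to $11/12 \approx 0.917$ (Laplace's rule of succession).

```python
def predictive_p_head(heads: int, tails: int, a: float = 1.0, b: float = 1.0) -> float:
    """P(next toss = H | data) under an assumed Beta(a, b) prior on theta.

    The posterior after `heads` heads and `tails` tails is
    Beta(a + heads, b + tails), whose mean is the posterior predictive
    probability of heads.
    """
    return (a + heads) / (a + b + heads + tails)

print(predictive_p_head(0, 0))   # 0.5    -- before any data
print(predictive_p_head(10, 0))  # ~0.917 -- 10 heads make an 11th head likelier
```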
Conditioning on $\theta$ in a sense connects how the Bayesian and the classical statistician model the problem. Or, to put it another way, the frequentist and the Bayesian statistician will agree if the Bayesian conditions on $\theta$.
Further notes
I've tried my best to give a short intro here, but what I've done is at best quite superficial, and the concepts are in some sense quite deep. If you want to take a dive into the philosophy of probability, Savage's 1954 book, The Foundations of Statistics, is a classic. Google bayesian vs. frequentist and a ton of stuff will come up.
Another way to think about IID draws is de Finetti's theorem and the notion of exchangeability. In a Bayesian framework, exchangeability (of an infinite sequence) is equivalent to independence conditional on some latent random variable (in this case, the lopsidedness of the coin).
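As a quick numeric illustration of exchangeability, reusing the hypothetical two-coin mixture from the sketch above: under the mixture, any two head/tail sequences that differ only in order have the same marginal probability.

```python
thetas, prior = [0.2, 0.8], [0.5, 0.5]

def seq_prob(seq: str) -> float:
    """Marginal probability of a head/tail string under the theta mixture."""
    total = 0.0
    for p, t in zip(prior, thetas):
        likelihood = 1.0
        for s in seq:
            likelihood *= t if s == "H" else (1 - t)
        total += p * likelihood
    return total

# All orderings of two heads and one tail get the same probability: 0.080.
print(seq_prob("HHT"), seq_prob("HTH"), seq_prob("THH"))
```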