I have paired observations $(X_i, Y_i)$ drawn from a common unknown distribution, which has finite first and second moments and is symmetric around the mean.
Let $\sigma_X$ be the standard deviation of $X$ (unconditional on $Y$), and likewise $\sigma_Y$ for $Y$. I would like to test the hypothesis
$H_0$: $\sigma_X = \sigma_Y$
$H_1$: $\sigma_X \neq \sigma_Y$
Does anyone know of such a test? As a first analysis I can assume that the distribution is normal, although the general case is more interesting. I am looking for a closed-form solution; the bootstrap is always a last resort.
Answers:
You could use the fact that, under normality, the sample variance scaled by $(n-1)/\sigma^2$ follows a chi-square distribution with $n-1$ degrees of freedom. Under your null hypothesis, your test statistic would be the difference of two chi-squared random variates scaled by the same unknown true variance. I do not know whether the difference of two chi-squared random variates has a tractable distribution, but the above may help you to some extent.
source
If you want to go down the non-parametric route you could always try the squared ranks test.
For the unpaired case, the assumptions for this test are listed in the linked reference.
These lecture notes describe the unpaired case in detail.
For the paired case you will have to change this procedure slightly. Midway down this page should give you an idea of where to start.
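If you want to experiment with it directly, here is a minimal R sketch of the large-sample (normal-approximation) form of the squared ranks test for the unpaired case; the function name and implementation are my own, following Conover's description rather than any package, so treat it as a starting point:

```r
# Squared-ranks test for equality of variances (unpaired case,
# large-sample normal approximation). Hand-rolled sketch, not from a package.
squared_ranks_test <- function(x, y) {
  u <- abs(x - mean(x))              # absolute deviations within each sample
  v <- abs(y - mean(y))
  r <- rank(c(u, v))                 # ranks of the pooled absolute deviations
  n <- length(x); m <- length(y); N <- n + m
  T     <- sum(r[seq_len(n)]^2)      # sum of squared ranks in the first sample
  r2bar <- mean(r^2)                 # mean squared rank over the pooled sample
  varT  <- n * m / (N * (N - 1)) * (sum(r^4) - N * r2bar^2)
  z <- (T - n * r2bar) / sqrt(varT)
  list(z = z, p.value = 2 * pnorm(-abs(z)))
}

# e.g. squared_ranks_test(rnorm(40, sd = 1), rnorm(50, sd = 1.5))
```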
source
The most naive approach I can think of is to regress $Y_i$ vs $X_i$ as $Y_i \sim \hat{m} X_i + \hat{b}$, then perform a $t$-test on the hypothesis $m = 1$. See the $t$-test for a regression slope.
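As a minimal R sketch (assuming vectors x and y hold the paired observations):

```r
# Naive approach: t-test of H0: slope = 1 in the regression of y on x.
fit <- lm(y ~ x)
b   <- coef(summary(fit))["x", "Estimate"]
se  <- coef(summary(fit))["x", "Std. Error"]
tstat <- (b - 1) / se
pval  <- 2 * pt(-abs(tstat), df = fit$df.residual)
```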
A less naive approach is the Morgan-Pitman test. Let $U_i = X_i - Y_i$, $V_i = X_i + Y_i$, then perform a test of the Pearson correlation coefficient of $U_i$ vs $V_i$. (One can do this simply using the Fisher r-to-z transform, which gives confidence intervals around the sample Pearson coefficient, or via a bootstrap.)
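In R the Morgan-Pitman test is only a few lines; this sketch again assumes x and y hold the paired observations, and uses cor.test's default Pearson test in place of an explicit Fisher r-to-z step:

```r
# Morgan-Pitman test: Var(X) = Var(Y) iff cor(X - Y, X + Y) = 0.
u <- x - y
v <- x + y
cor.test(u, v)   # Pearson correlation test of H0: rho = 0
```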
If you are using R, and don't want to have to code everything yourself, I would use bootdpci from Wilcox' Robust Stats package, WRS (see Wilcox' page).
source
If you can assume bivariate normality, then you can develop a likelihood-ratio test comparing the two possible covariance matrix structures. The unconstrained ($H_a$) maximum likelihood estimate is well known (just the sample covariance matrix); the constrained one ($H_0$) can be derived by writing out the likelihood (and will probably be some sort of "pooled" estimate).
If you don't want to derive the formulas, you can use SAS or R to fit a repeated measures model with unstructured and compound symmetry covariance structures and compare the likelihoods.
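If you do want the closed form, here is a hand-rolled R sketch of that likelihood-ratio test, assuming x and y hold the paired samples. The constrained MLE used below (pooled variance, sample covariance left unchanged) is my own derivation, so double-check it before relying on it:

```r
# Likelihood-ratio test for equal marginal variances under bivariate normality.
n  <- length(x)
S  <- cov(cbind(x, y)) * (n - 1) / n       # unconstrained MLE of the covariance matrix
a0 <- (S[1, 1] + S[2, 2]) / 2              # constrained MLE: common ("pooled") variance
S0 <- matrix(c(a0, S[1, 2], S[1, 2], a0), 2)
lrt  <- n * (log(det(S0)) - log(det(S)))   # 2 * (max log-lik under Ha - under H0)
pval <- pchisq(lrt, df = 1, lower.tail = FALSE)
```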
source
The difficulty clearly comes from the fact that $X$ and $Y$ are correlated (I assume $(X,Y)$ is jointly Gaussian, as Aniko does), so you can't take a difference (as in @svadali's answer) or a ratio (as in the standard Fisher-Snedecor F-test), because those would involve dependent $\chi^2$ distributions, and because you don't know what this dependence is, which makes it difficult to derive the distribution under $H_0$.
My answer relies on Equation (1) below. Because the difference in variances can be factorized into a difference in eigenvalues and a difference in rotation angle, the test of equality can be broken down into two tests. I show that it is possible to use the Fisher-Snedecor test together with a test on the slope, such as the one suggested by @shabbychef, because of a simple property of 2D Gaussian vectors.
Fisher-Snedecor Test: If for $i = 1, 2$, $(Z_{i1}, \ldots, Z_{in_i})$ are iid Gaussian random variables with unbiased empirical variance $\hat{\lambda}_i^2$ and true variance $\lambda_i^2$, then it is possible to test whether $\lambda_1 = \lambda_2$ using the fact that, under the null,

$$\frac{\hat{\lambda}_1^2}{\hat{\lambda}_2^2} \sim F(n_1 - 1,\, n_2 - 1).$$

This uses the fact that

$$\frac{(n_i - 1)\,\hat{\lambda}_i^2}{\lambda_i^2} \sim \chi^2(n_i - 1).$$
A simple property of 2D Gaussian vectors: Let us denote by $R(\theta)$ the rotation matrix of angle $\theta$. Any centered 2D Gaussian vector $(X, Y)^t$ can be written as $R(\theta)\,(\epsilon_1, \epsilon_2)^t$, where $\epsilon_1, \epsilon_2$ are independent centered Gaussians with variances $\lambda_1^2, \lambda_2^2$ (the eigenvalues of the covariance matrix of $(X, Y)$). This gives

$$\mathrm{Var}(X) - \mathrm{Var}(Y) = (\lambda_1^2 - \lambda_2^2)(\cos^2\theta - \sin^2\theta). \tag{1}$$

Testing $\mathrm{Var}(X) = \mathrm{Var}(Y)$ can therefore be done through testing whether ($\lambda_1^2 = \lambda_2^2$ or $\theta = \pi/4\ [\mathrm{mod}\ \pi/2]$).
Conclusion (Answer to the question): Testing $\lambda_1^2 = \lambda_2^2$ is easily done by using PCA (to decorrelate) and the Fisher-Snedecor test. Testing $\theta = \pi/4\ [\mathrm{mod}\ \pi/2]$ is done by testing whether $|\beta_1| = 1$ in the linear regression $Y = \beta_1 X + \sigma\epsilon$ (I assume $Y$ and $X$ are centered).
Testing whether ($\lambda_1^2 = \lambda_2^2$ or $\theta = \pi/4\ [\mathrm{mod}\ \pi/2]$) at level $\alpha$ is done by testing whether $\lambda_1^2 = \lambda_2^2$ at level $\alpha/3$ or whether $|\beta_1| = 1$ at level $\alpha/3$.
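A rough R sketch of the whole procedure, assuming x and y hold the centered paired samples; prcomp plays the role of the PCA/decorrelation step, and the slope test mirrors the one suggested by @shabbychef:

```r
# Part 1: Fisher-Snedecor F-test on the decorrelated (principal) components,
# testing lambda_1^2 = lambda_2^2.
pc <- prcomp(cbind(x, y))
f_part <- var.test(pc$x[, 1], pc$x[, 2])

# Part 2: test |beta_1| = 1 in the regression y = beta_1 * x + noise.
fit <- lm(y ~ x)
b  <- coef(summary(fit))["x", "Estimate"]
se <- coef(summary(fit))["x", "Std. Error"]
t_part <- 2 * pt(-abs((abs(b) - 1) / se), df = fit$df.residual)

# The two p-values (f_part$p.value and t_part) are then combined at the
# adjusted levels described above to obtain the overall test.
```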
source