How do I calculate relative error when the true value is zero?
Say I have $x_{\text{true}} = 0$ and $x_{\text{test}}$. If I define relative error as:
$$\text{relative error} = \frac{x_{\text{true}} - x_{\text{test}}}{x_{\text{true}}}$$
then the relative error is always undefined. If instead I use the definition:
$$\text{relative error} = \frac{x_{\text{true}} - x_{\text{test}}}{x_{\text{test}}}$$
then the relative error is always 100%. Both methods seem useless. Is there another alternative?
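A quick sketch of why both definitions break down when the true value is zero (the numbers are hypothetical):

```python
x_true, x_test = 0.0, 0.3   # hypothetical measurement of a true zero

# Definition 1: divide by the true value -> always undefined
try:
    err = (x_true - x_test) / x_true
except ZeroDivisionError:
    print("undefined")

# Definition 2: divide by the test value -> always 100% in magnitude
print((x_true - x_test) / x_test)   # -1.0
```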
Answers:
There are many alternatives, depending on the purpose.
A common one is the "Relative Percent Difference," or RPD, used in laboratory quality-control procedures. Although you can find many seemingly different formulas, they all come down to comparing the difference of two values to their average magnitude:
$$d_1(x, y) = \frac{x - y}{(|x| + |y|)/2} = \frac{2(x - y)}{|x| + |y|}.$$
This is a signed expression, positive when $x$ exceeds $y$ and negative when $y$ exceeds $x$. Its value always lies between $-2$ and $2$. By using absolute values in the denominator, it handles negative numbers in a reasonable way. Most of the references I can find, such as the New Jersey DEP Site Remediation Program's Data Quality Assessment and Data Usability Evaluation technical guidance, use the absolute value of $d_1$ because they are interested only in the magnitude of the relative error.
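A minimal Python sketch of $d_1$ (the function name is my own):

```python
def rpd(x, y):
    """Signed relative percent difference d1: the difference of x and y
    compared with the average of their magnitudes. Lies in [-2, 2]."""
    return 2.0 * (x - y) / (abs(x) + abs(y))

print(rpd(110.0, 100.0))   # small positive relative difference
print(rpd(1.0, -1.0))      # opposite signs give the extreme value 2
```

Note that this still fails when $x = y = 0$, a case discussed further below.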
A Wikipedia article on Relative Change and Difference observes that
$$d_\infty(x, y) = \frac{|x - y|}{\max(|x|, |y|)}$$
is frequently used as a relative tolerance test in floating-point numerical algorithms. The same article also points out that formulas like $d_1$ and $d_\infty$ may be generalized to
$$d_f(x, y) = \frac{x - y}{f(x, y)}$$
where the function $f$ depends directly on the magnitudes of $x$ and $y$ (usually assuming $x$ and $y$ are positive). As examples it offers their max, min, and arithmetic mean (with and without taking the absolute values of $x$ and $y$ themselves), but one could contemplate other sorts of averages, such as the geometric mean $\sqrt{|xy|}$, the harmonic mean $2/(1/|x| + 1/|y|)$, and $L^p$ means $\left((|x|^p + |y|^p)/2\right)^{1/p}$. ($d_1$ corresponds to $p = 1$ and $d_\infty$ corresponds to the limit as $p \to \infty$.) One might choose an $f$ based on the expected statistical behavior of $x$ and $y$. For instance, with approximately lognormal distributions the geometric mean would be an attractive choice for $f$, because it is a meaningful average in that circumstance.
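The generalized form $d_f$, with several of the choices of $f$ mentioned above, can be sketched as follows (function names are my own):

```python
import math

def rel_diff(x, y, f):
    """Generalized relative difference (x - y) / f(x, y)."""
    return (x - y) / f(x, y)

# Choices of f from the text, applied to |x| and |y|:
arith = lambda x, y: (abs(x) + abs(y)) / 2           # gives d1
geom  = lambda x, y: math.sqrt(abs(x * y))           # geometric mean
harm  = lambda x, y: 2 / (1 / abs(x) + 1 / abs(y))   # harmonic mean
lp    = lambda p: (lambda x, y: ((abs(x)**p + abs(y)**p) / 2)**(1 / p))

x, y = 4.0, 1.0
print(rel_diff(x, y, arith))   # d1
print(rel_diff(x, y, geom))    # geometric-mean version
print(rel_diff(x, y, lp(1)))   # same as the arithmetic mean
```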
Most of these formulas run into difficulties when the denominator equals zero. In many applications that either is not possible or it is harmless to set the difference to zero when $x = y = 0$.
Note that all these definitions share a fundamental invariance property: whatever the relative difference function $d$, it does not change when the arguments are uniformly rescaled by $\lambda > 0$:
$$d(x, y) = d(\lambda x, \lambda y).$$
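A numerical check of this scale invariance, using $d_1$ as the example:

```python
def d1(x, y):
    # Relative percent difference, bounded between -2 and 2.
    return 2 * (x - y) / (abs(x) + abs(y))

# Rescaling both arguments by any lambda > 0 leaves d1 unchanged:
for lam in (1e-3, 1.0, 1e6):
    print(d1(lam * 3.0, lam * 2.0))   # same value every time
```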
It is this property that allows us to consider $d$ a relative difference. Thus, in particular, a non-invariant function
simply does not qualify. Whatever virtues it might have, it does not express a relative difference.
The story does not end here. We might even find it fruitful to push the implications of invariance a little further.
The set of all ordered pairs of real numbers $(x, y) \ne (0, 0)$, where $(x, y)$ is considered to be the same as $(\lambda x, \lambda y)$, is the real projective line $\mathbb{RP}^1$. In both a topological sense and an algebraic sense, $\mathbb{RP}^1$ is a circle. Any $(x, y) \ne (0, 0)$ determines a unique line through the origin $(0, 0)$. When $x \ne 0$ its slope is $y/x$; otherwise we may consider its slope to be "infinite" (and either negative or positive). A neighborhood of this vertical line consists of lines with extremely large positive or extremely large negative slopes. We may parameterize all such lines in terms of their angle $\theta = \arctan(y/x)$, with $-\pi/2 < \theta \le \pi/2$. Associated with every such $\theta$ is a point on the circle,
$$(\xi, \eta) = (\cos 2\theta, \sin 2\theta).$$
Any distance defined on the circle can therefore be used to define a relative difference.
As an example of where this can lead, consider the usual (Euclidean) distance on the circle, whereby the distance between two points is the size of the angle between them. The relative difference is least when $x = y$, corresponding to $2\theta = \pi/2$ (or $2\theta = -3\pi/2$ when $x$ and $y$ have opposite signs). From this point of view, a natural relative difference for positive numbers $x$ and $y$ would be the distance to this angle:
$$d_S(x, y) = 2\arctan\left(\frac{y}{x}\right) - \frac{\pi}{2}.$$
To first order, this is the relative distance $|x - y|/|y|$ -- but it works even when $y = 0$. Moreover, it doesn't blow up, but instead (as a signed distance) is limited between $-\pi/2$ and $\pi/2$, as this graph indicates:

[graph not reproduced]
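A sketch of this bounded, arctan-based relative difference (signed version; the function name is my own):

```python
import math

def angular_rel_diff(x, y):
    """Signed relative difference via the angle on the projective line:
    2*arctan(y/x) - pi/2. Zero when x == y, bounded between -pi/2 and
    pi/2 for positive x, and well defined even when y == 0."""
    return 2 * math.atan2(y, x) - math.pi / 2

print(angular_rel_diff(5.0, 5.0))   # 0.0: no relative difference
print(angular_rel_diff(1.0, 0.0))   # -pi/2: y is zero, yet no blow-up
print(angular_rel_diff(1.0, 1.1))   # roughly |x - y|/|y|, to first order
```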
This hints at how flexible the choices are when selecting a way to measure relative differences.
source
First, note that you typically take the absolute value in computing the relative error.
A common solution to the problem is to compute
source
I was a bit confused about this for a while. In the end, it's because if you are trying to measure relative error with respect to zero, then you are trying to force something that simply does not exist.
If you think about it, you're comparing apples to oranges when you compare relative error to the error measured from zero, because the error measured from zero is equivalent to the measured value (that's why you get 100% error when you divide by the test number).
For example, consider measuring the error of gauge pressure (the pressure relative to atmospheric) vs. absolute pressure. Say that you use an instrument to measure the gauge pressure at perfect atmospheric conditions, and your device measured atmospheric pressure spot on, so that it should record 0% error. Using the equation you provided, and first assuming we used the measured gauge pressure, the relative error is:
$$\text{relative error} = \frac{P_{\text{gauge,true}} - P_{\text{gauge,test}}}{P_{\text{gauge,true}}}$$
Then $P_{\text{gauge,true}} = 0$ and $P_{\text{gauge,test}} = 0$, and you do not get 0% error; instead, it is undefined. That is because the actual percent error should be using the absolute pressure values, like this:
$$\text{relative error} = \frac{P_{\text{absolute,true}} - P_{\text{absolute,test}}}{P_{\text{absolute,true}}}$$
Now $P_{\text{absolute,true}} = 1\,\text{atm}$ and $P_{\text{absolute,test}} = 1\,\text{atm}$, and you get 0% error. This is the proper application of relative error. The original application that used gauge pressure was more like "relative error of the relative value," which is a different thing than "relative error." You need to convert the gauge pressure to absolute before measuring the relative error.
The solution to your question is to make sure you are dealing with absolute values when measuring relative error, so that zero is not a possibility. Then you are actually getting relative error, and can use that as an uncertainty or a metric of your real percent error. If you must stick with relative values, then you should be using absolute error, because the relative (percent) error will change depending on your reference point.
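A small numeric illustration of the gauge-vs-absolute point (pressures in atm; all values hypothetical):

```python
P_ATM = 1.0   # atmospheric pressure, atm

# Perfect measurement at atmospheric conditions, expressed as gauge pressure:
p_gauge_true, p_gauge_test = 0.0, 0.0

def relative_error(true, test):
    return (true - test) / true

# relative_error(p_gauge_true, p_gauge_test) raises ZeroDivisionError:
# the gauge reference makes the true value zero.

# Converting to absolute pressure first gives the expected 0% error:
p_abs_true = p_gauge_true + P_ATM
p_abs_test = p_gauge_test + P_ATM
print(relative_error(p_abs_true, p_abs_test))   # 0.0
```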
It's hard to put a concrete definition on 0... "Zero is the integer denoted 0 that, when used as a counting number, means that no objects are present." - Wolfram MathWorld http://mathworld.wolfram.com/Zero.html
Feel free to nitpick, but zero essentially means nothing; it is not there. This is why it does not make sense to use gauge pressure when calculating relative error. Gauge pressure, though useful, assumes there is nothing at atmospheric pressure. We know this is not the case, though, because it has an absolute pressure of 1 atm. Thus, the relative error with respect to nothing just does not exist; it's undefined.
Feel free to argue against this; simply put, any quick fixes, such as adding one to the bottom value, are faulty and not accurate. They can still be useful if you are simply trying to minimize error. If you are trying to make accurate measurements of uncertainty, though, not so much...
source
Finding MAPE:
This is a much-debated topic, and many open-source contributors have discussed it. The most efficient approach so far is the one followed by the developers. Please refer to this PR to know more.
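One widely used compromise, sketched here in plain Python, is what scikit-learn's `mean_absolute_percentage_error` does: clamp the denominator at a tiny epsilon. This avoids the division by zero but, as discussed in the answers above, yields an arbitrarily large score when a true value is exactly zero:

```python
import sys

def mape(y_true, y_pred, eps=sys.float_info.epsilon):
    """Mean absolute percentage error with the denominator clamped at eps
    (the approach taken by scikit-learn's mean_absolute_percentage_error)."""
    return sum(abs(t - p) / max(abs(t), eps)
               for t, p in zip(y_true, y_pred)) / len(y_true)

print(mape([100.0, 200.0], [110.0, 180.0]))   # 0.1
print(mape([0.0], [1.0]))                     # huge, not meaningful
```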
source