In statistics, for nonnegative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 1/2, erf(x) describes the probability of Y falling in the range [−x, x].
The name "error function" and its abbreviation erf
were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."
The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose density is given by (the normal distribution), Glaisher calculates the chance of an error lying between and as:
When the results of a series of measurements are described by a normal distribution with standard deviation $\sigma$ and expected value 0, then $\operatorname{erf}\left(\frac{a}{\sigma\sqrt{2}}\right)$ is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
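This probability is straightforward to evaluate with the standard library. A minimal sketch in Python (the values of σ and a below are illustrative, not from the text):

```python
import math

def prob_error_within(a: float, sigma: float) -> float:
    """P(|error| <= a) for a zero-mean normal error with standard deviation sigma."""
    return math.erf(a / (sigma * math.sqrt(2.0)))

# Illustrative check: the familiar 68-95-99.7 rule falls out for a = 1, 2, 3 sigma.
for k in (1.0, 2.0, 3.0):
    print(f"P(|error| <= {k} sigma) = {prob_error_within(k, 1.0):.6f}")
```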
The integrand $f = e^{-z^2}$ and $f = \operatorname{erf}(z)$ are shown in the complex z-plane in figures 2 and 3. The level Im(f) = 0 is shown with a thick green line. Negative integer values of Im(f) are shown with thick red lines. Positive integer values of Im(f) are shown with thick blue lines. Intermediate levels of Im(f) = constant are shown with thin green lines. Intermediate levels of Re(f) = constant are shown with thin red lines for negative values and with thin blue lines for positive values.
The error function at $+\infty$ is exactly 1 (see Gaussian integral). On the real axis, $\operatorname{erf}(z)$ approaches unity at $z \to +\infty$ and −1 at $z \to -\infty$. On the imaginary axis, it tends to $\pm i\infty$.
An expansion, which converges more rapidly for all real values of $x$ than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\operatorname{sgn}(x)\sqrt{1-e^{-x^2}}\left(\frac{\sqrt{\pi}}{2}+\sum_{k=1}^{\infty}c_k e^{-kx^2}\right).$$
By keeping only the first two coefficients and choosing $c_1 = \frac{31}{200}$ and $c_2 = -\frac{341}{8000}$, the resulting approximation
$$\operatorname{erf}(x) \approx \frac{2}{\sqrt{\pi}}\operatorname{sgn}(x)\sqrt{1-e^{-x^2}}\left(\frac{\sqrt{\pi}}{2}+\frac{31}{200}e^{-x^2}-\frac{341}{8000}e^{-2x^2}\right)$$
shows its largest relative error at $x = \pm 1.3796$, where it is less than $3.6127\times 10^{-3}$:
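As a quick numerical check of this claim, the following Python sketch evaluates the two-coefficient Bürmann approximation against math.erf; the sample points are arbitrary illustrative choices:

```python
import math

def erf_burmann(x: float) -> float:
    """Two-coefficient Buermann approximation with c1 = 31/200, c2 = -341/8000."""
    e = math.exp(-x * x)
    return (2.0 / math.sqrt(math.pi)) * math.copysign(1.0, x) * math.sqrt(1.0 - e) * (
        math.sqrt(math.pi) / 2.0 + (31.0 / 200.0) * e - (341.0 / 8000.0) * e * e)

worst = max(abs(erf_burmann(x) - math.erf(x)) / abs(math.erf(x))
            for x in (0.1, 0.5, 1.0, 1.3796, 2.0, 3.0))
print(f"largest sampled relative error: {worst:.2e}")  # stays below 3.6127e-3
```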
Inverse error function
Given a complex number z, there is not a unique complex number w satisfying $\operatorname{erf}(w) = z$, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number denoted $\operatorname{erf}^{-1}(x)$ satisfying $\operatorname{erf}\left(\operatorname{erf}^{-1}(x)\right) = x$.
The inverse error function is usually defined with domain (−1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series
$$\operatorname{erf}^{-1}(z)=\sum_{k=0}^{\infty}\frac{c_k}{2k+1}\left(\frac{\sqrt{\pi}}{2}z\right)^{2k+1},$$
where $c_0 = 1$ and
$$c_k=\sum_{m=0}^{k-1}\frac{c_m c_{k-1-m}}{(m+1)(2m+1)}.$$
So we have the series expansion (note that common factors have been canceled from numerators and denominators):
$$\operatorname{erf}^{-1}(z)=\frac{\sqrt{\pi}}{2}\left(z+\frac{\pi}{12}z^3+\frac{7\pi^2}{480}z^5+\frac{127\pi^3}{40320}z^7+\cdots\right).$$
(The numerator and denominator sequences of these fractions, both before and after cancellation, are catalogued in the OEIS.) Note that the error function's value at $\pm\infty$ is equal to $\pm 1$.
For |z| < 1, we have $\operatorname{erf}\left(\operatorname{erf}^{-1}(z)\right) = z$.
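A sketch in Python of this Maclaurin series, using the recurrence for $c_k$ above; the truncation order is an arbitrary illustrative choice:

```python
import math

def erfinv_series(z: float, terms: int = 30) -> float:
    """Inverse error function via the Maclaurin series, with c_0 = 1 and the
    recurrence c_k = sum_{m<k} c_m * c_{k-1-m} / ((m+1)(2m+1))."""
    c = [1.0]
    for k in range(1, terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1)) for m in range(k)))
    u = math.sqrt(math.pi) * z / 2.0
    return sum(c[k] / (2 * k + 1) * u ** (2 * k + 1) for k in range(terms))

x = erfinv_series(0.5)
print(x, math.erf(x))  # erf(erfinv(0.5)) should round-trip to ~0.5
```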
The inverse complementary error function is defined as
$$\operatorname{erfc}^{-1}(1-z)=\operatorname{erf}^{-1}(z).$$
For real x, there is a unique real number $\operatorname{erfi}^{-1}(x)$ satisfying $\operatorname{erfi}\left(\operatorname{erfi}^{-1}(x)\right) = x$. The inverse imaginary error function is defined as $\operatorname{erfi}^{-1}(x)$.
For any real x, Newton's method can be used to compute $\operatorname{erfi}^{-1}(x)$, and for $-1 \le x \le 1$, the following Maclaurin series converges:
$$\operatorname{erfi}^{-1}(z)=\sum_{k=0}^{\infty}\frac{(-1)^k c_k}{2k+1}\left(\frac{\sqrt{\pi}}{2}z\right)^{2k+1},$$
where ck is defined as above.
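A sketch of the Newton iteration mentioned above, assuming scipy.special.erfi is available for the forward function; the starting guess and tolerance are illustrative choices. The derivative follows directly from the integral definition: $\operatorname{erfi}'(w) = \frac{2}{\sqrt{\pi}}e^{w^2}$.

```python
import math
from scipy.special import erfi

def erfi_inv(x: float, tol: float = 1e-12, max_iter: int = 50) -> float:
    """Invert erfi by Newton's method: w <- w - (erfi(w) - x) / erfi'(w)."""
    # For small x, erfi(w) ~ (2/sqrt(pi)) w, so w ~ 0.886 x is a sensible start.
    w = 0.886 * x if abs(x) < 1.0 else math.copysign(math.sqrt(math.log(1.0 + abs(x))), x)
    for _ in range(max_iter):
        step = (erfi(w) - x) / ((2.0 / math.sqrt(math.pi)) * math.exp(w * w))
        w -= step
        if abs(step) < tol:
            break
    return w

w = erfi_inv(2.0)
print(w, erfi(w))  # erfi(w) should round-trip to ~2.0
```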
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is
$$\operatorname{erfc}(x)=\frac{e^{-x^2}}{x\sqrt{\pi}}\left(1+\sum_{n=1}^{\infty}(-1)^n\frac{(2n-1)!!}{(2x^2)^n}\right),$$
where (2n − 1)!! is the double factorial: the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as an asymptotic expansion is that, for any $N\in\mathbb{N}$, one has
$$\operatorname{erfc}(x)=\frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^{N-1}(-1)^n\frac{(2n-1)!!}{(2x^2)^n}+R_N(x),\qquad R_N(x)=O\!\left(x^{-(2N+1)}e^{-x^2}\right)\text{ as }x\to\infty,$$
which follows easily by induction, writing $e^{-t^2}=-\left(2t\right)^{-1}\left(e^{-t^2}\right)'$ and integrating by parts.
For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc(x), while for not-too-large values of x the Taylor expansion at 0 already converges very fast.
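The divergence-then-accuracy behaviour described above is easy to observe numerically. A sketch in Python, truncating the series after N terms (x and the N values are illustrative):

```python
import math

def erfc_asymptotic(x: float, n_terms: int) -> float:
    """Truncated asymptotic series: e^{-x^2}/(x sqrt(pi)) times the first N terms."""
    total, term = 1.0, 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2.0 * x * x)  # each step multiplies by -(2n-1)/(2x^2)
        total += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total

x = 3.0
for n in (1, 2, 4, 8, 16):
    print(n, abs(erfc_asymptotic(x, n) - math.erfc(x)))
# The error shrinks at first, then grows again: the series is asymptotic, not convergent.
```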
Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28), which allows one to choose the fastest approximation suitable for a given application. The most accurate of them, with maximal error $1.5\times 10^{-7}$, is
$$\operatorname{erf}(x)\approx 1-\left(a_1 t+a_2 t^2+a_3 t^3+a_4 t^4+a_5 t^5\right)e^{-x^2},\qquad t=\frac{1}{1+px},$$
where p = 0.3275911, a1 = 0.254829592, a2 = −0.284496736, a3 = 1.421413741, a4 = −1.453152027, a5 = 1.061405429.
All of these approximations are valid for x ≥ 0. To use them for negative x, use the fact that erf(x) is an odd function, so erf(x) = −erf(−x).
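A sketch of this approximation in Python, using the coefficients listed above together with the odd-function extension for negative x:

```python
import math

P = 0.3275911
A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

def erf_as(x: float) -> float:
    """Abramowitz-Stegun style rational-exponential approximation of erf."""
    sign = math.copysign(1.0, x)
    x = abs(x)  # the approximation itself is stated for x >= 0
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (i + 1) for i, a in enumerate(A))
    return sign * (1.0 - poly * math.exp(-x * x))

worst = max(abs(erf_as(i / 100.0) - math.erf(i / 100.0)) for i in range(-500, 501))
print(f"max sampled absolute error: {worst:.1e}")  # on the order of 1.5e-7
```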
Another approximation is given by
$$\operatorname{erf}(x)\approx\operatorname{sgn}(x)\sqrt{1-\exp\left(-x^2\,\frac{\frac{4}{\pi}+ax^2}{1+ax^2}\right)},\qquad a=\frac{8(\pi-3)}{3\pi(4-\pi)}\approx 0.140012.$$
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the error is less than 0.00035 for all x. Using the alternate value a ≈ 0.147 reduces the maximum error to about 0.00012.
This approximation can also be inverted to calculate the inverse error function:
$$\operatorname{erf}^{-1}(x)\approx\operatorname{sgn}(x)\sqrt{\sqrt{\left(\frac{2}{\pi a}+\frac{\ln\left(1-x^2\right)}{2}\right)^2-\frac{\ln\left(1-x^2\right)}{a}}-\left(\frac{2}{\pi a}+\frac{\ln\left(1-x^2\right)}{2}\right)}.$$
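Both the approximation and its inversion are short enough to state directly in code. A sketch in Python, with a as defined above (the test points are illustrative):

```python
import math

A = 8.0 * (math.pi - 3.0) / (3.0 * math.pi * (4.0 - math.pi))  # ~0.140012

def erf_approx(x: float) -> float:
    x2 = x * x
    inner = -x2 * (4.0 / math.pi + A * x2) / (1.0 + A * x2)
    return math.copysign(math.sqrt(1.0 - math.exp(inner)), x)

def erfinv_approx(x: float) -> float:
    l = math.log(1.0 - x * x)
    u = 2.0 / (math.pi * A) + l / 2.0
    return math.copysign(math.sqrt(math.sqrt(u * u - l / A) - u), x)

print(erf_approx(1.0), math.erf(1.0))  # agree to within ~0.00035
print(erfinv_approx(erf_approx(0.5)))  # round-trips to ~0.5
```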
Exponential bounds and a pure exponential approximation for the complementary error function are given by
$$\operatorname{erfc}(x)\le\tfrac{1}{2}e^{-2x^2}+\tfrac{1}{2}e^{-x^2}\le e^{-x^2},\qquad x>0,$$
$$\operatorname{erfc}(x)\approx\tfrac{1}{6}e^{-x^2}+\tfrac{1}{2}e^{-\frac{4}{3}x^2},\qquad x>0.$$
A single-term lower bound is
$$\operatorname{erfc}(x)\ge\sqrt{\frac{2e}{\pi}}\,\frac{\sqrt{\beta-1}}{\beta}\,e^{-\beta x^2},\qquad x\ge 0,\ \beta>1,$$
where the parameter β can be picked to minimize error on the desired interval of approximation.
An approximation with a maximal error of $1.2\times 10^{-7}$ for any real argument is
$$\operatorname{erf}(x)=\begin{cases}1-\tau & x\ge 0\\ \tau-1 & x<0\end{cases}$$
with
$$\tau = t\exp\left(-x^2-1.26551223+1.00002368t+0.37409196t^2+0.09678418t^3-0.18628806t^4+0.27886807t^5-1.13520398t^6+1.48851587t^7-0.82215223t^8+0.17087277t^9\right),$$
where $t=\frac{1}{1+\frac{1}{2}|x|}$.
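A sketch of this single-formula approximation in Python, using the coefficients as reconstructed above; the sample grid is an illustrative choice:

```python
import math

C = (-1.26551223, 1.00002368, 0.37409196, 0.09678418, -0.18628806,
     0.27886807, -1.13520398, 1.48851587, -0.82215223, 0.17087277)

def erf_single(x: float) -> float:
    """Single-formula erf approximation valid for any real argument."""
    t = 1.0 / (1.0 + 0.5 * abs(x))
    tau = t * math.exp(-x * x + sum(c * t ** i for i, c in enumerate(C)))
    return 1.0 - tau if x >= 0 else tau - 1.0

worst = max(abs(erf_single(i / 50.0) - math.erf(i / 50.0)) for i in range(-300, 301))
print(f"max sampled error: {worst:.1e}")  # on the order of 1.2e-7
```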
Table of values

x	erf(x)	erfc(x)
0	0.0000000	1.0000000
0.5	0.5204999	0.4795001
1	0.8427008	0.1572992
1.5	0.9661051	0.0338949
2	0.9953223	0.0046777
2.5	0.9995930	0.0004070
3	0.9999779	0.0000221
3.5	0.9999993	0.0000007
Complementary error function
The complementary error function, denoted $\operatorname{erfc}$, is defined as
$$\operatorname{erfc}(x)=1-\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_x^{\infty}e^{-t^2}\,dt=e^{-x^2}\operatorname{erfcx}(x),$$
which also defines $\operatorname{erfcx}$, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of $\operatorname{erfc}(x)$ for non-negative $x$ is known as Craig's formula, after its discoverer:
$$\operatorname{erfc}(x)=\frac{2}{\pi}\int_0^{\pi/2}\exp\left(-\frac{x^2}{\sin^2\theta}\right)d\theta.$$
This expression is valid only for positive values of x, but it can be used in conjunction with erfc(x) = 2 - erfc(-x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
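The underflow issue that motivates erfcx is easy to demonstrate. A sketch assuming scipy.special, which provides both erfc and erfcx:

```python
from scipy.special import erfc, erfcx

# erfc(x) decays like exp(-x^2) and underflows double precision near x ~ 27,
# while erfcx(x) = exp(x^2) * erfc(x) ~ 1/(x sqrt(pi)) stays representable.
for x in (5.0, 10.0, 30.0):
    print(f"x={x}: erfc={erfc(x):.3e}  erfcx={erfcx(x):.6f}")
```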
Imaginary error function
The imaginary error function, denoted erfi, is defined as
$$\operatorname{erfi}(x)=-i\operatorname{erf}(ix)=\frac{2}{\sqrt{\pi}}\int_0^x e^{t^2}\,dt=\frac{2}{\sqrt{\pi}}e^{x^2}D(x),$$
where D(x) is the Dawson function.
Despite the name "imaginary error function", $\operatorname{erfi}(x)$ is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:
$$w(z)=e^{-z^2}\operatorname{erfc}(-iz)=\operatorname{erfcx}(-iz).$$
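In scipy, the Faddeeva function is available as scipy.special.wofz. A quick sketch checking it against the scaled complement at a purely imaginary argument (the sample point is illustrative):

```python
from scipy.special import erfcx, wofz

# w(z) = exp(-z^2) * erfc(-iz) = erfcx(-iz); for z = ix with real x this is erfcx(x).
x = 1.5
print(wofz(1j * x).real, erfcx(x))  # the two outputs coincide
```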
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted $\Phi$, also named norm(x) in some software languages, as they differ only by scaling and translation. Indeed,
$$\Phi(x)=\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right)=\frac{1}{2}\operatorname{erfc}\left(-\frac{x}{\sqrt{2}}\right),$$
or rearranged for erf and erfc:
$$\operatorname{erf}(x)=2\Phi\left(x\sqrt{2}\right)-1,\qquad\operatorname{erfc}(x)=2\Phi\left(-x\sqrt{2}\right)=2\left(1-\Phi\left(x\sqrt{2}\right)\right).$$
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as
$$Q(x)=\frac{1}{2}-\frac{1}{2}\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)=\frac{1}{2}\operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right).$$
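A sketch of these relations using only the standard library (math.erf and math.erfc); the sample points are illustrative:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF expressed through erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def q(x: float) -> float:
    """Standard normal tail probability (Q-function) expressed through erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in (0.0, 1.0, 1.96):
    print(f"x={x}: Phi={phi(x):.6f}  Q={q(x):.6f}  sum={phi(x) + q(x):.6f}")
# Phi(1.96) ~ 0.975, the familiar two-sided 95% quantile; Phi + Q = 1 always.
```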
Graph of generalised error functions $E_n(x)$: grey curve: $E_1(x)=\frac{1-e^{-x}}{\sqrt{\pi}}$; red curve: $E_2(x)=\operatorname{erf}(x)$; green curve: $E_3(x)$; blue curve: $E_4(x)$; gold curve: $E_5(x)$.
Some authors discuss the more general functions:
$$E_n(x)=\frac{n!}{\sqrt{\pi}}\int_0^x e^{-t^n}\,dt=\frac{n!}{\sqrt{\pi}}\sum_{p=0}^{\infty}(-1)^p\frac{x^{np+1}}{(np+1)\,p!}.$$
Notable cases are:
E0(x) is a straight line through the origin: $E_0(x)=\frac{x}{e\sqrt{\pi}}$.
E2(x) is the error function, erf(x).
After division by n!, all the En for odd n look similar (but not identical) to each other. Similarly, the En for even n look similar (but not identical) to each other after a simple division by n!. All generalised error functions for n > 0 look similar on the positive x side of the graph.
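A sketch evaluating $E_n$ by numerical quadrature, assuming scipy.integrate.quad; the chosen n and x values are illustrative:

```python
import math
from scipy.integrate import quad

def E(n: int, x: float) -> float:
    """Generalised error function E_n(x) = n!/sqrt(pi) * integral_0^x exp(-t^n) dt."""
    integral, _ = quad(lambda t: math.exp(-t ** n), 0.0, x)
    return math.factorial(n) / math.sqrt(math.pi) * integral

print(E(2, 1.0), math.erf(1.0))                                # E_2 coincides with erf
print(E(1, 1.0), (1.0 - math.exp(-1.0)) / math.sqrt(math.pi))  # E_1 closed form
print(E(0, 1.0), 1.0 / (math.e * math.sqrt(math.pi)))          # E_0 is linear in x
```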
^ Greene, William H. (1993). Econometric Analysis (5th ed.). Prentice-Hall. p. 926, fn. 11.
^ Glaisher, James Whitbread Lee (July 1871). "On a class of definite integrals". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4th ser. 42 (277). Taylor & Francis: 294–302.
^ Glaisher, James Whitbread Lee (September 1871). "On a class of definite integrals. Part II". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4th ser. 42 (279). Taylor & Francis: 421–436.
^ Schöpf, H. M.; Supancic, P. H. (2014). "On Bürmann's Theorem and Its Application to Problems of Linear and Nonlinear Heat Transfer and Diffusion". The Mathematica Journal. doi:10.3888/tmj.16-11.