Standard deviation



In statistics and probability theory, the standard deviation (represented by the Greek letter sigma, σ) shows how much variation or dispersion from the average exists. A low standard deviation indicates that the data points tend to be very close to the mean (also called expected value); a high standard deviation indicates that the data points are spread out over a large range of values.

The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. Note, however, that for measurements with percentage as the unit, the standard deviation will have percentage points as the unit.
[Figure: A plot of a normal distribution (or bell-shaped curve) where each band has a width of 1 standard deviation – see also the 68–95–99.7 rule.]
[Figure: Cumulative probability of a normal distribution with expected value 0 and standard deviation 1.]
In addition to expressing the variability of a population, the standard deviation is commonly used to measure
confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating
the expected standard deviation in the results if the same poll were to be conducted multiple times. The reported
margin of error is typically about twice the standard deviation – the half-width of a 95 percent confidence interval. In
science, researchers commonly report the standard deviation of experimental data, and only effects that fall much
farther than one standard deviation away from what would have been expected are considered statistically
significant – normal random error or variation in the measurements is in this way distinguished from causal
variation. The standard deviation is also important in finance, where the standard deviation on the rate of return on
an investment is a measure of the volatility of the investment.
When only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above-mentioned quantity as applied to those data or to a modified quantity that is a better estimate of the population standard deviation (the standard deviation of the entire population).
Basic examples
For a finite set of numbers, the standard deviation is found by taking the square root of the average of the squared differences of the values from their average value. For example, consider a population consisting of the following eight values:

2, 4, 4, 4, 5, 5, 7, 9

These eight data points have the mean (average) of 5:

\mu = \frac{2 + 4 + 4 + 4 + 5 + 5 + 7 + 9}{8} = 5.

First, calculate the difference of each data point from the mean, and square the result of each:

(2-5)^2 = 9, \quad (4-5)^2 = 1, \quad (4-5)^2 = 1, \quad (4-5)^2 = 1,
(5-5)^2 = 0, \quad (5-5)^2 = 0, \quad (7-5)^2 = 4, \quad (9-5)^2 = 16.

Next, calculate the mean of these values, and take the square root:

\sigma = \sqrt{\frac{9 + 1 + 1 + 1 + 0 + 0 + 4 + 16}{8}} = \sqrt{4} = 2.

This quantity is the population standard deviation, and is equal to the square root of the variance. This formula is valid only if the eight values we began with form the complete population. If the values instead were a random sample drawn from some larger parent population, then we would have divided by 7 (which is n − 1) instead of 8 (which is n) in the denominator of the last formula, and then the quantity thus obtained would be called the sample standard deviation. Dividing by n − 1 gives a better estimate of the population standard deviation than dividing by n.
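The arithmetic above can be reproduced with the Python standard library; statistics.pstdev and statistics.stdev implement exactly the divide-by-n and divide-by-(n − 1) variants discussed here.

```python
# Check of the worked example: population SD divides by n, sample SD by n - 1.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(data))    # 5
print(statistics.pstdev(data))  # population standard deviation: 2.0
print(statistics.stdev(data))   # sample standard deviation: ~2.138
```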
As a slightly more complicated real-life example, the average height for adult men in the United States is about
70 inches, with a standard deviation of around 3 inches. This means that most men (about 68 percent, assuming a
normal distribution) have a height within 3 inches of the mean (67–73 inches) – one standard deviation – and
almost all men (about 95%) have a height within 6 inches of the mean (64–76 inches) – two standard deviations. If
the standard deviation were zero, then all men would be exactly 70 inches tall. If the standard deviation were
20 inches, then men would have much more variable heights, with a typical range of about 50–90 inches. Three
standard deviations account for 99.7 percent of the sample population being studied, assuming the distribution is
normal (bell-shaped).
Definition of population values
Let X be a random variable with mean value μ:

\operatorname{E}[X] = \mu.

Here the operator E denotes the average or expected value of X. Then the standard deviation of X is the quantity

\sigma = \sqrt{\operatorname{E}[(X - \mu)^2]} = \sqrt{\operatorname{E}[X^2] - (\operatorname{E}[X])^2}

(derived using the properties of expected value).
In other words, the standard deviation σ (sigma) is the square root of the variance of X; i.e., it is the square root of the average value of (X − μ)².
The standard deviation of a (univariate) probability distribution is the same as that of a random variable having that
distribution. Not all random variables have a standard deviation, since these expected values need not exist. For
example, the standard deviation of a random variable that follows a Cauchy distribution is undefined because its
expected value μ is undefined.
Discrete random variable
In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability, the standard deviation is

\sigma = \sqrt{\frac{1}{N}\left[(x_1 - \mu)^2 + (x_2 - \mu)^2 + \cdots + (x_N - \mu)^2\right]}, \quad \text{where } \mu = \frac{x_1 + x_2 + \cdots + x_N}{N},

or, using summation notation,

\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}, \quad \text{where } \mu = \frac{1}{N}\sum_{i=1}^{N} x_i.

If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this case, the standard deviation will be

\sigma = \sqrt{\sum_{i=1}^{N} p_i (x_i - \mu)^2}, \quad \text{where } \mu = \sum_{i=1}^{N} p_i x_i.
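A minimal sketch of both discrete cases in Python; the values and probabilities below are made up for illustration.

```python
# Standard deviation of a discrete random variable: equal probabilities
# (uniform over a finite set) versus explicitly given probabilities p_i.
import math

def sd_equal(xs):
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

def sd_weighted(xs, ps):
    mu = sum(p * x for x, p in zip(xs, ps))  # weighted mean
    return math.sqrt(sum(p * (x - mu) ** 2 for x, p in zip(xs, ps)))

print(sd_equal([1, 2, 3, 4]))
print(sd_weighted([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]))
```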
Continuous random variable
The standard deviation of a continuous real-valued random variable X with probability density function p(x) is

\sigma = \sqrt{\int_{\mathbf{X}} (x - \mu)^2\, p(x)\, dx}, \quad \text{where } \mu = \int_{\mathbf{X}} x\, p(x)\, dx,

and where the integrals are definite integrals taken for x ranging over the set of possible values of the random variable X.
In the case of a parametric family of distributions, the standard deviation can be expressed in terms of the parameters. For example, in the case of the log-normal distribution with parameters μ and σ², the standard deviation is

\sqrt{\left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}}.
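The closed-form expression can be checked against a simulation; the parameter values below are arbitrary, and only the standard library is used.

```python
# Compare the closed-form log-normal SD, sqrt((exp(s^2)-1) exp(2m + s^2)),
# with the empirical SD of a large simulated log-normal sample.
import math
import random

m, s = 0.5, 0.75
closed_form = math.sqrt((math.exp(s**2) - 1) * math.exp(2*m + s**2))

random.seed(0)
sample = [math.exp(random.gauss(m, s)) for _ in range(100_000)]
mean = sum(sample) / len(sample)
empirical = math.sqrt(sum((x - mean) ** 2 for x in sample) / len(sample))

print(closed_form, empirical)  # should agree to within ~1%
```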
Estimation

One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation σ is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers). However, unlike in the case of estimating the population mean, for which the sample mean is a simple estimator with many desirable properties (unbiased, efficient, maximum likelihood), there is no single estimator for the standard deviation with all these properties, and unbiased estimation of standard deviation is a very technically involved problem. Most often, the standard deviation is estimated using the corrected sample standard deviation (using N − 1), defined below, and this is often referred to as the "sample standard deviation", without qualifiers. However, other estimators are better in other respects: the uncorrected estimator (using N) yields lower mean squared error, while using N − 1.5 (for the normal distribution) almost completely eliminates bias.
Uncorrected sample standard deviation
Firstly, the formula for the population standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger). This estimator, denoted by sN, is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows:[citation needed]

s_N = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2},

where \{x_1, x_2, \ldots, x_N\} are the observed values of the sample items and \bar{x} is the mean value of these observations, while the denominator N stands for the size of the sample.
This is a consistent estimator (it converges in probability to the population value as the number of samples goes to
infinity), and is the maximum-likelihood estimate when the population is normally distributed.[citation needed]
However, this is a biased estimator, as the estimates are generally too low. The bias decreases as sample size grows, dropping off as 1/N, and thus is most significant for small or moderate sample sizes; for N > 75 the bias is below 1%. Thus for very large sample sizes, the uncorrected sample standard deviation is generally acceptable. This estimator also has a uniformly smaller mean squared error than the corrected sample standard deviation.
Corrected sample standard deviation
When discussing the bias, to be more precise, the corresponding estimator for the variance, the biased sample variance:

s_N^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2,

equivalently the second central moment of the sample (as the mean is the first moment), is a biased estimator of the variance (it underestimates the population variance). Taking the square root to pass to the standard deviation introduces further downward bias, by Jensen's inequality, due to the square root being a concave function. The bias in the variance is easily corrected, but the bias from the square root is more difficult to correct, and depends on the distribution in question.
An unbiased estimator for the variance is given by applying Bessel's correction, using N − 1 instead of N to yield the unbiased sample variance, denoted s²:

s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2.

This estimator is unbiased if the variance exists and the sample values are drawn independently with replacement. N − 1 corresponds to the number of degrees of freedom in the vector of residuals, (x_1 - \bar{x}, \ldots, x_N - \bar{x}).
Taking square roots reintroduces bias, and yields the corrected sample standard deviation, denoted by s:

s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2}.
While s2 is an unbiased estimator for the population variance, s is a biased estimator for the population standard
deviation, though markedly less biased than the uncorrected sample standard deviation. The bias is still significant
for small samples (n less than 10), and also drops off as 1/n as sample size increases. This estimator is commonly
used, and generally known simply as the "sample standard deviation".
Unbiased sample standard deviation
For unbiased estimation of standard deviation, there is no formula that works across all distributions, unlike for mean and variance. Instead, s is used as a basis, and is scaled by a correction factor to produce an unbiased estimate. For the normal distribution, an unbiased estimator is given by s/c4, where the correction factor (which depends on N) is given in terms of the Gamma function, and equals:

c_4(N) = \sqrt{\frac{2}{N-1}}\,\frac{\Gamma\!\left(\frac{N}{2}\right)}{\Gamma\!\left(\frac{N-1}{2}\right)}.

This arises because the sampling distribution of the sample standard deviation follows a (scaled) chi distribution, and the correction factor is the mean of the chi distribution.
An approximation is given by replacing N − 1 with N − 1.5, yielding:

\hat{\sigma} = \sqrt{\frac{1}{N - 1.5}\sum_{i=1}^{N}(x_i - \bar{x})^2}.

The error in this approximation decays quadratically (as 1/N²), and it is suited for all but the smallest samples or highest precision: for N = 3 the bias is equal to 1.3%, and for N = 9 the bias is already less than 0.1%.
For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation:

\hat{\sigma} = \sqrt{\frac{1}{N - 1.5 - \tfrac{1}{4}\gamma_2}\sum_{i=1}^{N}(x_i - \bar{x})^2},

where γ2 denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
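The correction factors translate directly into code. Below is a sketch comparing the three estimators on the eight-value example, with the Gamma-function ratio evaluated via lgamma for numerical stability (the helper names are ours):

```python
import math

def c4(n):
    # c4(N) = sqrt(2/(N-1)) * Gamma(N/2) / Gamma((N-1)/2)
    return math.sqrt(2.0 / (n - 1)) * math.exp(
        math.lgamma(n / 2) - math.lgamma((n - 1) / 2))

def estimators(xs):
    n = len(xs)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    s = math.sqrt(ss / (n - 1))         # corrected sample standard deviation
    approx = math.sqrt(ss / (n - 1.5))  # the N - 1.5 approximation
    return s, approx, s / c4(n)         # s/c4 is unbiased for normal data

print(estimators([2, 4, 4, 4, 5, 5, 7, 9]))
```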
Confidence interval of a sampled standard deviation
The standard deviation we obtain by sampling a distribution is itself not absolutely accurate, both for mathematical
reasons (explained here by the confidence interval) and for practical reasons of measurement (measurement error).
The mathematical effect can be described by the confidence interval or CI. To show how a larger sample will
increase the confidence interval, consider the following examples: For a small population of N=2, the 95% CI of the
SD is from 0.45*SD to 31.9*SD. In other words, the standard deviation of the distribution in 95% of the cases can be
larger by a factor of 31 or smaller by a factor of 2. For a larger population of N=10, the CI is 0.69*SD to 1.83*SD.
So even with a sample population of 10, the actual SD can still be almost a factor 2 higher than the sampled SD. For
a sample population N=100, this is down to 0.88*SD to 1.16*SD. To be more certain that the sampled SD is close to
the actual SD we need to sample a large number of points.
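The quoted multipliers follow from the fact that, for normal data, (N − 1)s²/σ² follows a chi-squared distribution with N − 1 degrees of freedom. A short sketch, assuming SciPy is available:

```python
# Reproduce the 95% CI factors for the SD quoted above: multiply the sampled
# SD by these bounds to get a confidence interval for the true SD.
from scipy.stats import chi2

def sd_ci_factors(n, level=0.95):
    alpha = 1.0 - level
    lo = ((n - 1) / chi2.ppf(1 - alpha / 2, n - 1)) ** 0.5
    hi = ((n - 1) / chi2.ppf(alpha / 2, n - 1)) ** 0.5
    return lo, hi

for n in (2, 10, 100):
    print(n, sd_ci_factors(n))  # ~(0.45, 31.9), (0.69, 1.83), (0.88, 1.16)
```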
Identities and mathematical properties
The standard deviation is invariant under changes in location, and scales directly with the scale of the random variable. Thus, for a constant c and random variables X and Y:

\sigma(c) = 0, \qquad \sigma(X + c) = \sigma(X), \qquad \sigma(cX) = |c|\,\sigma(X).

The standard deviation of the sum of two random variables can be related to their individual standard deviations and the covariance between them:

\sigma(X + Y) = \sqrt{\operatorname{var}(X) + \operatorname{var}(Y) + 2\operatorname{cov}(X, Y)},

where var and cov stand for variance and covariance, respectively.
The calculation of the sum of squared deviations can be related to moments calculated directly from the data. The standard deviation of the sample can be computed as:

\sigma(X) = \sqrt{\operatorname{E}[X^2] - (\operatorname{E}[X])^2}.

The sample standard deviation can be computed as:

s(X) = \sqrt{\frac{N}{N-1}}\,\sqrt{\operatorname{E}[X^2] - (\operatorname{E}[X])^2}.

For a finite population with equal probabilities at all points, we have

\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}x_i^2 - \bar{x}^2}.

This means that the standard deviation is equal to the square root of (the average of the squares less the square of the average). See computational formula for the variance for proof, and for an analogous result for the sample standard deviation.
Interpretation and application
A large standard deviation indicates that the data points are far from the mean and a small standard deviation indicates that they are clustered closely around the mean.
For example, each of the three populations {0, 0, 14, 14}, {0, 6, 8, 14} and {6, 6, 8, 8} has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third population has a much smaller standard deviation than the other two because its values are all close to 7. The standard deviation will have the same units as the data points themselves. If, for instance, the data set {0, 6, 8, 14} represents the ages of a population of four siblings in years, the standard deviation is 5 years. As another example, the population {1000, 1006, 1008, 1014} may represent the distances traveled by four athletes, measured in meters. It has a mean of 1007 meters, and a standard deviation of 5 meters.

[Figure: Example of two sample populations with the same mean and different standard deviations. Red population has mean 100 and SD 10; blue population has mean 100 and SD 50.]
Standard deviation may serve as a measure of uncertainty. In physical science, for example, the reported standard
deviation of a group of repeated measurements gives the precision of those measurements. When deciding whether
measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial
importance: if the mean of the measurements is too far away from the prediction (with the distance measured in
standard deviations), then the theory being tested probably needs to be revised. This makes sense since they fall
outside the range of values that could reasonably be expected to occur, if the prediction were correct and the standard
deviation appropriately quantified. See prediction interval.
While the standard deviation does measure how far typical values tend to be from the mean, other measures are
available. An example is the mean absolute deviation, which might be considered a more direct measure of average
distance, compared to the root mean square distance inherent in the standard deviation.
Application examples
The practical value of understanding the standard deviation of a set of values is in appreciating how much variation
there is from the average (mean).
As a simple example, consider the average daily maximum temperatures for two cities, one inland and one on the
coast. It is helpful to understand that the range of daily maximum temperatures for cities near the coast is smaller
than for cities inland. Thus, while these two cities may each have the same average maximum temperature, the
standard deviation of the daily maximum temperature for the coastal city will be less than that of the inland city as,
on any particular day, the actual maximum temperature is more likely to be farther from the average maximum
temperature for the inland city than for the coastal one.
Particle physics

Particle physics uses a standard of "5 sigma" for the declaration of a discovery. At five-sigma there is only one chance in nearly two million that a random fluctuation would yield the result. This level of certainty prompted the announcement that a particle consistent with the Higgs boson has been discovered in two independent experiments at CERN.

Finance
In finance, standard deviation is often used as a measure of the risk associated with price-fluctuations of a given
asset (stocks, bonds, property, etc.), or the risk of a portfolio of assets (actively managed mutual funds, index mutual
funds, or ETFs). Risk is an important factor in determining how to efficiently manage a portfolio of investments
because it determines the variation in returns on the asset and/or portfolio and gives investors a mathematical basis
for investment decisions (known as mean-variance optimization). The fundamental concept of risk is that as it
increases, the expected return on an investment should increase as well, an increase known as the risk premium. In
other words, investors should expect a higher return on an investment when that investment carries a higher level of
risk or uncertainty. When evaluating investments, investors should estimate both the expected return and the
uncertainty of future returns. Standard deviation provides a quantified estimate of the uncertainty of future returns.
For example, let's assume an investor had to choose between two stocks. Stock A over the past 20 years had an
average return of 10 percent, with a standard deviation of 20 percentage points (pp) and Stock B, over the same
period, had average returns of 12 percent but a higher standard deviation of 30 pp. On the basis of risk and return, an
investor may decide that Stock A is the safer choice, because Stock B's additional two percentage points of return is
not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return). Stock B is
likely to fall short of the initial investment (but also to exceed the initial investment) more often than Stock A under
the same circumstances, and is estimated to return only two percent more on average. In this example, Stock A is expected to earn about 10 percent, plus or minus 20 pp (a range of 30 percent to −10 percent), in about two-thirds of the future year returns.
expect results of as much as 10 percent plus or minus 60 pp, or a range from 70 percent to −50 percent, which
includes outcomes for three standard deviations from the average return (about 99.7 percent of probable returns).
Calculating the average (or arithmetic mean) of the return of a security over a given period will generate the
expected return of the asset. For each period, subtracting the expected return from the actual return results in the
difference from the mean. Squaring the difference in each period and taking the average gives the overall variance of
the return of the asset. The larger the variance, the greater risk the security carries. Finding the square root of this
variance will give the standard deviation of the investment tool in question.
Population standard deviation is used to set the width of Bollinger Bands, a widely adopted technical analysis tool. For example, the upper Bollinger Band is given as \bar{x} + n\sigma_x. The most commonly used value for n is 2; there is about a five percent chance of going outside, assuming a normal distribution of returns.
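A minimal sketch of the band computation with NumPy; the synthetic price series, the 20-sample window, and all names are illustrative choices, not any particular charting library's API.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 250))   # made-up price series

window, n = 20, 2                                  # conventional choices
means = np.array([prices[i - window:i].mean() for i in range(window, len(prices) + 1)])
sds = np.array([prices[i - window:i].std() for i in range(window, len(prices) + 1)])
# .std() defaults to the population form, matching the text

upper, lower = means + n * sds, means - n * sds
print(upper[-1], lower[-1])
```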
Unfortunately, financial time series are known to be non-stationary series, whereas the statistical calculations above,
such as standard deviation, apply only to stationary series. Whatever apparent "predictive powers" or "forecasting
ability" that may appear when applied as above is illusory. To apply the above statistical tools to non-stationary
series, the series first must be transformed to a stationary series, enabling use of statistical tools that now have a valid
basis from which to work.
Geometric interpretation
To gain some geometric insights and clarification, we will start with a population of three values, x1, x2, x3. This
defines a point P = (x1, x2, x3) in R3. Consider the line L = {(r, r, r) : r ∈ R}. This is the "main diagonal" going
through the origin. If our three given values were all equal, then the standard deviation would be zero and P would
lie on L. So it is not unreasonable to assume that the standard deviation is related to the distance of P to L. And that
is indeed the case. To move orthogonally from L to the point P, one begins at the point:

M = (\bar{x}, \bar{x}, \bar{x}),

whose coordinates are the mean of the values we started out with. A little algebra shows that the distance between P and M (which is the same as the orthogonal distance between P and the line L) is equal to the standard deviation of the vector x1, x2, x3, multiplied by the square root of the number of dimensions of the vector (3 in this case):

|PM| = \sigma(x)\sqrt{3}.
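The claim is easy to verify numerically; the point (1, 2, 6) below is arbitrary.

```python
# Distance from P to the diagonal line L equals the population SD times sqrt(3).
import math

x = (1.0, 2.0, 6.0)
mean = sum(x) / 3
sd = math.sqrt(sum((xi - mean) ** 2 for xi in x) / 3)

dist_PM = math.sqrt(sum((xi - mean) ** 2 for xi in x))  # M = (mean, mean, mean)
print(dist_PM, sd * math.sqrt(3))                        # identical
```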
Chebyshev's inequality
An observation is rarely more than a few standard deviations away from the mean. Chebyshev's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table.

Distance from mean | Minimum population
√2 σ | 50%
2σ | 75%
3σ | 89%
4σ | 94%
5σ | 96%
6σ | 97%
Rules for normally distributed data
The central limit theorem says that the distribution of an average of many independent, identically distributed random variables tends toward the famous bell-shaped normal distribution with a probability density function of:

f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},

where μ is the expected value of the random variables, σ equals their distribution's standard deviation divided by \sqrt{n}, and n is the number of random variables. The standard deviation therefore is simply a scaling variable that adjusts how broad the curve will be, though it also appears in the normalizing constant.
[Figure: Dark blue is one standard deviation on either side of the mean. For the normal distribution, this accounts for 68.27 percent of the set; two standard deviations from the mean (medium and dark blue) account for 95.45 percent; three standard deviations (light, medium, and dark blue) account for 99.73 percent; and four standard deviations account for 99.994 percent. The two points of the curve that are one standard deviation from the mean are also the inflection points.]
If a data distribution is approximately normal, then the proportion of data values within z standard deviations of the mean is defined by

\text{Proportion} = \operatorname{erf}\!\left(\frac{z}{\sqrt{2}}\right),

where erf is the error function. If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, μ ± σ, where μ is the arithmetic mean), about 95 percent are within two standard deviations (μ ± 2σ), and about 99.7 percent lie within three standard deviations (μ ± 3σ). This is known as the 68-95-99.7 rule, or the empirical rule.
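The error-function formula can be evaluated with the standard library alone:

```python
# Proportion of a normal population within z standard deviations of the mean.
import math

for z in (1, 2, 3):
    print(z, math.erf(z / math.sqrt(2)))  # ~0.6827, 0.9545, 0.9973
```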
For various values of z, the percentage of values expected to lie in and outside the symmetric interval, CI = (−zσ, zσ), are as follows:

zσ | Percentage within CI | Percentage outside CI | Fraction outside CI
0.674490σ | 50% | 50% | 1 / 2
0.994458σ | 68% | 32% | 1 / 3.125
1σ | 68.2689492% | 31.7310508% | 1 / 3.1514872
1.281552σ | 80% | 20% | 1 / 5
1.644854σ | 90% | 10% | 1 / 10
1.959964σ | 95% | 5% | 1 / 20
2σ | 95.4499736% | 4.5500264% | 1 / 21.977895
2.575829σ | 99% | 1% | 1 / 100
3σ | 99.7300204% | 0.2699796% | 1 / 370.398
3.290527σ | 99.9% | 0.1% | 1 / 1000
3.890592σ | 99.99% | 0.01% | 1 / 10000
4σ | 99.993666% | 0.006334% | 1 / 15787
4.417173σ | 99.999% | 0.001% | 1 / 100000
4.5σ | 99.9993204653751% | 0.0006795346249% | 3.4 / 1000000 (on each side of mean)
4.891638σ | 99.9999% | 0.0001% | 1 / 1000000
5σ | 99.9999426697% | 0.0000573303% | 1 / 1744278
5.326724σ | 99.99999% | 0.00001% | 1 / 10000000
5.730729σ | 99.999999% | 0.000001% | 1 / 100000000
6σ | 99.9999998027% | 0.0000001973% | 1 / 506797346
6.109410σ | 99.9999999% | 0.0000001% | 1 / 1000000000
6.466951σ | 99.99999999% | 0.00000001% | 1 / 10000000000
6.806502σ | 99.999999999% | 0.000000001% | 1 / 100000000000
7σ | 99.9999999997440% | 0.000000000256% | 1 / 390682215445
Relationship between standard deviation and mean
The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a "natural" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point. The precise statement is the following: suppose x1, ..., xn are real numbers and define the function

\sigma(r) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - r)^2}.

Using calculus or by completing the square, it is possible to show that σ(r) has a unique minimum at the mean: r = \bar{x}.
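One way to make the completing-the-square step explicit: writing x_i - r = (x_i - \bar{x}) + (\bar{x} - r),

\sigma(r)^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - r)^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 + (\bar{x} - r)^2,

since the cross term \frac{2(\bar{x} - r)}{n}\sum_{i=1}^{n}(x_i - \bar{x}) vanishes. The first term does not depend on r, and the second is minimized (to zero) exactly at r = \bar{x}.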
Variability can also be measured by the coefficient of variation, which is the ratio of the standard deviation to the
mean. It is a dimensionless number.
Standard deviation of the mean
Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by

\sigma_{\text{mean}} = \frac{\sigma}{\sqrt{N}},

where N is the number of observations in the sample used to estimate the mean. This can easily be proven with (see basic properties of the variance):

\operatorname{var}(\bar{X}) = \operatorname{var}\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) = \frac{1}{N^2}\sum_{i=1}^{N}\operatorname{var}(X_i) = \frac{\sigma^2}{N},

resulting in:

\sigma_{\text{mean}} = \sqrt{\operatorname{var}(\bar{X})} = \frac{\sigma}{\sqrt{N}}.
It should be emphasized that, in order to estimate the standard deviation of the mean σ_mean, it is necessary to know the standard deviation of the entire population σ beforehand. However, in most applications this parameter is unknown. For example, if a series of 10 measurements of a previously unknown quantity is performed in a laboratory, it is possible to calculate the resulting sample mean and sample standard deviation, but it is impossible to calculate the exact standard deviation of the mean.
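A small simulation illustrates the \sqrt{N} scaling; the population parameters and sample counts below are arbitrary.

```python
# The SD of many simulated sample means matches sigma / sqrt(N).
import math
import random

random.seed(1)
sigma, n_means, N = 2.0, 10_000, 25

means = [sum(random.gauss(0, sigma) for _ in range(N)) / N for _ in range(n_means)]
grand = sum(means) / n_means
sd_of_means = math.sqrt(sum((m - grand) ** 2 for m in means) / n_means)

print(sd_of_means, sigma / math.sqrt(N))  # both ~0.4
```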
Rapid calculation methods
The following two formulas can represent a running (repeatedly updated) standard deviation. A set of two power sums s1 and s2 are computed over a set of N values of x, denoted as x1, ..., xN:

s_j = \sum_{k=1}^{N} x_k^j.

Given the results of these running summations, the values N, s1, s2 can be used at any time to compute the current value of the running standard deviation:

\sigma = \frac{\sqrt{N s_2 - s_1^2}}{N}.

Similarly for sample standard deviation,

s = \sqrt{\frac{N s_2 - s_1^2}{N(N-1)}}.

In a computer implementation, as the three sj sums become large, we need to consider round-off error, arithmetic overflow, and arithmetic underflow. The method below calculates the running sums method with reduced rounding errors. This is a "one pass" algorithm for calculating variance of n samples without the need to store prior data during the calculation. Applying this method to a time series will result in successive values of standard deviation corresponding to n data points as n grows larger with each new sample, rather than a constant-width sliding window computation.

A_1 = x_1, \qquad A_k = A_{k-1} + \frac{x_k - A_{k-1}}{k} \quad \text{for } k = 2, \ldots, n,

where A is the mean value, and

Q_1 = 0, \qquad Q_k = Q_{k-1} + (x_k - A_{k-1})(x_k - A_k).

Sample variance:

s_n^2 = \frac{Q_n}{n-1}.

Population variance:

\sigma_n^2 = \frac{Q_n}{n}.
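The recurrences above (often attributed to Welford) transcribe directly into Python:

```python
# One-pass running standard deviation: A is the running mean, Q accumulates
# the sum of squared deviations with reduced rounding error.
import math

def running_sd(xs):
    A, Q = 0.0, 0.0
    for k, x in enumerate(xs, start=1):
        A_prev = A
        A += (x - A_prev) / k
        Q += (x - A_prev) * (x - A)
    n = len(xs)
    return math.sqrt(Q / n), math.sqrt(Q / (n - 1))  # population, sample

print(running_sd([2, 4, 4, 4, 5, 5, 7, 9]))  # (2.0, ~2.138)
```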
Weighted calculation
When the values xi are weighted with unequal weights wi, the power sums s0, s1, s2 are each computed as:

s_j = \sum_{i=1}^{N} w_i x_i^j.

And the standard deviation equations remain unchanged. Note that s0 is now the sum of the weights and not the number of samples N.
The incremental method with reduced rounding errors can also be applied, with some additional complexity. A running sum of weights must be computed for each k from 1 to n:

W_0 = 0, \qquad W_k = W_{k-1} + w_k,

and places where 1/k is used above must be replaced by w_k/W_k:

A_1 = x_1, \qquad A_k = A_{k-1} + \frac{w_k}{W_k}(x_k - A_{k-1}),
Q_1 = 0, \qquad Q_k = Q_{k-1} + \frac{w_k W_{k-1}}{W_k}(x_k - A_{k-1})^2.

In the final division,

\sigma_n^2 = \frac{Q_n}{W_n} \qquad \text{and} \qquad s_n^2 = \frac{Q_n}{W_n}\cdot\frac{n'}{n'-1},

where n is the total number of elements, and n' is the number of elements with non-zero weights. The above formulas become equal to the simpler formulas given above if weights are taken as equal to one.
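A weighted transcription of the same idea, following the W_k, A_k, Q_k recurrences above; with unit weights it reduces to the unweighted algorithm.

```python
# Weighted one-pass (population) standard deviation with running weight sum W.
import math

def weighted_sd(pairs):  # iterable of (value, weight)
    W, A, Q = 0.0, 0.0, 0.0
    for x, w in pairs:
        W_prev, W = W, W + w
        delta = x - A
        A += (w / W) * delta
        Q += w * W_prev / W * delta ** 2
    return math.sqrt(Q / W)

print(weighted_sd([(2, 1), (4, 3), (5, 2), (7, 1), (9, 1)]))  # 2.0
```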
Combining standard deviations
Population-based statistics
The populations of sets, which may overlap, can be calculated simply as follows:

N_{X \cup Y} = N_X + N_Y - N_{X \cap Y}.

Standard deviations of non-overlapping (X ∩ Y = ∅) sub-populations can be aggregated as follows if the size (actual or relative to one another) and means of each are known:

\mu_{X \cup Y} = \frac{N_X \mu_X + N_Y \mu_Y}{N_X + N_Y},

\sigma_{X \cup Y} = \sqrt{\frac{N_X \sigma_X^2 + N_Y \sigma_Y^2}{N_X + N_Y} + \frac{N_X N_Y}{(N_X + N_Y)^2}\left(\mu_X - \mu_Y\right)^2}.

For example, suppose it is known that the average American man has a mean height of 70 inches with a standard deviation of three inches and that the average American woman has a mean height of 65 inches with a standard deviation of two inches. Also assume that the number of men, N, is equal to the number of women. Then the mean and standard deviation of heights of American adults could be calculated as:

\mu = \frac{70 + 65}{2} = 67.5 \text{ inches}, \qquad \sigma = \sqrt{\frac{3^2 + 2^2}{2} + \frac{(70 - 65)^2}{4}} = \sqrt{12.75} \approx 3.57 \text{ inches}.
For the more general case of M non-overlapping populations, X1 through XM, and the aggregate population X = \bigcup_i X_i:

\mu_X = \frac{\sum_i N_{X_i}\mu_{X_i}}{\sum_i N_{X_i}}, \qquad \sigma_X = \sqrt{\frac{\sum_i N_{X_i}\left(\sigma_{X_i}^2 + \mu_{X_i}^2\right)}{\sum_i N_{X_i}} - \mu_X^2}.

If the size (actual or relative to one another), mean, and standard deviation of two overlapping populations are known for the populations as well as their intersection, then the standard deviation of the overall population can still be calculated as follows:

\mu_{X \cup Y} = \frac{1}{N_{X \cup Y}}\left(N_X \mu_X + N_Y \mu_Y - N_{X \cap Y}\,\mu_{X \cap Y}\right),

\sigma_{X \cup Y} = \sqrt{\frac{N_X\left(\sigma_X^2 + \mu_X^2\right) + N_Y\left(\sigma_Y^2 + \mu_Y^2\right) - N_{X \cap Y}\left(\sigma_{X \cap Y}^2 + \mu_{X \cap Y}^2\right)}{N_{X \cup Y}} - \mu_{X \cup Y}^2}.

If two or more sets of data are being added together datapoint by datapoint, the standard deviation of the result can be calculated if the standard deviation of each data set and the covariance between each pair of data sets is known:

\sigma_X = \sqrt{\sum_i \sigma_{X_i}^2 + \sum_{i \ne j}\operatorname{cov}(X_i, X_j)}.

For the special case where no correlation exists between any pair of data sets, then the relation reduces to the sum of the variances:

\sigma_X = \sqrt{\sum_i \sigma_{X_i}^2}.
Sample-based statistics
Standard deviations of non-overlapping (X ∩ Y = ∅) sub-samples can be aggregated as follows if the actual size and means of each are known. Writing the sums of observations and of their squares for each sub-sample as \sum x = N\bar{x} and \sum x^2 = (N - 1)s^2 + N\bar{x}^2:

\bar{x}_{X \cup Y} = \frac{N_X \bar{x}_X + N_Y \bar{x}_Y}{N_X + N_Y},

s_{X \cup Y} = \sqrt{\frac{(N_X - 1)s_X^2 + N_X \bar{x}_X^2 + (N_Y - 1)s_Y^2 + N_Y \bar{x}_Y^2 - (N_X + N_Y)\,\bar{x}_{X \cup Y}^2}{N_X + N_Y - 1}}.

For the more general case of M non-overlapping data sets, X1 through XM, and the aggregate data set X = \bigcup_i X_i, the same sums are accumulated over all sub-samples before the final division.
If the size, mean, and standard deviation of two overlapping samples are known for the samples as well as their intersection, then the standard deviation of the aggregated sample can still be calculated in the same way, subtracting the sums contributed by the intersection once. In general:

s_{X \cup Y} = \sqrt{\frac{\left(\sum x^2\right)_X + \left(\sum x^2\right)_Y - \left(\sum x^2\right)_{X \cap Y} - N_{X \cup Y}\,\bar{x}_{X \cup Y}^2}{N_{X \cup Y} - 1}}.
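A sketch of the non-overlapping case via the moment sums Σx and Σx², checked against the men/women example (only relative sizes matter):

```python
# Combine two sub-populations from (size, mean, population SD) triples.
import math

def combine(n1, m1, s1, n2, m2, s2):
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    var = (n1 * (s1**2 + m1**2) + n2 * (s2**2 + m2**2)) / n - mean**2
    return mean, math.sqrt(var)

print(combine(1, 70, 3, 1, 65, 2))  # (67.5, ~3.571)
```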
History

The term standard deviation was first used in writing by Karl Pearson in 1894, following his use of it in lectures. This was as a replacement for earlier alternative names for the same idea: for example, Gauss used mean error. It may be worth noting in passing that the mean error is mathematically distinct from the standard deviation.
External links

• Hazewinkel, Michiel, ed. (2001), "Quadratic deviation", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• A simple way to understand Standard Deviation
• Standard Deviation – an explanation without maths
• Standard Deviation, an elementary introduction
• Standard Deviation while Financial Modeling in Excel
• Standard Deviation, a simpler explanation for writers and journalists
• The concept of Standard Deviation is shown in this 8-foot-tall (2.4 m) Probability Machine (named Sir Francis), comparing stock market returns to the randomness of the beans dropping through the quincunx pattern, from Index Funds Advisors.
Samuelson's inequality
In statistics, Samuelson's inequality, named after the economist Paul Samuelson,[1] also called the Laguerre–Samuelson inequality,[2] after the mathematician Edmond Laguerre, states that every one of any collection x1, ..., xn lies within √(n − 1) sample standard deviations of their sample mean. In other words, if we let

\bar{x} = \frac{x_1 + \cdots + x_n}{n}

be the sample mean and

s = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}

be the standard deviation of the sample, then

\bar{x} - s\sqrt{n-1} \le x_j \le \bar{x} + s\sqrt{n-1} \qquad \text{for } j = 1, \ldots, n.

Equality holds on the left if and only if the n − 1 smallest of the n numbers are equal to each other, and on the right iff the n − 1 largest ones are equal.
Samuelson's inequality may be considered a reason why studentization of residuals should be done externally.
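A quick numerical check of the inequality, with s computed in the 1/n form used above; the data are arbitrary.

```python
import math

x = [1.0, 2.0, 2.0, 3.0, 10.0]
n = len(x)
mean = sum(x) / n
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / n)

lo, hi = mean - s * math.sqrt(n - 1), mean + s * math.sqrt(n - 1)
print(all(lo <= xi <= hi for xi in x), (lo, hi))  # True, every point inside
```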
Relationship to polynomials
Samuelson was not the first to describe this relationship. The first to discover this relationship was probably Laguerre in 1880 while investigating the roots (zeros) of polynomials.[4][5]
Consider a polynomial with all roots real:

a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 = 0.

Without loss of generality let a_n = 1, and let

t_1 = \sum_i x_i \qquad \text{and} \qquad t_2 = \sum_i x_i^2,

where the x_i are the roots. In terms of the coefficients,

t_1 = -a_{n-1} \qquad \text{and} \qquad t_2 = a_{n-1}^2 - 2a_{n-2}.

Laguerre showed that the roots of this polynomial were bounded by

\frac{t_1}{n} \pm b\sqrt{n-1}, \qquad \text{where } b = \sqrt{\frac{t_2}{n} - \frac{t_1^2}{n^2}}.

Inspection shows that t_1/n is the mean of the roots and that b is the standard deviation of the roots.
Laguerre failed to notice this relationship with the means and standard deviations of the roots, being more interested in the bounds themselves. This relationship permits a rapid estimate of the bounds of the roots and may be of use in their location.
When the coefficients a_{n-1} and a_{n-2} are both zero, no information can be obtained about the location of the roots.
References

[1] Paul Samuelson, "How Deviant Can You Be?", Journal of the American Statistical Association, volume 63, number 324 (December, 1968), pp. 1522–1525.
[2] Jensen, Shane Tyler (1999). The Laguerre–Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory (http://www.collectionscanada.gc.ca/obj/s4/f2/dsk1/tape10/PQDD_0027/MQ50799.pdf). MSc Thesis. Department of Mathematics and Statistics, McGill University.
[3] Advances in Inequalities from Probability Theory and Statistics, by Neil S. Barnett and Sever Silvestru Dragomir, Nova Publishers, 2008, page 164.
[4] Jensen, Shane Tyler (1999). The Laguerre–Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory (http://www.collectionscanada.gc.ca/obj/s4/f2/dsk1/tape10/PQDD_0027/MQ50799.pdf). MSc Thesis. Department of Mathematics and Statistics, McGill University.
[5] Laguerre E. (1880). Mémoire pour obtenir par approximation les racines d'une équation algébrique qui a toutes les racines réelles. Nouv Ann Math 2e série, 19, 161–172, 193–202.
Distance correlation
In statistics and in probability theory, distance correlation is a measure of statistical dependence between two
random variables or two random vectors of arbitrary, not necessarily equal dimension. An important property is that
this measure of dependence is zero if and only if the random variables are statistically independent. This measure is
derived from a number of other quantities that are used in its specification, specifically: distance variance, distance
standard deviation and distance covariance. These take the same roles as the ordinary moments with
corresponding names in the specification of the Pearson product-moment correlation coefficient.
These distance-based measures can be put into an indirect relationship to the ordinary moments by an alternative
formulation (described below) using ideas related to Brownian motion, and this has led to the use of names such as
Brownian covariance and Brownian distance covariance.
The classical measure of dependence, the Pearson correlation coefficient,[1] is mainly sensitive to a linear relationship between two variables. Distance correlation was introduced in 2005 by Gábor J. Székely in several lectures to address this deficiency of Pearson's correlation, namely that it can easily be zero for dependent variables. Correlation = 0 (uncorrelatedness) does not imply independence, while distance correlation = 0 does imply independence. The first results on distance correlation were published in 2007 and 2009.[2][3] It was proved that distance covariance is the same as the Brownian covariance. These measures are examples of energy distances.

[Figure: Several sets of (x, y) points, with the distance correlation coefficient of x and y for each set. Compare to the graph on correlation.]
Distance covariance
Let us start with the definition of the sample distance covariance. Let (Xk, Yk), k = 1, 2, ..., n be a statistical sample from a pair of real valued or vector valued random variables (X, Y). First, compute all pairwise distances

a_{j,k} = \|X_j - X_k\|, \qquad b_{j,k} = \|Y_j - Y_k\|, \qquad j, k = 1, 2, \ldots, n,

where || ⋅ || denotes Euclidean norm. That is, compute the n by n distance matrices (a_{j,k}) and (b_{j,k}). Then take all doubly centered distances

A_{j,k} := a_{j,k} - \bar{a}_{j\cdot} - \bar{a}_{\cdot k} + \bar{a}_{\cdot\cdot}, \qquad B_{j,k} := b_{j,k} - \bar{b}_{j\cdot} - \bar{b}_{\cdot k} + \bar{b}_{\cdot\cdot},

where \bar{a}_{j\cdot} is the j-th row mean, \bar{a}_{\cdot k} is the k-th column mean, and \bar{a}_{\cdot\cdot} is the grand mean of the distance matrix of the X sample. The notation is similar for the b values. (In the matrices of centered distances (A_{j,k}) and (B_{j,k}) all rows and all columns sum to zero.) The squared sample distance covariance is simply the arithmetic average of the products A_{j,k}B_{j,k}:

\operatorname{dCov}^2_n(X, Y) := \frac{1}{n^2}\sum_{j,k=1}^{n} A_{j,k} B_{j,k}.
The statistic Tn = n dCov2n(X, Y) determines a consistent multivariate test of independence of random vectors in
arbitrary dimensions. For an implementation see dcov.test function in the energy package for R.[4]
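For illustration, a direct NumPy transcription of the definitions above (a sketch, not the energy package's implementation; function names are ours):

```python
import numpy as np

def dist_matrix(z):
    z = np.asarray(z, dtype=float)
    if z.ndim == 1:
        z = z[:, None]
    return np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(-1))

def double_center(d):
    # A_{j,k} = a_{j,k} - row mean - column mean + grand mean
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def dcov2(x, y):
    A, B = double_center(dist_matrix(x)), double_center(dist_matrix(y))
    return (A * B).mean()  # squared sample distance covariance

def dcor(x, y):
    denom = np.sqrt(dcov2(x, x) * dcov2(y, y))
    return 0.0 if denom == 0 else np.sqrt(max(dcov2(x, y), 0.0) / denom)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
print(dcor(x, x ** 2), dcor(x, rng.normal(size=200)))  # clearly nonzero vs ~0
```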
The population value of distance covariance can be defined along the same lines. Let X be a random variable that takes values in a p-dimensional Euclidean space with probability distribution μ and let Y be a random variable that takes values in a q-dimensional Euclidean space with probability distribution ν, and suppose that X and Y have finite expectations. Write (X, Y), (X', Y'), and (X'', Y'') for independent and identically distributed copies. Finally, define the population value of squared distance covariance of X and Y as

\operatorname{dCov}^2(X, Y) := \operatorname{E}\|X - X'\|\|Y - Y'\| + \operatorname{E}\|X - X'\|\operatorname{E}\|Y - Y'\| - 2\operatorname{E}\|X - X'\|\|Y - Y''\|.

One can show that this is equivalent to the following definition:

\operatorname{dCov}^2(X, Y) = \operatorname{E}\|X - X'\|\|Y - Y'\| + \operatorname{E}\|X - X'\|\operatorname{E}\|Y - Y'\| - \operatorname{E}\|X - X'\|\|Y - Y''\| - \operatorname{E}\|X - X''\|\|Y - Y'\|,

where E denotes expected value, and (X, Y), (X', Y'), and (X'', Y'') are independent and identically distributed. Distance covariance can be expressed in terms of Pearson's covariance, cov, as follows:

\operatorname{dCov}^2(X, Y) = \operatorname{cov}(\|X - X'\|, \|Y - Y'\|) - 2\operatorname{cov}(\|X - X'\|, \|Y - Y''\|).
This identity shows that the distance covariance is not the same as the covariance of distances, cov(‖X − X′‖, ‖Y − Y′‖). This can be zero even if X and Y are not independent.
Alternately, the squared distance covariance can be defined as the weighted L2 norm of the distance between the joint characteristic function of the random variables and the product of their marginal characteristic functions:[5]

\operatorname{dCov}^2(X, Y) = \frac{1}{c_p c_q}\int_{\mathbb{R}^{p+q}} \frac{\left|\varphi_{X,Y}(s, t) - \varphi_X(s)\varphi_Y(t)\right|^2}{|s|_p^{1+p}\,|t|_q^{1+q}}\, ds\, dt,

where φ_{X,Y}(s, t), φ_X(s), and φ_Y(t) are the characteristic functions of (X, Y), X, and Y, respectively, p, q denote the Euclidean dimension of X and Y, and thus of s and t, and c_p, c_q are constants. The weight function (c_p c_q |s|_p^{1+p} |t|_q^{1+q})^{-1} is chosen to produce a scale equivariant and rotation invariant measure that doesn't go to zero for dependent variables.
One interpretation of the characteristic function definition is that the variables e^{isX} and e^{itY} are cyclic representations of X and Y with different periods given by s and t, and the expression φ_{X,Y}(s, t) − φ_X(s)φ_Y(t) in the numerator of the characteristic function definition of distance covariance is simply the classical covariance of e^{isX} and e^{itY}. The characteristic function definition clearly shows that dCov²(X, Y) = 0 if and only if X and Y are independent.
Distance variance
The distance variance is a special case of distance covariance when the two variables are identical. The population value of distance variance is the square root of

\operatorname{dVar}^2(X) := \operatorname{E}\|X - X'\|^2 + \operatorname{E}^2\|X - X'\| - 2\operatorname{E}\left(\|X - X'\|\|X - X''\|\right),

where E denotes the expected value, X' is an independent and identically distributed copy of X, and X'' is independent of X and X' and has the same distribution as X. The sample distance variance is the square root of

\operatorname{dVar}^2_n(X) := \operatorname{dCov}^2_n(X, X) = \frac{1}{n^2}\sum_{j,k=1}^{n} A_{j,k}^2,

which is a relative of Corrado Gini's mean difference introduced in 1912 (but Gini did not work with centered distances).
Distance standard deviation
The distance standard deviation is the square root of the distance variance.
Distance correlation
The distance correlation of two random variables is obtained by dividing their distance covariance by the product of their distance standard deviations. The distance correlation is

\operatorname{dCor}(X, Y) = \frac{\operatorname{dCov}(X, Y)}{\sqrt{\operatorname{dVar}(X)\,\operatorname{dVar}(Y)}}

when dVar(X) dVar(Y) > 0 (and zero otherwise), and the sample distance correlation is defined by substituting the sample distance covariance and distance variances for the population coefficients above.
For easy computation of sample distance correlation see the dcor function in the energy package for R.
Properties

Distance correlation

(i) 0 ≤ dCor_n(X, Y) ≤ 1 and 0 ≤ dCor(X, Y) ≤ 1.
(ii) dCor(X, Y) = 0 if and only if X and Y are independent.
(iii) dCor_n(X, Y) = 1 implies that the dimensions of the linear spaces spanned by the X and Y samples respectively are almost surely equal, and if we assume that these subspaces are equal, then in this subspace Y = a + bXC for some vector a, scalar b, and orthonormal matrix C.
Distance covariance

(i) dCov_n(X, Y) ≥ 0 and dCov(X, Y) ≥ 0.
(ii) dCov²(a_1 + b_1 C_1 X, a_2 + b_2 C_2 Y) = |b_1 b_2| dCov²(X, Y) for all constant vectors a_1, a_2, scalars b_1, b_2, and orthonormal matrices C_1, C_2.
(iii) If the random vectors (X_1, Y_1) and (X_2, Y_2) are independent then

\operatorname{dCov}(X_1 + X_2, Y_1 + Y_2) \le \operatorname{dCov}(X_1, Y_1) + \operatorname{dCov}(X_2, Y_2).

Equality holds if and only if X_1 and Y_1 are both constants, or X_2 and Y_2 are both constants, or X_1, X_2, Y_1, Y_2 are mutually independent.
(iv) dCov(X, Y) = 0 if and only if X and Y are independent.
This last property is the most important effect of working with centered distances.
The statistic dCov²_n(X, Y) is a biased estimator of dCov²(X, Y); an explicit expression for its expectation under independence of X and Y is given in the rejoinder of Székely and Rizzo (2009).[6]
Distance variance

(i) dVar(X) = 0 if and only if X = E[X] almost surely.
(ii) dVar_n(X) = 0 if and only if every sample observation is identical.
(iii) dVar(a + bCX) = |b| dVar(X) for all constant vectors a, scalars b, and orthonormal matrices C.
(iv) If X and Y are independent then dVar(X + Y) ≤ dVar(X) + dVar(Y).
Equality holds in (iv) if and only if one of the random variables X or Y is a constant.
Generalization

Distance covariance can be generalized to include powers of Euclidean distance. Define

\operatorname{dCov}^2(X, Y; \alpha) := \operatorname{E}\|X - X'\|^\alpha \|Y - Y'\|^\alpha + \operatorname{E}\|X - X'\|^\alpha \operatorname{E}\|Y - Y'\|^\alpha - 2\operatorname{E}\|X - X'\|^\alpha \|Y - Y''\|^\alpha.

Then for every 0 < α < 2, X and Y are independent if and only if dCov²(X, Y; α) = 0. It is important to note that this characterization does not hold for exponent α = 2; in this case for bivariate (X, Y), dCor(X, Y; α = 2) is a deterministic function of the Pearson correlation. If a_{j,k} and b_{j,k} are the α powers of the corresponding distances, 0 < α ≤ 2, then the α sample distance covariance can be defined as the nonnegative number for which

\operatorname{dCov}^2_n(X, Y; \alpha) := \frac{1}{n^2}\sum_{j,k=1}^{n} A_{j,k} B_{j,k}.
One can extend dCov to metric-space-valued random variables X and Y: If X has law μ in a metric space (M, d), then define

a_\mu(x) := \operatorname{E}[d(X, x)],

and (provided a_\mu is finite, i.e., X has finite first moment),

d_\mu(x, x') := d(x, x') - a_\mu(x) - a_\mu(x').

Then if Y has law ν (in a possibly different metric space with finite first moment), define

\operatorname{dCov}^2(X, Y) := \operatorname{E}\left[d_\mu(X, X')\, d_\nu(Y, Y')\right].

This is non-negative for all such X, Y iff both metric spaces have negative type.[7] Here, a metric space (M, d) has negative type if (M, d^{1/2}) is isometric to a subset of a Hilbert space.[8] If both metric spaces have strong negative type, then dCov²(X, Y) = 0 iff X and Y are independent.
Alternative formulation: Brownian covariance
Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:

\operatorname{cov}(X, Y)^2 = \operatorname{E}\left[(X - \operatorname{E}X)(X' - \operatorname{E}X')(Y - \operatorname{E}Y)(Y' - \operatorname{E}Y')\right],
where E denotes the expected value and the prime denotes independent and identically distributed copies. We need the following generalization of this formula. If U(s), V(t) are arbitrary random processes defined for all real s and t, then define the U-centered version of X by

X_U := U(X) - \operatorname{E}_X\left[U(X) \mid U\right],

whenever the subtracted conditional expected value exists, and denote by Y_V the V-centered version of Y.[9][10] The (U, V) covariance of (X, Y) is defined as the nonnegative number whose square is

\operatorname{cov}^2_{U,V}(X, Y) := \operatorname{E}\left[X_U Y_V X'_U Y'_V\right],

whenever the right-hand side is nonnegative and finite. The most important example is when U and V are two-sided independent Brownian motions/Wiener processes with expectation zero and covariance |s| + |t| − |s − t| = 2 min(s, t). (This is twice the covariance of the standard Wiener process; here the factor 2 simplifies the computations.) In this case the (U, V) covariance is called Brownian covariance and is denoted by W(X, Y).
There is a surprising coincidence: the Brownian covariance is the same as the distance covariance:

\operatorname{W}(X, Y) = \operatorname{dCov}(X, Y),

and thus Brownian correlation is the same as distance correlation.
On the other hand, if we replace the Brownian motion with the deterministic identity function id, then Cov_id(X, Y) is simply the absolute value of the classical Pearson covariance,

\operatorname{Cov}_{\mathrm{id}}(X, Y) = \left|\operatorname{cov}(X, Y)\right|.
[1] Pearson (1895)
[2] Székely, Rizzo and Bakirov (2007)
[3] Székely & Rizzo (2009)
[4] energy package for R (http://cran.us.r-project.org/web/packages/energy/index.html)
[5] Székely & Rizzo (2009) Theorem 7, (3.7), p. 1249.
[6] Székely and Rizzo (2009), Rejoinder
[7] Lyons, R. (2011) "Distance covariance in metric spaces".
[8] Klebanov, L. B. (2005) N-distances and their Applications, Karolinum Press, Charles University, Prague.
[9] Bickel & Xu (2009)
[10] Kosorok (2009)
References

• Bickel, P. J. and Xu, Y. (2009). "Discussion of: Brownian distance covariance", Annals of Applied Statistics, 3 (4), 1266–1269. doi:10.1214/09-AOAS312A
• Gini, C. (1912). Variabilità e Mutabilità. Bologna: Tipografia di Paolo Cuppini.
• Pearson, K. (1895). "Note on regression and inheritance in the case of two parents", Proceedings of the Royal Society, 58, 240–242.
• Pearson, K. (1920). "Notes on the history of correlation", Biometrika, 13, 25–45.
• Székely, G. J., Rizzo, M. L. and Bakirov, N. K. (2007). "Measuring and testing independence by correlation of distances", Annals of Statistics, 35 (6), 2769–2794. doi:10.1214/009053607000000505
• Székely, G. J. and Rizzo, M. L. (2009). "Brownian distance covariance", Annals of Applied Statistics, 3 (4), 1233–1303. doi:10.1214/09-AOAS312
• Kosorok, M. R. (2009). "Discussion of: Brownian Distance Covariance", Annals of Applied Statistics, 3 (4), 1270–1278. doi:10.1214/09-AOAS312B
External links

• E-statistics (energy statistics)
Geometric standard deviation
In probability theory and statistics, the geometric standard deviation describes how spread out a set of numbers is when the preferred average is the geometric mean. For such data, it may be preferred to the more usual standard deviation. Note that unlike the usual arithmetic standard deviation, the geometric standard deviation is a multiplicative factor, and thus is dimensionless, rather than having the same dimension as the input values.
If the geometric mean of a set of numbers {A1, A2, ..., An} is denoted as μ_g, then the geometric standard deviation is

\sigma_g = \exp\left(\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\ln\frac{A_i}{\mu_g}\right)^2}\right).
Derivation

If the geometric mean is

\mu_g = \sqrt[n]{A_1 A_2 \cdots A_n},

then taking the natural logarithm of both sides results in

\ln \mu_g = \frac{1}{n}\ln(A_1 A_2 \cdots A_n).

The logarithm of a product is a sum of logarithms (assuming A_i is positive for all i), so

\ln \mu_g = \frac{1}{n}\left(\ln A_1 + \ln A_2 + \cdots + \ln A_n\right).

It can now be seen that \ln \mu_g is the arithmetic mean of the set \{\ln A_1, \ln A_2, \ldots, \ln A_n\}, therefore the arithmetic standard deviation of this same set should be

\ln \sigma_g = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\ln A_i - \ln \mu_g\right)^2}.

This simplifies to

\sigma_g = \exp\left(\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\ln\frac{A_i}{\mu_g}\right)^2}\right).
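The derivation translates directly into code; the data below are illustrative.

```python
# Geometric mean and geometric standard deviation via log transforms.
import math

A = [1.0, 2.0, 4.0, 8.0]
logs = [math.log(a) for a in A]

ln_mu = sum(logs) / len(logs)
mu_g = math.exp(ln_mu)                                        # geometric mean
sigma_g = math.exp(math.sqrt(sum((l - ln_mu) ** 2 for l in logs) / len(logs)))

# One "geometric sigma" band is the multiplicative range [mu_g/sigma_g, mu_g*sigma_g].
print(mu_g, sigma_g)
```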
Geometric standard score
The geometric version of the standard score is

z = \frac{\ln x - \ln \mu_g}{\ln \sigma_g}.

If the geometric mean, standard deviation, and z-score of a datum are known, then the raw score can be reconstructed by

x = \mu_g\, \sigma_g^{\,z}.
Relationship to log-normal distribution
The geometric standard deviation is related to the log-normal distribution. The log-normal distribution is a
distribution which is normal for the logarithm transformed values. By a simple set of logarithm transformations we
see that the geometric standard deviation is the exponentiated value of the standard deviation of the log transformed
values (e.g. exp(stdev(ln(A))));
As such, the geometric mean and the geometric standard deviation of a sample of data from a log-normally
distributed population may be used to find the bounds of confidence intervals analogously to the way the arithmetic
mean and standard deviation are used to bound confidence intervals for a normal distribution. See discussion in
log-normal distribution for details.
Bregman divergence
In mathematics, a Bregman divergence or Bregman distance is similar to a metric, but does not satisfy the triangle
inequality nor symmetry. There are two ways in which Bregman divergences are important. Firstly, they generalize
squared Euclidean distance to a class of distances that all share similar properties. Secondly, they bear a strong
connection to exponential families of distributions; as has been shown by (Banerjee et al. 2005), there is a bijection
between regular exponential families and regular Bregman divergences.
Bregman divergences are named after L. M. Bregman, who introduced the concept in 1967. More recently
researchers in geometric algorithms have shown that many important algorithms can be generalized from Euclidean
metrics to distances defined by Bregman divergence (Banerjee et al. 2005; Nielsen and Nock 2006; Boissonnat et al. 2010).

Definition

Let F : Ω → ℝ be a continuously-differentiable, real-valued and strictly convex function defined on a closed convex set Ω. The Bregman distance associated with F for points p, q ∈ Ω is the difference between the value of F at point p and the value of the first-order Taylor expansion of F around point q evaluated at point p:

D_F(p, q) = F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle.
Properties

• Non-negativity: D_F(p, q) ≥ 0 for all p, q. This is a consequence of the convexity of F.
• Convexity: D_F(p, q) is convex in its first argument, but not necessarily in the second argument (see [1]).
• Linearity: If we think of the Bregman distance as an operator on the function F, then it is linear with respect to non-negative coefficients. In other words, for F_1, F_2 strictly convex and differentiable, and λ ≥ 0,

D_{F_1 + \lambda F_2}(p, q) = D_{F_1}(p, q) + \lambda D_{F_2}(p, q).

• Duality: The function F has a convex conjugate F^*. The Bregman distance defined with respect to F^* has an interesting relationship to D_F(p, q):

D_{F^*}(p^*, q^*) = D_F(q, p),

where p^* = \nabla F(p) is the dual point corresponding to p (and similarly for q^*).
• A key result about Bregman divergences is that, given a random vector, the mean vector minimizes the expected
Bregman divergence from the random vector. This result generalizes the textbook result that the mean of a set
minimizes total squared error to elements in the set. This result was proved for the vector case by (Banerjee et al.
2005), and extended to the case of functions/distributions by (Frigyik et al. 2008). This result is important because
it further justifies using a mean as a representative of a random set, particularly in Bayesian estimation.
Examples

• Squared Euclidean distance D_F(p, q) = \|p - q\|^2 is the canonical example of a Bregman distance, generated by the convex function F(x) = \|x\|^2.
• The squared Mahalanobis distance D_F(p, q) = \tfrac{1}{2}(p - q)^\top Q (p - q), which is generated by the convex function F(x) = \tfrac{1}{2}x^\top Q x. This can be thought of as a generalization of the above squared Euclidean distance.
• The generalized Kullback–Leibler divergence

D_F(p, q) = \sum_i p_i \log\frac{p_i}{q_i} - \sum_i p_i + \sum_i q_i

is generated by the convex function F(p) = \sum_i p_i \log p_i.
• The Itakura–Saito distance

D_F(p, q) = \sum_i \left(\frac{p_i}{q_i} - \log\frac{p_i}{q_i} - 1\right)

is generated by the convex function F(p) = -\sum_i \log p_i.
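All four examples can be produced from the definition with one generic helper; this is a sketch with hand-written gradients, not any particular library's API.

```python
# D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>, for vectors as plain lists.
import math

def bregman(F, gradF, p, q):
    return F(p) - F(q) - sum(g * (pi - qi) for g, pi, qi in zip(gradF(q), p, q))

sq = lambda v: sum(x * x for x in v)              # F(x) = ||x||^2
sq_grad = lambda v: [2 * x for x in v]            # -> squared Euclidean distance

ent = lambda v: sum(x * math.log(x) for x in v)   # F(p) = sum p_i log p_i
ent_grad = lambda v: [math.log(x) + 1 for x in v] # -> generalized KL divergence

p, q = [0.2, 0.8], [0.5, 0.5]
print(bregman(sq, sq_grad, p, q))    # ||p - q||^2 = 0.18
print(bregman(ent, ent_grad, p, q))  # equals KL(p || q) since p, q are normalized
```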
Generalizing projective duality
A key tool in computational geometry is the idea of projective duality, which maps points to hyperplanes and vice
versa, while preserving incidence and above-below relationships. There are numerous analytical forms of the
projective dual: one common form maps the point p = (p_1, \ldots, p_d) to the hyperplane

x_{d+1} = \sum_{i=1}^{d} 2 p_i x_i - \sum_{i=1}^{d} p_i^2.

This mapping can be interpreted (identifying the hyperplane with its normal) as the convex conjugate mapping that takes the point p to its dual point p^* = \nabla F(p), where F defines the d-dimensional paraboloid x_{d+1} = \sum_{i=1}^{d} x_i^2.
If we now replace the paraboloid by an arbitrary convex function, we obtain a different dual mapping that retains the
incidence and above-below properties of the standard projective dual. This implies that natural dual concepts in
computational geometry like Voronoi diagrams and Delaunay triangulations retain their meaning in distance spaces
defined by an arbitrary Bregman divergence. Thus, algorithms from "normal" geometry extend directly to these
spaces (Boissonnat, Nielsen and Nock, 2010)
Matrix Bregman divergences, functional Bregman divergences and the
submodular Bregman divergences
Bregman divergences can also be defined between matrices, between functions, and between measures
(distributions). Bregman divergences between matrices include the Stein's loss and von Neumann entropy. Bregman
divergences between functions include total squared error, relative entropy, and squared bias; see the references by
Frigyik et al. below for definitions and properties. Similarly Bregman divergences have also been defined over sets,
through a submodular set function which is known as the discrete analog of a convex function. The submodular
Bregman divergences subsume a number of discrete distance measures, like the Hamming distance, precision and
recall, mutual information and some other set-based distance measures (see Iyer & Bilmes, 2012, for more details and properties of the submodular Bregman).
For a list of common matrix Bregman divergences, see Table 15.1 in [2].
[1] "Joint and separate convexity of the Bregman Distance", by H. Bauschke and J. Borwein, in D. Butnariu, Y. Censor, and S. Reich, editors,
Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, Elsevier 2001
[2] "Matrix Information Geometry", R. Nock, B. Magdalou, E. Briys and F. Nielsen, pdf (http:/ / www1. univ-ag. fr/ ~rnock/ Articles/ Drafts/
book12-nmbn. pdf), from this book (http:/ / www. springerlink. com/ index/ 10. 1007/ 978-3-642-30232-9)
• Banerjee, Arindam; Merugu, Srujana; Dhillon, Inderjit S.; Ghosh, Joydeep (2005). "Clustering with Bregman divergences". Journal of Machine Learning Research 6: 1705–1749.
• Bregman, L. M. (1967). "The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming". USSR Computational Mathematics and Mathematical Physics 7 (3): 200–217. doi:10.1016/0041-5553(67)90040-7
• Frigyik, Bela A.; Srivastava, Santosh; Gupta, Maya R. (2008). "Functional Bregman Divergences and Bayesian Estimation of Distributions". IEEE Transactions on Information Theory 54 (11): 5130–5139. doi:10.1109/TIT.2008.929943
• Iyer, Rishabh; Bilmes, Jeff (2012). "The Submodular Bregman divergences and Lovász Bregman divergences with Applications". Conference on Neural Information Processing Systems.
• Frigyik, Bela A.; Srivastava, Santosh; Gupta, Maya R. (2008). An Introduction to Functional Derivatives. UWEE Tech Report 2008-0001. University of Washington, Dept. of Electrical Engineering.
• Nielsen, Frank; Nock, Richard (2009). "The dual Voronoi diagrams with respect to representational Bregman divergences". Proc. 6th International Symposium on Voronoi Diagrams. IEEE. doi:10.1109/ISVD.2009.15
• Nielsen, Frank; Nock, Richard (2007). "On the Centroids of Symmetrized Bregman Divergences". arXiv:0711.3242 [cs.CG].
• Nielsen, Frank; Boissonnat, Jean-Daniel; Nock, Richard (2007). "On Visualizing Bregman Voronoi diagrams". Proc. 23rd ACM Symposium on Computational Geometry (video track).
• Boissonnat, Jean-Daniel; Nielsen, Frank; Nock, Richard (2010). "Bregman Voronoi Diagrams". Discrete and Computational Geometry 44 (2).
• Nielsen, Frank; Nock, Richard (2006). "On approximating the smallest enclosing Bregman Balls". Proc. 22nd ACM Symposium on Computational Geometry. pp. 485–486. doi:10.1145/1137856.1137931
External links

• Bregman divergence interactive applet
• Bregman Voronoi diagram applet
• Exact Smallest Enclosing Bregman Ball applet
• Approximating smallest enclosing Bregman ball applet
• Sided and Symmetrized Bregman centroids
Bhattacharyya distance
In statistics, the Bhattacharyya distance measures the similarity of two discrete or continuous probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations. Both measures are named after A. Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute. The coefficient can be used to determine the relative closeness of the two samples being considered. It is used to measure the separability of classes in classification, and it is considered to be more reliable than the Mahalanobis distance, as the Mahalanobis distance is a particular case of the Bhattacharyya distance when the standard deviations of the two classes are the same. Therefore, when two classes have similar means but different standard deviations, the Mahalanobis distance would tend to zero, whereas the Bhattacharyya distance grows depending on the difference between the standard deviations.
Definition

For discrete probability distributions p and q over the same domain X, it is defined as:

D_B(p, q) = -\ln\left(BC(p, q)\right),

where

BC(p, q) = \sum_{x \in X} \sqrt{p(x)\, q(x)}

is the Bhattacharyya coefficient.
For continuous probability distributions, the Bhattacharyya coefficient is defined as:

BC(p, q) = \int \sqrt{p(x)\, q(x)}\, dx.

In either case, 0 ≤ BC(p, q) ≤ 1 and 0 ≤ D_B(p, q) ≤ ∞. D_B does not obey the triangle inequality, but the Hellinger distance \sqrt{1 - BC(p, q)} does obey the triangle inequality.
In its simplest formulation, the Bhattacharyya distance between two classes under the normal distribution can be calculated[1] by extracting the mean and variances of two separate distributions or classes:

D_B(p, q) = \frac{1}{4}\ln\left(\frac{1}{4}\left(\frac{\sigma_p^2}{\sigma_q^2} + \frac{\sigma_q^2}{\sigma_p^2} + 2\right)\right) + \frac{1}{4}\left(\frac{(\mu_p - \mu_q)^2}{\sigma_p^2 + \sigma_q^2}\right),

where D_B(p, q) is the Bhattacharyya distance between the p and q distributions or classes, \sigma_p^2 is the variance of the p-th distribution, \mu_p is the mean of the p-th distribution, and p and q are two different distributions.
The Mahalanobis distance used in Fisher's Linear discriminant analysis is a particular case of the Bhattacharyya
Distance. When the variances of the two distributions are the same the first term of the distance is zero as this term
depends solely on the variances of the distributions (left case of the figure). The first term will grow as the variances
differ (right case of the figure). The second term, on the other hand, will be zero if the means are equal and is
inversely proportional to the variances.
For multivariate normal distributions p_i = \mathcal{N}(\mu_i, \Sigma_i),

D_B = \frac{1}{8}(\mu_1 - \mu_2)^\top \Sigma^{-1}(\mu_1 - \mu_2) + \frac{1}{2}\ln\left(\frac{\det \Sigma}{\sqrt{\det \Sigma_1 \det \Sigma_2}}\right),

where \mu_i and \Sigma_i are the means and covariances of the distributions, and \Sigma = \tfrac{\Sigma_1 + \Sigma_2}{2}.
Note that, in this case, the first term in the Bhattacharyya distance is related to the Mahalanobis distance.
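The univariate ("simplest") formula transcribes directly; inputs are means and variances, per the notation above.

```python
import math

def bhattacharyya_normal(mu_p, var_p, mu_q, var_q):
    term1 = 0.25 * math.log(0.25 * (var_p / var_q + var_q / var_p + 2.0))
    term2 = 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q)
    return term1 + term2

print(bhattacharyya_normal(0.0, 1.0, 0.0, 4.0))  # equal means: variance term only
print(bhattacharyya_normal(0.0, 1.0, 3.0, 1.0))  # equal variances: first term is 0
```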
Bhattacharyya coefficient
The Bhattacharyya coefficient is an approximate measurement of the amount of overlap between two statistical
samples. The coefficient can be used to determine the relative closeness of the two samples being considered.
Calculating the Bhattacharyya coefficient involves a rudimentary form of integration of the overlap of the two samples. The interval of the values of the two samples is split into a chosen number of partitions, and the number of members of each sample in each partition is used in the following formula:

BC(a, b) = \sum_{i=1}^{n} \sqrt{a_i\, b_i},

where, considering the samples a and b, n is the number of partitions, and a_i and b_i are the numbers of members of samples a and b in the i-th partition.
This formula is hence larger for each partition that has members from both samples, and larger for each partition with a large overlap of the two samples' members within it. The choice of the number of partitions depends on the number of members in each sample; too few partitions lose accuracy by overestimating the overlap region, and too many partitions lose accuracy by creating individual partitions with no members despite lying in a populated region of the sample space.
The Bhattacharyya coefficient will be 0 if there is no overlap at all due to the multiplication by zero in every
partition. This means the distance between fully separated samples will not be exposed by this coefficient alone.
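As a rough sketch of this partition-based estimate in Python (the binning choices and names here are assumptions for illustration; the counts are normalized to per-sample fractions so that the coefficient stays between 0 and 1):

import numpy as np

def bhattacharyya_coefficient(sample_a, sample_b, n_partitions=20):
    # Split the combined range of the two samples into n partitions
    lo = min(np.min(sample_a), np.min(sample_b))
    hi = max(np.max(sample_a), np.max(sample_b))
    edges = np.linspace(lo, hi, n_partitions + 1)
    # Count members of each sample in each partition
    a_counts, _ = np.histogram(sample_a, bins=edges)
    b_counts, _ = np.histogram(sample_b, bins=edges)
    a_frac = a_counts / len(sample_a)
    b_frac = b_counts / len(sample_b)
    # Partitions containing members of only one sample contribute zero
    return float(np.sum(np.sqrt(a_frac * b_frac)))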
The Bhattacharyya distance is widely used in research on feature extraction and selection,[3] image processing,[4] speaker recognition,[5] and phone clustering.[6] A "Bhattacharyya space" has been proposed as a feature selection technique that can be applied to texture segmentation.[7]
References
[1] Coleman, Guy B.; Andrews, Harry C. (1979). "Image Segmentation by Clustering". Proc. IEEE 67 (5): 773–785.
[2] Comaniciu, D.; Ramesh, V.; Meer, P. (2000). "Real-Time Tracking of Non-Rigid Objects using Mean Shift" (http://coewww.rutgers.edu/riul/research/papers/pdf/trackmo.pdf). IEEE Conf. Computer Vision and Pattern Recognition (CVPR'00), Hilton Head Island, South Carolina, Vol. 2: 142–149. Best Paper Award.
[3] Choi, Euisun; Lee, Chulhee (2003). "Feature extraction based on the Bhattacharyya distance". Pattern Recognition 36 (8): 1703–1709.
[4] Goudail, François; Réfrégier, Philippe; Delyon, Guillaume (2004). "Bhattacharyya distance as a contrast parameter for statistical processing of noisy optical images". JOSA A 21 (7): 1231–1240.
[5] You, Chang Huai. "An SVM Kernel With GMM-Supervector Based on the Bhattacharyya Distance for Speaker Recognition". IEEE Signal Processing Letters 16 (1): 49–52.
[6] Mak, B. (1996). "Phone clustering using the Bhattacharyya distance". Proc. Fourth International Conference on Spoken Language Processing (ICSLP 96), Vol. 4: 2005–2008.
[7] Reyes-Aldasoro, C. C.; Bhalerao, A. (2006). "The Bhattacharyya space for feature selection and its application to texture segmentation". Pattern Recognition 39 (5): 812–826.
• Nielsen, F.; Boltz, S. (2010). "The Burbea-Rao and Bhattacharyya centroids". IEEE Transactions on Information Theory 57 (8): 5455–5466. arXiv:1004.5049. doi:10.1109/TIT.2011.2159046.
• Bhattacharyya, A. (1943). "On a measure of divergence between two statistical populations defined by their probability distributions". Bulletin of the Calcutta Mathematical Society 35: 99–109. MR 0010358.
• Kailath, T. (1967). "The Divergence and Bhattacharyya Distance Measures in Signal Selection". IEEE Transactions on Communication Technology 15 (1): 52–60. doi:10.1109/TCOM.1967.1089532.
• Djouadi, A.; Snorrason, O.; Garber, F. (1990). "The quality of training-sample estimates of the Bhattacharyya coefficient". IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1): 92–97. doi:10.1109/34.41388.
External links
• Hazewinkel, Michiel, ed. (2001), "Bhattacharyya distance" (http://www.encyclopediaofmath.org/index.php?title=p/b110490), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Hellinger distance
In probability and statistics, the Hellinger distance is used to quantify the similarity between two probability
distributions. It is a type of f-divergence. The Hellinger distance is defined in terms of the Hellinger integral, which
was introduced by Ernst Hellinger in 1909.
Measure theory
To define the Hellinger distance in terms of measure theory, let P and Q denote two probability measures that are absolutely continuous with respect to a third probability measure λ. The square of the Hellinger distance between P and Q is defined as the quantity

H^2(P, Q) = \frac{1}{2} \int \left( \sqrt{\frac{dP}{d\lambda}} - \sqrt{\frac{dQ}{d\lambda}} \right)^2 d\lambda.

Here, dP/dλ and dQ/dλ are the Radon–Nikodym derivatives of P and Q respectively. This definition does not depend on λ, so the Hellinger distance between P and Q does not change if λ is replaced with a different probability measure with respect to which both P and Q are absolutely continuous. For compactness, the above formula is often written as

H^2(P, Q) = \frac{1}{2} \int \left( \sqrt{dP} - \sqrt{dQ} \right)^2.
Probability theory using Lebesgue measure
To define the Hellinger distance in terms of elementary probability theory, we take λ to be Lebesgue measure, so that dP/dλ and dQ/dλ are simply probability density functions. If we denote the densities as f and g, respectively, the squared Hellinger distance can be expressed as a standard calculus integral

H^2(f, g) = \frac{1}{2} \int \left( \sqrt{f(x)} - \sqrt{g(x)} \right)^2 dx = 1 - \int \sqrt{f(x)\, g(x)}\, dx,

where the second form can be obtained by expanding the square and using the fact that the integral of a probability density over its domain must be one.

The Hellinger distance H(P, Q) satisfies the property (derivable from the Cauchy–Schwarz inequality)

0 \le H(P, Q) \le 1.
Discrete distributions
For two discrete probability distributions P = (p_1, \ldots, p_k) and Q = (q_1, \ldots, q_k), their Hellinger distance is defined as

H(P, Q) = \frac{1}{\sqrt{2}} \sqrt{ \sum_{i=1}^{k} \left( \sqrt{p_i} - \sqrt{q_i} \right)^2 },

which is directly related to the Euclidean norm of the difference of the square-root vectors, i.e.

H(P, Q) = \frac{1}{\sqrt{2}} \left\| \sqrt{P} - \sqrt{Q} \right\|_2.
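In code the discrete definition is essentially a one-liner; a minimal sketch in Python with NumPy (names illustrative):

import numpy as np

def hellinger_discrete(p, q):
    # H(P, Q) = (1/sqrt(2)) * || sqrt(P) - sqrt(Q) ||_2
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0))

# Identical distributions give 0; disjoint supports give the maximum value 1:
# hellinger_discrete([1, 0], [1, 0])  ->  0.0
# hellinger_discrete([1, 0], [0, 1])  ->  1.0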
Connection with the statistical distance
The Hellinger distance H(P, Q) and the total variation distance (or statistical distance) \delta(P, Q) are related as

H^2(P, Q) \le \delta(P, Q) \le \sqrt{2}\, H(P, Q).

These inequalities follow immediately from the inequalities between the 1-norm and the 2-norm.

The maximum distance 1 is achieved when P assigns probability zero to every set to which Q assigns a positive probability, and vice versa.
Sometimes the factor 1/2 in front of the integral is omitted, in which case the Hellinger distance ranges from zero to
the square root of two.
The Hellinger distance is related to the Bhattacharyya coefficient BC(P, Q), as it can be defined as

H(P, Q) = \sqrt{1 - BC(P, Q)}.
Hellinger distances are used in the theory of sequential and asymptotic statistics.[2]
Examples
The squared Hellinger distance between two normal distributions P \sim \mathcal{N}(\mu_1, \sigma_1^2) and Q \sim \mathcal{N}(\mu_2, \sigma_2^2) is

H^2(P, Q) = 1 - \sqrt{\frac{2\sigma_1 \sigma_2}{\sigma_1^2 + \sigma_2^2}}\; \exp\left( -\frac{(\mu_1 - \mu_2)^2}{4(\sigma_1^2 + \sigma_2^2)} \right).

The squared Hellinger distance between two exponential distributions P \sim \mathrm{Exp}(\alpha) and Q \sim \mathrm{Exp}(\beta) is

H^2(P, Q) = 1 - \frac{2\sqrt{\alpha\beta}}{\alpha + \beta}.

The squared Hellinger distance between two Weibull distributions P \sim \mathrm{W}(k, \alpha) and Q \sim \mathrm{W}(k, \beta) (where k is a common shape parameter and \alpha, \beta are the scale parameters respectively) is

H^2(P, Q) = 1 - \frac{2 (\alpha\beta)^{k/2}}{\alpha^k + \beta^k}.

The squared Hellinger distance between two Poisson distributions with rate parameters \alpha and \beta, so that P \sim \mathrm{Poisson}(\alpha) and Q \sim \mathrm{Poisson}(\beta), is

H^2(P, Q) = 1 - e^{-\frac{1}{2}(\sqrt{\alpha} - \sqrt{\beta})^2}.
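These closed forms can be sanity-checked numerically; for example, the Poisson expression can be compared against the discrete definition with the infinite sum truncated at a large count (a sketch; the truncation point is an arbitrary choice):

import math

def hellinger_sq_poisson(alpha, beta):
    # Closed form given above
    return 1.0 - math.exp(-0.5 * (math.sqrt(alpha) - math.sqrt(beta)) ** 2)

def hellinger_sq_poisson_numeric(alpha, beta, n_max=200):
    # 1 - sum_k sqrt(P(k) Q(k)), accumulating terms in log space for stability
    bc, log_term = 0.0, -(alpha + beta) / 2.0   # log of the k = 0 term
    for k in range(n_max + 1):
        bc += math.exp(log_term)
        # term ratio: term_(k+1) / term_k = sqrt(alpha * beta) / (k + 1)
        log_term += 0.5 * math.log(alpha * beta) - math.log(k + 1)
    return 1.0 - bc

# For moderate rates, e.g. alpha = 3.0 and beta = 5.0, the two agree closely.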
References
[1] Harsha's lecture notes on communication complexity (http://www.tcs.tifr.res.in/~prahladh/teaching/2011-12/comm/lectures/l12.pdf)
[2] Torgerson, Erik (1991). Comparison of Statistical Experiments, volume 36 of Encyclopedia of Mathematics. Cambridge University Press.
• Yang, Grace Lo; Le Cam, Lucien M. (2000). Asymptotics in Statistics: Some Basic Concepts. Berlin: Springer.
ISBN 0-387-95036-2.
• Vaart, A. W. van der. Asymptotic Statistics (Cambridge Series in Statistical and Probabilistic Mathematics).
Cambridge, UK: Cambridge University Press. ISBN 0-521-78450-6.
• Pollard, David E. (2002). A user's guide to measure theoretic probability. Cambridge, UK: Cambridge University
Press. ISBN 0-521-00289-3.
Mahalanobis distance
The Mahalanobis distance is a descriptive statistic that provides a relative measure of a data point's distance (residual) from a common point. It is a unitless measure introduced by P. C. Mahalanobis in 1936. The Mahalanobis distance is used to identify and gauge the similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant; in other words, it can serve as a multivariate effect size.
The Mahalanobis distance of a multivariate vector x = (x_1, x_2, \ldots, x_N)^T from a group of values with mean \mu = (\mu_1, \mu_2, \ldots, \mu_N)^T and covariance matrix S is defined as

D_M(x) = \sqrt{ (x - \mu)^T S^{-1} (x - \mu) }.

Mahalanobis distance (or "generalized squared interpoint distance" for its squared value[2]) can also be defined as a dissimilarity measure between two random vectors x and y of the same distribution with the covariance matrix S:

d(x, y) = \sqrt{ (x - y)^T S^{-1} (x - y) }.

If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If the covariance matrix is diagonal, then the resulting distance measure is called a normalized Euclidean distance:

d(x, y) = \sqrt{ \sum_{i=1}^{N} \frac{(x_i - y_i)^2}{s_i^2} },

where s_i is the standard deviation of the x_i and y_i over the sample set.
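In code, computing the distance amounts to one linear solve; a minimal NumPy sketch (names illustrative), with μ and S estimated from a sample of the group:

import numpy as np

def mahalanobis_distance(x, data):
    # data: (n_samples, n_features) array of points from the group
    mu = data.mean(axis=0)
    S = np.cov(data, rowvar=False)            # sample covariance matrix
    diff = x - mu
    # D_M(x) = sqrt((x - mu)^T S^{-1} (x - mu)); a solve avoids forming S^{-1}
    return float(np.sqrt(diff @ np.linalg.solve(S, diff)))

When S is the identity matrix this reduces to the ordinary Euclidean distance from the mean, matching the special cases noted above.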
Intuitive explanation
Consider the problem of estimating the probability that a test point in N-dimensional Euclidean space belongs to a
set, where we are given sample points that definitely belong to that set. Our first step would be to find the average or
center of mass of the sample points. Intuitively, the closer the point in question is to this center of mass, the more
likely it is to belong to the set.
However, we also need to know if the set is spread out over a large range or a small range, so that we can decide
whether a given distance from the center is noteworthy or not. The simplistic approach is to estimate the standard
deviation of the distances of the sample points from the center of mass. If the distance between the test point and the
center of mass is less than one standard deviation, then we might conclude that it is highly probable that the test
point belongs to the set. The further away it is, the more likely that the test point should not be classified as
belonging to the set.
This intuitive approach can be made quantitative by defining the normalized distance between the test point and the set to be (x - \mu)/\sigma. By plugging this into the normal distribution, we can derive the probability of the test point belonging to the set.
The drawback of the above approach was that we assumed that the sample points are distributed about the center of
mass in a spherical manner. Were the distribution to be decidedly non-spherical, for instance ellipsoidal, then we
would expect the probability of the test point belonging to the set to depend not only on the distance from the center
of mass, but also on the direction. In those directions where the ellipsoid has a short axis the test point must be
closer, while in those where the axis is long the test point can be further away from the center.
Putting this on a mathematical basis, the ellipsoid that best represents the set's probability distribution can be
estimated by building the covariance matrix of the samples. The Mahalanobis distance is simply the distance of the
test point from the center of mass divided by the width of the ellipsoid in the direction of the test point.
In general, given a normal (Gaussian) random variable X with variance S = 1 and mean \mu = 0, any other normal random variable R (with mean \mu_1 and variance S_1) can be defined in terms of X by the equation R = \mu_1 + \sqrt{S_1}\, X. Conversely, to recover a normalized random variable from any normal random variable, one can typically solve for X = (R - \mu_1)/\sqrt{S_1}. If we square both sides, and take the square root, we will get an equation for a metric that looks a lot like the Mahalanobis distance:

D = \sqrt{X^2} = \sqrt{\frac{(R - \mu_1)^2}{S_1}}.
The resulting magnitude is always non-negative and varies with the distance of the data from the mean, attributes
that are convenient when trying to define a model for the data.
Relationship to leverage
Mahalanobis distance is closely related to the leverage statistic, h, but has a different scale:[3]
Squared Mahalanobis distance = (N − 1)(h − 1/N).
Applications
Mahalanobis's discovery was prompted by the problem of identifying the similarities of skulls based on measurements in 1927.[4]
Mahalanobis distance is widely used in cluster analysis and classification techniques. It is closely related to
Hotelling's T-square distribution used for multivariate statistical testing and Fisher's Linear Discriminant Analysis
that is used for supervised classification.[5]
In order to use the Mahalanobis distance to classify a test point as belonging to one of N classes, one first estimates
the covariance matrix of each class, usually based on samples known to belong to each class. Then, given a test
sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to that class
for which the Mahalanobis distance is minimal.
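That decision rule is only a few lines of code; the following Python sketch (illustrative names, per-class covariances estimated from known samples) assigns a test point to the class with the minimal Mahalanobis distance:

import numpy as np

def classify_min_mahalanobis(x, class_samples):
    # class_samples: one (n_i, d) array of known members per class
    best_label, best_dist = None, np.inf
    for label, samples in enumerate(class_samples):
        mu = samples.mean(axis=0)
        S = np.cov(samples, rowvar=False)     # covariance estimated per class
        diff = x - mu
        dist = np.sqrt(diff @ np.linalg.solve(S, diff))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label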
Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear
regression models. A point that has a greater Mahalanobis distance from the rest of the sample population of points
is said to have higher leverage since it has a greater influence on the slope or coefficients of the regression equation.
Mahalanobis distance is also used to determine multivariate outliers. Regression techniques can be used to determine
if a specific case within a sample population is an outlier via the combination of two or more variable scores. A point
can be a multivariate outlier even if it is not a univariate outlier on any variable (consider a probability density
similar to a hollow cube in three dimensions, for example).
Mahalanobis distance is also widely used in biology, for example in predicting protein structural class, membrane protein type,[6] and protein subcellular localization,[7] as well as many other attributes of proteins, through their pseudo amino acid composition (Chou's PseAAC).
References
[1] De Maesschalck, Roy; Jouan-Rimbaud, Delphine; Massart, Désiré L. (2000). "The Mahalanobis distance". Chemometrics and Intelligent Laboratory Systems 50: 1–18.
[2] Gnanadesikan, Ramanathan; Kettenring, John R. (1972). "Robust estimates, residuals, and outlier detection with multiresponse data". Biometrics 28: 81–124.
[3] Schinka, John A.; Velicer, Wayne F.; Weiner, Irving B. (2003). Handbook of Psychology: Research Methods in Psychology. John Wiley and Sons.
[4] Mahalanobis, Prasanta Chandra (1927). "Analysis of race mixture in Bengal". Journal and Proceedings of the Asiatic Society of Bengal.
[5] McLachlan, Geoffrey J. (1992). Discriminant Analysis and Statistical Pattern Recognition. Wiley Interscience. p. 12. ISBN 0-471-69115-1.
[6] Chou, Kuo-Chen; Elrod, David W. (1999). "Prediction of membrane protein types and subcellular locations". Proteins: Structure, Function, and Genetics 34: 137–153.
[7] Chou, Kuo-Chen; Elrod, David W. (1999). "Protein subcellular location prediction". Protein Engineering 12: 107–118.
External links
• Hazewinkel, Michiel, ed. (2001), "Mahalanobis distance" (http://www.encyclopediaofmath.org/index.php?title=p/m062130), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• Mahalanobis distance tutorial – interactive online program and spreadsheet computation
• Mahalanobis distance (Nov-17-2006) – overview of Mahalanobis distance, including MATLAB code
• What is Mahalanobis distance? – intuitive, illustrated explanation, from Rick Wicklin
Similarity learning
Similarity learning is a type of supervised machine learning task in artificial intelligence. It is closely related to regression and classification, but the goal is to learn from examples a function that measures how similar or related two objects are. It has applications in ranking and in recommendation systems.
Learning setup
There are three common setups for similarity and metric distance learning.
• Regression similarity learning. In this setup, pairs of objects (x_i^1, x_i^2) are given together with a measure of their similarity y_i \in R. The goal is to learn a function f that approximates f(x_i^1, x_i^2) \approx y_i for every new labeled triplet example (x_i^1, x_i^2, y_i). This is typically achieved by minimizing a regularized loss.
• Classification similarity learning. Given are pairs of similar objects (x_i, x_i^+) and non-similar objects (x_i, x_i^-). An equivalent formulation is that every pair (x_i^1, x_i^2) is given together with a binary label y_i \in \{0, 1\} that determines if the two objects are similar or not. The goal is again to learn a classifier that can decide if a new pair of objects is similar or not.
• Ranking similarity learning. Given are triplets of objects (x_i, x_i^+, x_i^-) whose relative similarity obeys a predefined order: x_i is known to be more similar to x_i^+ than to x_i^-. The goal is to learn a function f such that, for any new triplet of objects (x, x^+, x^-), it obeys f(x, x^+) > f(x, x^-). This setup assumes a weaker form of supervision than in regression, because instead of providing an exact measure of similarity, one only has to provide the relative order of similarity. For this reason, ranking-based similarity learning is easier to apply in real large-scale applications.
A common approach for learning similarity is to model the similarity function as a bilinear form. For example, in the case of ranking similarity learning, one aims to learn a matrix W that parametrizes the similarity function f_W(x, z) = x^T W z.
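As a sketch, evaluating such a bilinear similarity is a single quadratic form (illustrative Python; how W is actually learned depends on the chosen algorithm and is not shown here):

import numpy as np

def bilinear_similarity(W, x, z):
    # f_W(x, z) = x^T W z; W need not be symmetric or positive definite
    return float(x @ W @ z)

# Ranking supervision constrains only the order of scores: a learned W should
# satisfy bilinear_similarity(W, x, x_plus) > bilinear_similarity(W, x, x_minus).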
Metric learning
Similarity learning is closely related to distance metric learning. Metric learning is the task of learning a distance
function over objects. A metric or distance function has to obey three axioms: non-negativity, symmetry and
subadditivity / triangle inequality.
When the objects x_i are vectors in R^d, then any positive definite matrix W defines a distance function of the space of x through the bilinear form D_W(x_1, x_2)^2 = (x_1 - x_2)^T W (x_1 - x_2). Some well-known approaches for metric learning include large margin nearest neighbor (LMNN) and information-theoretic metric learning (ITML).

In statistics, the covariance matrix of the data is sometimes used to define a distance metric called the Mahalanobis distance.
Applications
Similarity learning is used in information retrieval for learning to rank and in recommendation systems. Many machine learning approaches also rely on some metric. This includes unsupervised learning, such as clustering, which groups together close or similar objects. It also includes supervised approaches, like the k-nearest neighbors algorithm, which rely on the labels of nearby objects to decide on the label of a new object. Metric learning has been proposed as a preprocessing step for many of these approaches.
Further reading
For further information on this topic, see the survey on metric and similarity learning by Bellet et al.
Yamartino method
The Yamartino method (introduced by Robert J. Yamartino in 1984) is an algorithm for calculating an
approximation to the standard deviation σθ of wind direction θ during a single pass through the incoming data. The
standard deviation of wind direction is a measure of lateral turbulence, and is used in a method for estimating the
Pasquill stability category.
The typical method for calculating standard deviation requires two passes through the list of values. The first pass
determines the average of those values; the second pass determines the sum of the squares of the differences between
the values and the average. This double-pass method requires access to all values. A single-pass method can be used
for normal data but is unsuitable for angular data such as wind direction where the 0°/360° (or +180°/-180°)
discontinuity forces special consideration. For example, the directions 1°, 0°, and 359° (or -1°) should not average to
the direction 120°!
The Yamartino method solves both problems. The United States Environmental Protection Agency (EPA) has
chosen it as the preferred way to compute the standard deviation of wind direction.[1] A further discussion of the
Yamartino method, along with other methods of estimating the standard deviation of wind direction can be found in
Farrugia & Micallef [2].
Over the time interval to be averaged across, n measurements of wind direction (θ) will be made and two totals are accumulated without storage of the n individual values. At the end of the interval the calculations are as follows, with the average values of sin θ and cos θ defined as

s_a = \frac{1}{n} \sum_{i=1}^{n} \sin \theta_i, \qquad c_a = \frac{1}{n} \sum_{i=1}^{n} \cos \theta_i.

Then the average wind direction is given via the four-quadrant arctan(x, y) function as

\theta_a = \arctan(s_a, c_a).

From twenty different functions for \sigma_\theta using variables obtained in a single pass of the wind direction data, Yamartino found the best function to be

\sigma_\theta = \arcsin(\varepsilon) \left[ 1 + \left( \frac{2}{\sqrt{3}} - 1 \right) \varepsilon^3 \right], \qquad \text{where } \varepsilon = \sqrt{1 - (s_a^2 + c_a^2)}.
The key here is to remember that \sin^2\theta + \cos^2\theta = 1, so that, for example, with a constant wind direction at any value of θ, the value of \varepsilon will be zero, leading to a zero value for the standard deviation.
The use of \varepsilon alone produces a result close to that produced with a double-pass when the dispersion of angles is small (not crossing the discontinuity), but by construction \varepsilon is always between 0 and 1. Taking the arcsine then produces the double-pass answer when there are just two equally common angles: in the extreme case of an oscillating wind blowing backwards and forwards, it produces a result of \pi/2 radians, i.e. a right angle. The final factor adjusts this figure upwards so that it produces the double-pass result of \pi/\sqrt{3} \approx 1.8138 radians for an almost uniform distribution of angles across all directions, while making minimal change to results for small dispersions.
The theoretical maximum error against the correct double-pass σθ is therefore about 15% with an oscillating wind.
Comparisons against Monte Carlo generated cases indicate that Yamartino's algorithm is within 2% for more
realistic distributions.
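A single-pass implementation of the method is short; the following Python sketch (function name illustrative) accumulates the two totals and applies Yamartino's formula at the end:

import math

def yamartino(directions_deg):
    # One pass over the data: only two running totals are kept
    n, sum_sin, sum_cos = 0, 0.0, 0.0
    for d in directions_deg:
        theta = math.radians(d)
        sum_sin += math.sin(theta)
        sum_cos += math.cos(theta)
        n += 1
    s_a, c_a = sum_sin / n, sum_cos / n
    # Average direction via the four-quadrant arctangent
    mean_dir = math.degrees(math.atan2(s_a, c_a)) % 360.0
    # eps = sqrt(1 - (s_a^2 + c_a^2)), clamped against rounding error
    eps = math.sqrt(max(0.0, 1.0 - (s_a * s_a + c_a * c_a)))
    sigma = math.asin(eps) * (1.0 + (2.0 / math.sqrt(3.0) - 1.0) * eps ** 3)
    return mean_dir, math.degrees(sigma)

# The directions 1, 0 and 359 degrees average to about 0 degrees, not 120:
# yamartino([1.0, 0.0, 359.0])  ->  (~0.0, small sigma)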
A variant might be to weight each wind direction observation by the wind speed at that time.
References
[1] Meteorological Monitoring Guidance for Regulatory Modeling Applications, section 6.2.1 (http://www.epa.gov/scram001/guidance/met/mmgrma.pdf)
[2] http://journals.cambridge.org/production/action/cjoGetFulltext?fulltextid=408748