In probability theory, the Bapat–Beg theorem gives the joint probability distribution of order statistics of independent but not necessarily identically distributed random variables in terms of the cumulative distribution functions of the random variables. Ravindra Bapat and Beg published the theorem in 1989,[1] though they did not offer a proof. A simple proof was offered by Hande in 1994.[2]
Often, all elements of the sample are obtained from the same population and thus have the same probability distribution. The Bapat–Beg theorem describes the order statistics when each element of the sample is obtained from a different statistical population and therefore has its own probability distribution.[1]
Statement of theorem
Let $X_1, X_2, \ldots, X_n$ be independent real-valued random variables with cumulative distribution functions $F_1(x), F_2(x), \ldots, F_n(x)$ respectively. Write $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ for the order statistics. Then the joint probability distribution of the $n_1, n_2, \ldots, n_k$ order statistics (with $n_1 < n_2 < \cdots < n_k$ and $x_1 < x_2 < \cdots < x_k$) is

$$F_{X_{(n_1)},\ldots,X_{(n_k)}}(x_1,\ldots,x_k) = \Pr\left(X_{(n_1)} \le x_1 \land X_{(n_2)} \le x_2 \land \cdots \land X_{(n_k)} \le x_k\right) = \sum_{i_k=n_k}^{n} \cdots \sum_{i_2=n_2}^{i_3} \sum_{i_1=n_1}^{i_2} \frac{P_{i_1,\ldots,i_k}(x_1,\ldots,x_k)}{i_1!\,(i_2-i_1)! \cdots (n-i_k)!},$$

where

$$P_{i_1,\ldots,i_k}(x_1,\ldots,x_k) = \operatorname{per} \begin{bmatrix} F_1(x_1) \cdots F_1(x_1) & F_1(x_2)-F_1(x_1) \cdots F_1(x_2)-F_1(x_1) & \cdots & 1-F_1(x_k) \cdots 1-F_1(x_k) \\ F_2(x_1) \cdots F_2(x_1) & F_2(x_2)-F_2(x_1) \cdots F_2(x_2)-F_2(x_1) & \cdots & 1-F_2(x_k) \cdots 1-F_2(x_k) \\ \vdots & \vdots & & \vdots \\ \underbrace{F_n(x_1) \cdots F_n(x_1)}_{i_1} & \underbrace{F_n(x_2)-F_n(x_1) \cdots F_n(x_2)-F_n(x_1)}_{i_2-i_1} & \cdots & \underbrace{1-F_n(x_k) \cdots 1-F_n(x_k)}_{n-i_k} \end{bmatrix}$$

is the permanent of the given block matrix. (The figures under the braces show the number of columns in each block.)[1]
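As a concrete illustration, the formula for a single order statistic ($k = 1$) can be transcribed directly: $\Pr(X_{(r)} \le x)$ is a sum of permanents of matrices whose first $i$ columns hold $F_j(x)$ and whose remaining $n - i$ columns hold $1 - F_j(x)$. The sketch below is not from the original paper, just a direct evaluation of the formula (practical only for small $n$, since the naive permanent expansion costs $O(n \cdot n!)$); the function names are illustrative:

```python
from itertools import permutations
from math import factorial, prod

def permanent(M):
    """Permanent of a square matrix, summing over all permutations (O(n * n!))."""
    n = len(M)
    return sum(prod(M[r][c] for r, c in enumerate(p))
               for p in permutations(range(n)))

def order_stat_cdf(Fs, r, x):
    """P(X_(r) <= x) for independent X_j with CDFs Fs, via Bapat-Beg with k = 1."""
    n = len(Fs)
    total = 0.0
    for i in range(r, n + 1):
        # Block matrix: i columns of F_j(x), then n - i columns of 1 - F_j(x).
        M = [[F(x)] * i + [1 - F(x)] * (n - i) for F in Fs]
        total += permanent(M) / (factorial(i) * factorial(n - i))
    return total

# Two non-identical uniforms: X1 ~ U[0,1], X2 ~ U[0,2], evaluated at x = 0.5.
F1 = lambda x: min(max(x, 0.0), 1.0)
F2 = lambda x: min(max(x / 2, 0.0), 1.0)
print(order_stat_cdf([F1, F2], 1, 0.5))  # min: 1 - (1 - 0.5)(1 - 0.25) = 0.625
print(order_stat_cdf([F1, F2], 2, 0.5))  # max: 0.5 * 0.25 = 0.125
```

The two printed values agree with the elementary identities $\Pr(\min \le x) = 1 - \prod_j (1 - F_j(x))$ and $\Pr(\max \le x) = \prod_j F_j(x)$, which the theorem contains as special cases.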
Independent identically distributed case
In the case when the variables $X_1, X_2, \ldots, X_n$ are independent and identically distributed with common cumulative distribution function $F_i = F$ for all $i$, the theorem reduces to

$$F_{X_{(n_1)},\ldots,X_{(n_k)}}(x_1,\ldots,x_k) = \sum_{i_k=n_k}^{n} \cdots \sum_{i_2=n_2}^{i_3} \sum_{i_1=n_1}^{i_2} n!\, \frac{F(x_1)^{i_1}}{i_1!}\, \frac{(1-F(x_k))^{n-i_k}}{(n-i_k)!} \prod_{j=2}^{k} \frac{\left[F(x_j)-F(x_{j-1})\right]^{i_j-i_{j-1}}}{(i_j-i_{j-1})!}.$$
Remarks
- No assumption of continuity of the cumulative distribution functions is needed.[2]
- If the inequalities x1 < x2 < ... < xk are not imposed, some of the inequalities "may be redundant and the probability can be evaluated after making the necessary reduction."[1]
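The i.i.d. reduction can be checked numerically by enumerating the admissible index vectors $n_j \le i_j \le i_{j+1}$, $i_k \le n$. The sketch below (a naive enumeration; `iid_joint_cdf` is an illustrative name, not from the literature) implements the sum above and sanity-checks the $k = 1$ case against the classical binomial formula $\Pr(X_{(r)} \le x) = \sum_{i=r}^{n} \binom{n}{i} F(x)^i (1-F(x))^{n-i}$:

```python
from itertools import product
from math import comb, factorial

def iid_joint_cdf(F, n, ns, xs):
    """Joint CDF of order statistics X_(n_1), ..., X_(n_k) of n i.i.d. draws
    with common CDF F, via the i.i.d. reduction of the Bapat-Beg theorem."""
    k = len(ns)
    Fx = [F(x) for x in xs]
    total = 0.0
    for iv in product(range(1, n + 1), repeat=k):
        # Keep index vectors with n_j <= i_j and i_1 <= i_2 <= ... <= i_k <= n.
        if any(iv[j] < ns[j] for j in range(k)):
            continue
        if any(iv[j] > iv[j + 1] for j in range(k - 1)):
            continue
        term = factorial(n) * Fx[0] ** iv[0] / factorial(iv[0])
        term *= (1 - Fx[-1]) ** (n - iv[-1]) / factorial(n - iv[-1])
        for j in range(1, k):
            d = iv[j] - iv[j - 1]
            term *= (Fx[j] - Fx[j - 1]) ** d / factorial(d)
        total += term
    return total

# k = 1 sanity check against the binomial formula, for n = 3 uniform draws.
U = lambda x: x  # CDF of Uniform[0, 1]
assert abs(iid_joint_cdf(U, 3, [2], [0.5])
           - sum(comb(3, i) * 0.5 ** 3 for i in range(2, 4))) < 1e-12
```

For a joint case, $\Pr(X_{(1)} \le 0.25,\ X_{(3)} \le 0.75)$ with $n = 3$ uniform draws equals $0.75^3 - 0.5^3 = 0.296875$ (all three below $0.75$, minus all three in $(0.25, 0.75]$), which the enumeration reproduces.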
Complexity
Glueck et al. note that the Bapat–Beg "formula is computationally intractable, because it involves an exponential number of permanents of the size of the number of random variables".[3] However, when the random variables have only two possible distributions, the complexity can be reduced to $O(m^{2k})$.[3] Thus, in the case of two populations, the complexity is polynomial in $m$ for any fixed number of statistics $k$.
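The individual permanents need not be expanded naively: Ryser's formula (a standard inclusion–exclusion identity, unrelated to the two-distribution reduction of Glueck et al.) evaluates an $n \times n$ permanent in $O(2^n n)$ arithmetic operations rather than $O(n \cdot n!)$, though this of course does not remove the exponential cost described above. A sketch:

```python
from itertools import combinations
from math import prod

def ryser_permanent(M):
    """Permanent via Ryser's inclusion-exclusion formula:
    per(M) = sum over nonempty column subsets S of
             (-1)^(n - |S|) * prod_i sum_{j in S} M[i][j]."""
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** (n - r) * prod(sum(row[c] for c in cols)
                                            for row in M)
    return total

print(ryser_permanent([[1, 2], [3, 4]]))  # per = 1*4 + 2*3 = 10
```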
References