Properties of the Distribution
Calculation of Mean, Variance, and Moment Generating Function
To calculate the mean of the Chi-square(nu) distribution, we can simply integrate x*ChisquarePDF(nu,x) over (0, infinity), per the definition of mathematical expectation.
> restart;
> with(plots, display):
> f:=x->ChisquarePDF(nu,x);
> EX:=int(x*f(x),x=0..infinity);
Calculating Var(X), we will employ the formula: Var(X) = E(X^2) - [E(X)]^2.
> E_X_SQ:=int((x^2)*f(x),x=0..infinity);
> VarX:=simplify(E_X_SQ-EX^2);
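As a numerical cross-check outside Maple, we can approximate these moments by quadrature. The following is a Python sketch using only the standard library; the density formula in `chisq_pdf` is the standard Chi-square(nu) pdf, and the integration limits and grid size are illustrative choices.

```python
import math

def chisq_pdf(nu, x):
    # Standard Chi-square(nu) density for x > 0
    return x**(nu/2 - 1) * math.exp(-x/2) / (2**(nu/2) * math.gamma(nu/2))

def moment(nu, r, upper=400.0, n=200_000):
    # Midpoint-rule approximation of E(X^r) = int_0^upper x^r * f(x) dx
    h = upper / n
    return sum(((i + 0.5) * h)**r * chisq_pdf(nu, (i + 0.5) * h)
               for i in range(n)) * h

nu = 5
EX = moment(nu, 1)            # should come out close to nu
VarX = moment(nu, 2) - EX**2  # should come out close to 2*nu
print(EX, VarX)
```

For nu = 5 the approximations land near 5 and 10, matching the symbolic results EX = nu and VarX = 2*nu from Maple.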
The Moment Generating Function (MGF) can be easily calculated via Maple. Recall the moment generating function of a random variable X is defined as M(t) = E(e^(t*X)), provided this expectation exists. In taking the integral of exp(t*x)*ChisquarePDF(nu,x) over the range (0, infinity), the argument of the exponential term is (t - 1/2)*x, and for the resulting integral to be convergent, we must have t - 1/2 < 0. Equivalently, we require t < 1/2.
> assume(t<1/2);
> assume(nu>0);
> int(exp(t*x)*f(x),x=0..infinity);
So we have that the moment generating function for a Chi-square(nu) random variable is given by

M(t) = (1 - 2*t)^(-nu/2), for t < 1/2.

We will now define M(t) as a function of t.
> restart:
> with(plots):
> M:=t->(1-2*t)^(-nu/2);
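As a quick numeric sanity check of this closed form (a Python sketch, not part of the worksheet; the truncation point and grid size are illustrative), the integral E(e^(t*X)) can be approximated and compared against (1 - 2*t)^(-nu/2) for a value of t below 1/2:

```python
import math

def chisq_pdf(nu, x):
    # Standard Chi-square(nu) density for x > 0
    return x**(nu/2 - 1) * math.exp(-x/2) / (2**(nu/2) * math.gamma(nu/2))

def mgf_numeric(nu, t, upper=600.0, n=300_000):
    # Midpoint-rule approximation of E(exp(t*X));
    # the integral converges only for t < 1/2
    h = upper / n
    return sum(math.exp(t * (i + 0.5) * h) * chisq_pdf(nu, (i + 0.5) * h)
               for i in range(n)) * h

nu, t = 4, 0.2
print(mgf_numeric(nu, t), (1 - 2*t)**(-nu/2))  # the two values agree closely
```

For t at or above 1/2 the integrand no longer decays, which is exactly the convergence restriction imposed by the `assume(t<1/2)` command above.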
The moment generating function provides an alternative way to calculate the mean and variance by way of the formula

M^(r)(0) = E(X^r), where M^(r)(t) denotes the r-th derivative of M(t) with respect to t.

This formula holds as long as M(t) exists in an open interval containing zero. See, for example, Mathematical Statistics and Data Analysis by John A. Rice for more on the moment generating function. We now use this formula to find the mean of the Chi-square.
> M_p:=diff(M(t),t);
> simplify(M_p);
> simplify(subs(t=0,M_p));
And therefore, if X is a Chi-square(nu) variable, then E(X) = nu, which agrees with what we found earlier. We now turn to the second moment.
> M_pp:=diff(M_p,t);
> simplify(subs(t=0,M_pp));
Therefore, E(X^2) = nu^2 + 2*nu, which again agrees with what was calculated previously. The variance is now quickly calculated as

Var(X) = E(X^2) - [E(X)]^2
       = (nu^2 + 2*nu) - nu^2
       = 2*nu.
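These derivative computations can be mimicked numerically with central finite differences on M(t) (a Python sketch, not part of the worksheet; the step size h and the choice nu = 7 are illustrative):

```python
def M(nu, t):
    # MGF of a Chi-square(nu) random variable, valid for t < 1/2
    return (1 - 2*t)**(-nu/2)

def deriv(f, t, h=1e-4):
    # Central-difference approximation to f'(t)
    return (f(t + h) - f(t - h)) / (2*h)

nu = 7
# First derivative at 0 approximates E(X) = nu
M1 = deriv(lambda t: M(nu, t), 0.0)
# Second derivative at 0 approximates E(X^2) = nu^2 + 2*nu
M2 = deriv(lambda t: deriv(lambda s: M(nu, s), t), 0.0)
print(M1, M2)
```

For nu = 7 this returns values near 7 and 63, matching E(X) = nu and E(X^2) = nu^2 + 2*nu.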
Special Properties
You probably noticed in Probability Distribution Function and Shape that the probability density function for a Chi-square(nu) distribution looked close to Normal (i.e. bell-shaped) when nu was large. Let's look again at the shape of the Chi-square(nu) distribution as nu varies from 1 to 25. A Normal curve with the same mean and variance as each will be overlaid for comparison.
> for nu from 1 to 25 do
> H[nu]:=plot(ChisquarePDF(nu,x),x=1..50,color=blue):
> N[nu]:=plot(NormalPDF(nu,2*nu,x),x = 1..50):
> num:=convert(nu,string):
> tracker[nu]:=textplot([30,0.2,`nu is `.num],color=blue):
> P[nu]:=display({H[nu],N[nu],tracker[nu]}):
> od:
> display([seq(P[nu], nu=1..25)], insequence=true,title="Chisquare (blue) to Normal (red)");
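One way to quantify what the animation shows (a Python sketch using only the standard library; the grid over (0, 100] and the choices nu = 5 and nu = 25 are illustrative) is to measure the largest pointwise gap between the Chi-square(nu) density and the matching Normal(nu, 2*nu) density:

```python
import math

def chisq_pdf(nu, x):
    # Standard Chi-square(nu) density for x > 0
    return x**(nu/2 - 1) * math.exp(-x/2) / (2**(nu/2) * math.gamma(nu/2))

def normal_pdf(mu, var, x):
    # Normal density with mean mu and variance var
    return math.exp(-(x - mu)**2 / (2*var)) / math.sqrt(2 * math.pi * var)

def max_gap(nu):
    # Largest pointwise difference between the two densities on a grid over (0, 100]
    xs = [0.01 * i for i in range(1, 10001)]
    return max(abs(chisq_pdf(nu, x) - normal_pdf(nu, 2*nu, x)) for x in xs)

print(max_gap(5), max_gap(25))  # the gap shrinks as nu grows
```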
Indeed, the Chi-square(nu) distribution approaches a Normal(nu, 2*nu) distribution as nu grows large. This can also be seen by realizing that if Y is a Chi-square(nu) variable, then Y has the same distribution as X_1 + ... + X_nu, where the X_i's are independent and identically distributed Chi-square(1) for i = 1, ..., nu. By the Central Limit Theorem, we know that the distribution of X_1 + ... + X_nu approaches normality as nu becomes large, and therefore, so does the distribution of a Chi-square(nu) variable.
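The sum-of-squares representation of a Chi-square variable can be checked by simulation, using the fact that the square of a standard Normal is Chi-square(1). This is a Python sketch using the standard library's random.gauss; the seed and sample sizes are illustrative choices.

```python
import random

random.seed(1)
nu, n_samples = 30, 20_000

# Each draw of Y is a sum of nu independent squared standard normals,
# i.e. a sum of nu i.i.d. Chi-square(1) variables, hence Chi-square(nu).
samples = [sum(random.gauss(0, 1)**2 for _ in range(nu))
           for _ in range(n_samples)]

mean = sum(samples) / n_samples                              # near nu
var = sum((y - mean)**2 for y in samples) / (n_samples - 1)  # near 2*nu
print(mean, var)
```

The sample mean and variance come out near nu = 30 and 2*nu = 60, and a histogram of the samples would show the bell shape seen in the animation above.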