Properties of the Distribution

Calculation of Mean, Variance, and Moment Generating Function

To calculate the mean of the χ²( ν ) distribution, we can simply integrate x·ChisquarePDF(nu, x) over x, per the definition of mathematical expectation.

> restart;

> with(plots, display):

> f:=x->ChisquarePDF(nu,x);

f := x -> ChisquarePDF(nu, x)

> EX:=int(x*f(x),x=0..infinity);

EX := nu
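As a check on Maple's answer, the integral can also be evaluated by hand; the brief sketch below uses only the Gamma-function identity ∫₀^∞ x^(a-1) e^(-x/2) dx = 2^a Γ(a).

\[
E(X) \;=\; \int_0^\infty x\cdot\frac{x^{\nu/2-1}e^{-x/2}}{2^{\nu/2}\,\Gamma(\nu/2)}\,dx
\;=\; \frac{2^{\nu/2+1}\,\Gamma(\nu/2+1)}{2^{\nu/2}\,\Gamma(\nu/2)}
\;=\; 2\cdot\frac{\nu}{2} \;=\; \nu .
\]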

To calculate Var( X ), we will employ the formula Var( X ) = E( X² ) - [E( X )]².

> E_X_SQ:=int((x^2)*f(x),x=0..infinity);

E_X_SQ := nu^2 + 2 nu

> VarX:=simplify(E_X_SQ-EX^2);

VarX := 2 nu
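The second moment follows from the same Gamma-function identity, and the variance then drops out:

\[
E(X^2) \;=\; \frac{1}{2^{\nu/2}\Gamma(\nu/2)}\int_0^\infty x^{\nu/2+1}e^{-x/2}\,dx
\;=\; \frac{2^{\nu/2+2}\,\Gamma(\nu/2+2)}{2^{\nu/2}\,\Gamma(\nu/2)}
\;=\; \nu(\nu+2),
\qquad
\mathrm{Var}(X) \;=\; \nu(\nu+2) - \nu^2 \;=\; 2\nu .
\]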

The Moment Generating Function (MGF) can be calculated easily in Maple. Recall that the moment generating function of a random variable X is defined as M ( t ) = E( e^(tX) ), provided this expectation exists. In taking the integral of e^(tx)·ChisquarePDF(nu, x) over the range (0, ∞), the argument of the exponential term is ( t - 1/2 ) x, and for the resulting integral to be convergent we must have t - 1/2 < 0. Equivalently, we require t < 1/2.

> assume(t<1/2);

> assume(nu>0);

> int(exp(t*x)*f(x),x=0..infinity);

(1 - 2 t)^(-nu/2)

So the moment generating function for a χ²( ν ) random variable is given by

M ( t ) = (1 - 2 t)^(-ν/2) , for t < 1/2.
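The closed form can also be obtained by hand: absorbing e^(tx) into the exponential of the density and using the identity ∫₀^∞ x^(a-1) e^(-bx) dx = Γ(a)/b^a (valid for b > 0, which is exactly the condition t < 1/2) gives

\[
M(t) \;=\; \frac{1}{2^{\nu/2}\Gamma(\nu/2)}\int_0^\infty x^{\nu/2-1}e^{-(\frac{1}{2}-t)x}\,dx
\;=\; \frac{\Gamma(\nu/2)}{2^{\nu/2}\Gamma(\nu/2)\,(\tfrac{1}{2}-t)^{\nu/2}}
\;=\; (1-2t)^{-\nu/2} .
\]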

We will now define M ( t ) as a function of t .

> restart:

> with(plots):

> M:=t->(1-2*t)^(-nu/2);

M := t -> (1 - 2*t)^(-nu/2)
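Since M( t ) = E( e^(tX) ) = Σ E( X^r ) t^r / r!, the moments can also be read off from a series expansion of M about t = 0. A quick sketch in Maple, assuming M has been defined as above:

> # Expand the MGF about t = 0; the coefficient of t^r is E(X^r)/r!

> series(M(t), t = 0, 3);

The coefficient of t in this expansion is ν = E( X ) and the coefficient of t² is ν( ν + 2 )/2 = E( X² )/2, consistent with the values found by direct integration.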

The moment generating function provides an alternative way to calculate the mean and variance, by way of the following formula:

M^(r)(0) = E( X^r ) , where M^(r)( t ) denotes the r th derivative of M ( t ) with respect to t .

This formula holds as long as M ( t ) exists in an open interval containing zero. See, for example, Mathematical Statistics and Data Analysis by John A. Rice for more on the moment generating function. We now use this formula to find the mean of the Chi-square.

> M_p:=diff(M(t),t);

M_p := nu (1 - 2 t)^(-nu/2 - 1)

> simplify(M_p);

nu / (1 - 2 t)^(nu/2 + 1)

> simplify(subs(t=0,M_p));

nu

And therefore, if X is a χ²( ν ) variable, then E( X ) = ν , which agrees with what we found earlier. We now turn to the second moment.

> M_pp:=diff(M_p,t);

M_pp := nu (nu + 2) (1 - 2 t)^(-nu/2 - 2)

> simplify(subs(t=0,M_pp));

nu^2 + 2 nu

Therefore, E( X² ) = ν² + 2ν , which again agrees with what was calculated previously. The variance is now quickly calculated as

Var( X ) = E( X² ) - [E( X )]²

= ν² + 2ν - ν²

= 2ν .
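The two derivative evaluations can also be collapsed into a single command; a small sketch, again assuming M is the function defined above:

> # Var(X) = M''(0) - [M'(0)]^2, computed directly from the MGF

> simplify(subs(t = 0, diff(M(t), t, t)) - subs(t = 0, diff(M(t), t))^2);

which should again return 2 nu.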

>

Special Properties

You probably noticed in Probability Distribution Function and Shape that the probability density function for a χ²( ν ) distribution looked close to Normal (i.e., bell-shaped) when ν was large. Let's look again at the shape of the χ²( ν ) distribution as ν varies from 1 to 25. A Normal curve with the same mean ( ν ) and variance ( 2ν ) as each χ²( ν ) will be overlaid for comparison.

> for nu from 1 to 25 do

> H[nu]:=plot(ChisquarePDF(nu,x),x=1..50,color=blue):

> N[nu]:=plot(NormalPDF(nu,2*nu,x),x = 1..50):

> num:=convert(nu,string):

> tracker[nu]:=textplot([30,0.2,`nu is `.num],color=blue):

> P[nu]:=display({H[nu],N[nu],tracker[nu]}):

> od:

> display([seq(P[nu], nu=1..25)], insequence=true,title="Chisquare (blue) to Normal (red)");

[Maple Plot]

>

Indeed, the χ²( ν ) distribution approaches a Normal( ν , 2ν ) distribution as ν grows large. This can also be seen by realizing that if Y is a χ²( ν ) variable, then Y has the same distribution as X_1 + ... + X_ν , where the X_i 's are independent and identically distributed χ²(1) variables for i = 1, ..., ν . By the Central Limit Theorem, we know that the distribution of X_1 + ... + X_ν approaches normality as ν becomes large, and therefore, so does the distribution of a χ²( ν ) variable.
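To state the limit precisely: each X_i ~ χ²(1) has mean 1 and variance 2, so the Central Limit Theorem applied to the sum gives

\[
\frac{Y - \nu}{\sqrt{2\nu}} \;=\; \frac{(X_1 + \cdots + X_\nu) - \nu}{\sqrt{2\nu}} \;\xrightarrow{\;d\;}\; N(0,1)
\quad \text{as } \nu \to \infty ,
\]

which is why the χ²( ν ) density in the animation is well approximated by the Normal( ν , 2ν ) density once ν is moderately large.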