Properties of the Distribution

Calculation of Mean, Variance, and Moment Generating Function

Calculating E( X ), the expectation or mean of the Negative Binomial ( r , p ) distribution.

> restart;

> with(plots):

> f:=x->NegBinomialPDF(r,p,x);

f := x -> NegBinomialPDF(r, p, x)

(Here f(x) = binomial(x-1, r-1) p^r (1-p)^(x-r) for x = r, r+1, ... : the probability that the r th success occurs on trial x.)

> EX:=simplify(sum(x*f(x),x=r..infinity));

EX := r/p

Calculating Var( X ), the variance of the Negative Binomial ( r , p ) distribution.

We will employ the formula: Var( X ) = E( X^2 ) - [E( X )]^2

> E_X_SQ:=simplify(sum(x^2*f(x),x=r..infinity));

E_X_SQ := r (r + 1 - p)/p^2

> VarX:=simplify(E_X_SQ-EX^2);

VarX := r (1 - p)/p^2
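These closed forms can be double-checked numerically outside of Maple. The short Python sketch below (the helper nb_pmf, the parameter values, and the truncation point 500 are my own choices, not part of the worksheet) sums the trial-count negative binomial pmf directly and compares the results with r/p and r(1-p)/p^2:

```python
from math import comb

# Trial-count negative binomial pmf, matching NegBinomialPDF above:
# f(x) = C(x-1, r-1) * p^r * (1-p)^(x-r) for x = r, r+1, ...
def nb_pmf(x, r, p):
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

r, p = 3, 0.4
xs = range(r, 500)  # truncate the infinite sum; the tail is negligible here
mean = sum(x * nb_pmf(x, r, p) for x in xs)
second = sum(x**2 * nb_pmf(x, r, p) for x in xs)
var = second - mean**2

print(mean, r / p)              # both ≈ 7.5
print(var, r * (1 - p) / p**2)  # both ≈ 11.25
```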

The Moment Generating Function (MGF) can be easily calculated via Maple.

Recall the moment generating function of a random variable X is defined

as M ( t ) = E( e^(tX) ), provided this expectation exists.

> simplify(sum(exp(t*x)*f(x),x=r..infinity));

(e^t p/(1 - e^t + e^t p))^r

So we have the moment generating function for a Negative Binomial ( r , p ) random variable

is given by

M ( t ) = (e^t p/(1 - e^t + e^t p))^r

We will define M ( t ) as a function of t .

> M:=t->(exp(t)*p/(1-exp(t)+exp(t)*p))^r;

M := t -> (exp(t) p/(1 - exp(t) + exp(t) p))^r
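As a sanity check on this closed form, one can compare M(t) computed directly from the series E(e^(tX)) against the formula, for a value of t where the MGF exists (t < -ln(1-p)). A small Python sketch, where the parameter values and truncation point are illustrative assumptions:

```python
from math import comb, exp

# pmf of the trial-count negative binomial, as used throughout the worksheet
def nb_pmf(x, r, p):
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

# closed-form MGF from the worksheet: (e^t p / (1 - e^t + e^t p))^r
def M(t, r, p):
    return (exp(t) * p / (1 - exp(t) + exp(t) * p))**r

r, p, t = 3, 0.4, 0.2  # need t < -ln(1-p) ≈ 0.51 for the series to converge
series = sum(exp(t * x) * nb_pmf(x, r, p) for x in range(r, 800))
print(series, M(t, r, p))  # the two values agree
```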

The moment generating function provides an alternative way to calculate the

mean and variance by way of the formula

M^(k)(0) = E( X^k ) , where M^(k)( t ) denotes the k th derivative of M ( t ) with respect to t .

This formula holds as long as M ( t ) exists in an open interval containing zero.

See, for example, Mathematical Statistics and Data Analysis by John A. Rice for

more on the moment generating function.

> M_p:=diff(M(t),t);

M_p := r (e^t p/(1 - e^t + e^t p))^(r-1) (e^t p/(1 - e^t + e^t p) + e^t p (e^t - e^t p)/(1 - e^t + e^t p)^2)

> simplify(M_p);

r (e^t p)^r/(1 - e^t + e^t p)^(r+1)

> simplify(subs(t=0,M_p));

r/p

And therefore if X is a Negative Binomial ( r , p ) variable, then E( X ) = r / p , which agrees with

what we found earlier. Now turning to the second moment.

> M_pp:=diff(M_p,t);

M_pp := r (e^t p)^r (r (1 - e^t + e^t p) + (r + 1)(e^t - e^t p))/(1 - e^t + e^t p)^(r+2)

> simplify(subs(t=0,M_pp));

r (r + 1 - p)/p^2


Therefore E( X^2 ) = r (r + 1 - p)/p^2 , which again is in agreement with the value

calculated previously. The variance is now quickly calculated as

Var( X ) = E( X^2 ) - [E( X )]^2

= r (r + 1 - p)/p^2 - r^2/p^2

= r (1 - p)/p^2 .
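The derivative computations can likewise be spot-checked numerically: central finite differences of M at t = 0 should reproduce E( X ) and E( X^2 ). A Python sketch, where the step size h and the parameter values are my own choices:

```python
from math import exp

# closed-form MGF of the Negative Binomial(r, p), as derived above
def M(t, r, p):
    return (exp(t) * p / (1 - exp(t) + exp(t) * p))**r

r, p, h = 3, 0.4, 1e-4
# central-difference approximations to M'(0) and M''(0)
Mp0 = (M(h, r, p) - M(-h, r, p)) / (2 * h)
Mpp0 = (M(h, r, p) - 2 * M(0, r, p) + M(-h, r, p)) / h**2

print(Mp0, r / p)                    # ≈ 7.5, i.e. E(X)
print(Mpp0, r * (r + 1 - p) / p**2)  # ≈ 67.5, i.e. E(X^2)
```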

Special Properties

You probably noticed that the probability histogram for a Negative Binomial ( r,p )

distribution looked close to Normal (i.e. bell-shaped) when r was large. Take a look

again at the shape of the Negative Binomial ( r , p ) distribution,

as r varies from 1 to 20, assuming a fixed value of p . A normal curve with the same

mean and variance as the Negative Binomial will be overlaid for comparison.


> p:=0.5:

> for r from 1 to 20 do

> num:=convert(evalf(r), string):

> tracker[r]:=textplot([60,0.25,`r is `.num],color=blue):

> H[r]:=ProbHist(NegBinomialPDF(r,p,x),1..80,80):

> N[r]:=plot(NormalPDF(r/p,r*(1-p)/p^2,x),x = 1..80):

> P[r]:=display({H[r],N[r],tracker[r]}):

> od:

> display([seq(P[r], r=1..20)], insequence=true, title="Normal Approx. to the Negative Binomial. p=0.5 and r is increasing");

[Maple Plot: animation of the Negative Binomial( r , 0.5) probability histogram with the matching normal curve overlaid, r = 1..20]


Indeed the Negative Binomial ( r,p ) distribution approaches a Normal( r/p , r (1 - p)/p^2 )

distribution as r grows large. This can be easily seen by realizing that if X is a

Negative Binomial ( r,p ) variable, then X has the same distribution as

G1 + G2 + ... + Gr ,

where the Gi are independent and identically distributed Geometric( p ) random variables.

G1 can be interpreted as the number of Bernoulli trials required to observe the first

success, G2 the number of Bernoulli trials following the first success required to observe

the second success, and so on. By the central limit theorem, we know that the distribution of

G1 + G2 + ... + Gr approaches normality as r grows large, and therefore so does the

distribution of a Negative Binomial ( r,p ) variable.
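The representation as a sum of independent geometrics is easy to check by simulation. The Python sketch below (the sample size and seed are arbitrary choices) draws sums of r Geometric( p ) variables and confirms that the mean and variance match those of the Negative Binomial( r , p ):

```python
import random

# Gi ~ Geometric(p): number of Bernoulli(p) trials up to and including
# the first success
def geometric(p, rng):
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
r, p, N = 20, 0.5, 50_000
samples = [sum(geometric(p, rng) for _ in range(r)) for _ in range(N)]

mean = sum(samples) / N
var = sum((s - mean)**2 for s in samples) / N
print(mean, r / p)              # both ≈ 40
print(var, r * (1 - p) / p**2)  # both ≈ 40
```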