4/6/2009
- A Markov chain describes a process of moving between different states (positions, locations, situations, settings, etc.)
- The probability of moving to a future state depends only on the current state (a special property of this process called the Markov property).
- The probability of moving from one state to another is called a transition probability.
- Transition probabilities are summarized in a transition matrix.
- An absorbing state is a special state: the probability of remaining in an absorbing state is 1.
- The transition matrix is used to calculate the probability of reaching states after a specific number of moves. E.g., the transition matrix specifies the probabilities of reaching various states in one move. Squaring the transition matrix provides the probabilities of reaching states in two moves.
- To find the expected number of times the process visits each transient state before being absorbed, remove the row and column associated with the absorbing state and use some matrix algebra. Let Q denote the transition matrix without the absorbing state's row and column. The matrix of expected values, E, equals the inverse of (I - Q), where I is the identity matrix.
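The matrix calculations above can be sketched in Python with NumPy. The 3-state chain below is a made-up example (not from the text): states 0 and 1 are transient and state 2 is absorbing.

```python
import numpy as np

# Hypothetical transition matrix P: entry P[i, j] is the probability
# of moving from state i to state j in one move. Each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],  # absorbing state: remains in state 2 with probability 1
])

# Squaring the transition matrix gives the two-move probabilities.
P2 = np.linalg.matrix_power(P, 2)

# Drop the absorbing state's row and column to get Q, then the
# matrix of expected values is E = (I - Q)^(-1).
Q = P[:2, :2]
E = np.linalg.inv(np.eye(2) - Q)
# E[i, j] = expected number of visits to transient state j,
# starting from transient state i, before absorption.
```

These are the same multiplication and inversion operations demonstrated in Minitab below, just expressed in code.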
A half inning of baseball as a Markov Chain
- 25 possible states - see Table 9.4 on p. 249
- Batting play (hit, out, or walk)
- Runs scored on a play = (R_before + O_before + 1) - (R_after + O_after), where R = runners on base and O = outs; the +1 counts the batter.
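The runs-scored formula can be checked with a quick sketch (the function name and example plays are mine, not from the text):

```python
def runs_scored(runners_before, outs_before, runners_after, outs_after):
    """Runs on one batting play: (runners + outs + batter) before the play,
    minus (runners + outs) after it. Everyone unaccounted for scored."""
    return (runners_before + outs_before + 1) - (runners_after + outs_after)

# Grand slam with one out: 3 runners plus the batter score.
print(runs_scored(3, 1, 0, 1))  # -> 4

# Single moving a runner from first to second: nobody scores.
print(runs_scored(1, 0, 2, 0))  # -> 0

# Strikeout with the bases empty: one more out, no runs.
print(runs_scored(0, 0, 0, 1))  # -> 0
```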
Demonstration
- Entering matrices into Minitab
- Editor > Enable Commands and Calc > Matrices > Read
- Data > Copy > Columns to Matrix
- Matrix Multiplication
- Inverting Matrices
In Class Exercises
Leadoff Exercise, 9.1, 9.2, 9.3
Please complete your reading of Chapter 9 (case studies 9-3 through 9-5) for class on Wednesday