Markov Analysis

3/23/2018
Chapter 4
Markov Analysis
Learning Objectives
After completing this chapter, students will be able to:
1. Determine future states or conditions by using Markov analysis.
2. Compute long-term or steady-state conditions by using only the matrix of transition probabilities.
3. Understand the use of absorbing state analysis in predicting future conditions.
Chapter Outline
4.1 Introduction
4.2 States and State Probabilities
4.3 Matrix of Transition Probabilities
4.4 Predicting Future Market Shares
4.5 Markov Analysis of Machine Operations
4.6 Equilibrium Conditions
4.7 Absorbing States and the Fundamental
Matrix: Accounts Receivable Application
Introduction
Markov analysis is a technique that deals with
the probabilities of future occurrences by
analyzing presently known probabilities.
It has numerous applications in business.
Markov analysis makes the assumption that
the system starts in an initial state or
condition.
The probabilities of changing from one state to another are collected in a matrix of transition probabilities.
Solving Markov problems requires basic
matrix manipulation.
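The matrix manipulation involved is a vector-matrix multiplication: the state probabilities for the next period are the current vector multiplied by the matrix of transition probabilities. A minimal sketch in plain Python, with made-up numbers for a two-state system (the vector and matrix here are illustrative, not from the text):

```python
# One step of Markov analysis: pi(next) = pi(current) x P,
# where P is a matrix of transition probabilities (each row sums to 1).
# The numbers below are hypothetical, chosen only to show the mechanics.

def next_state(pi, P):
    """Multiply a state-probability vector by a transition matrix."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi_1 = [1.0, 0.0]        # period 1: the system is known to be in state 1
P = [[0.8, 0.2],         # from state 1: stay with 0.8, move with 0.2
     [0.3, 0.7]]         # from state 2: move with 0.3, stay with 0.7

pi_2 = next_state(pi_1, P)
print(pi_2)  # [0.8, 0.2]
```

Repeating the multiplication period after period predicts any future state, which is the basis of the market-share and equilibrium sections later in the chapter.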
This discussion will be limited to Markov
problems that follow four assumptions:
1. There are a limited or finite number of
possible states.
2. The probability of changing states
remains the same over time.
3. We can predict any future state from the
previous state and the matrix of transition
probabilities.
4. The size and makeup of the system do
not change during the analysis.
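These assumptions translate into concrete checks on the matrix of transition probabilities: it must be finite and square (a fixed set of states), its entries must be nonnegative, and each row must sum to 1 so that every row is a probability distribution. A hypothetical validation sketch:

```python
# Hypothetical sketch: checking that a proposed transition matrix is
# consistent with the four assumptions -- a finite, square matrix whose
# rows are each a probability distribution summing to 1.

def is_valid_transition_matrix(P, tol=1e-9):
    n = len(P)                                # finite number of states
    if any(len(row) != n for row in P):       # square: state set is fixed
        return False
    if any(p < 0 for row in P for p in row):  # probabilities are nonnegative
        return False
    return all(abs(sum(row) - 1.0) < tol for row in P)

print(is_valid_transition_matrix([[0.8, 0.2], [0.3, 0.7]]))  # True
print(is_valid_transition_matrix([[0.8, 0.3], [0.3, 0.7]]))  # False: row sums to 1.1
```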
States and State Probabilities
States are used to identify all possible conditions
of a process or system.
It is possible to identify specific states for many
processes or systems.
In Markov analysis we assume that the states are
both collectively exhaustive and mutually
exclusive.
After the states have been identified, the next step is to determine the probability that the system is in each state.
The information is placed into a vector of state probabilities:

π(i) = vector of state probabilities for period i
     = (π1, π2, π3, …, πn)

where:
n = number of states
π1, π2, …, πn = probability of being in state 1, state 2, …, state n
In some cases it is possible to know with complete
certainty in which state an item is located:
The vector of states can then be represented as:

π(1) = (1, 0)

where
π(1) = vector of states for the machine in period 1
π1 = 1 = probability of being in the first state
π2 = 0 = probability of being in the second state
The Vector of State Probabilities for
Three Grocery Stores Example
Consider the states for people in a small town with three grocery stores. A total of 100,000 people shop at the three stores during any given month:
40,000 may be shopping at American Food Store, state 1.
30,000 may be shopping at Food Mart, state 2.
30,000 may be shopping at Atlas Foods, state 3.
Probabilities are as follows:
State 1 American Food Store: 40,000/100,000 = 0.40 = 40%
State 2 Food Mart: 30,000/100,000 = 0.30 = 30%
State 3 Atlas Foods: 30,000/100,000 = 0.30 = 30%
These probabilities can be placed in the following vector of state probabilities:

π(1) = (0.4, 0.3, 0.3)

where
π(1) = vector of state probabilities for the three grocery stores for period 1
π1 = 0.4 = probability that a person will shop at American Food Store, state 1
π2 = 0.3 = probability that a person will shop at Food Mart, state 2
π3 = 0.3 = probability that a person will shop at Atlas Foods, state 3
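The grocery-store vector above can be computed directly from the shopper counts. A minimal sketch (store names and counts follow the text; no transition data has been introduced at this point):

```python
# The vector of state probabilities pi(1) for the three grocery stores,
# computed from the 100,000 monthly shoppers given in the example.

shoppers = {"American Food Store": 40_000,   # state 1
            "Food Mart": 30_000,             # state 2
            "Atlas Foods": 30_000}           # state 3

total = sum(shoppers.values())               # 100,000 shoppers in total
pi_1 = tuple(count / total for count in shoppers.values())
print(pi_1)  # (0.4, 0.3, 0.3)
```

Because the states are collectively exhaustive and mutually exclusive, the entries of π(1) always sum to 1.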