Markov - Mathematics


Markov Chains

Brian Carrico

The Mathematical Markovs

  

Vladimir Andreyevich Markov

(1871-1897)
- Andrey Markov’s younger brother
- With Andrey, developed the Markov brothers’ inequality

Andrey Andreyevich Markov Jr

(1903-1979)
- Andrey Markov’s son
- One of the key founders of the Russian school of constructive mathematics and logic
- Also made contributions to differential equations, topology, mathematical logic, and the foundations of mathematics

Which brings us to:

Andrey Andreyevich Markov

Андрей Андреевич Марков
June 14, 1856 – July 20, 1922
- Born in Ryazan (roughly 170 miles southeast of Moscow)
- Began grammar school in 1866
- Started at St Petersburg University in 1874
- Defended his Master’s thesis in 1880
- Doctoral thesis in 1885
- Excommunicated from the Russian Orthodox Church

Precursors to Markov Chains

- Bernoulli Series
- Brownian Motion
- Random Walks

Bernoulli Series

- Jakob Bernoulli (1654-1705)
- A sequence of independent random variables X_1, X_2, X_3, ... such that:
  - For every i, X_i is either 0 or 1
  - For every i, P(X_i = 1) is the same
- Markov’s first discussion of chains, in a 1906 paper, considers only chains with two states
- Closely related to random walks (a simulation sketch follows)
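A minimal Python sketch of such a sequence; the success probability p = 0.5 and the trial count are arbitrary illustrative choices, not from the slides:

```python
import random

def bernoulli_sequence(p, n, seed=None):
    """Simulate n independent Bernoulli(p) trials: each X_i is 1 with
    probability p and 0 otherwise, with the same p for every i."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# Ten fair-coin trials (p = 0.5 chosen only for illustration)
print(bernoulli_sequence(0.5, 10, seed=42))
```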

Brownian Motion

- Described as early as 60 BC by the Roman poet Lucretius
- Formalized and officially discovered by botanist Robert Brown in 1827
- The seemingly random movement of particles suspended in a fluid

Random Walks

- Formalized in 1905 by Karl Pearson
- The formalization of a trajectory that consists of taking successive random steps
- Random walk analysis has been applied to computer science, physics, ecology, economics, and a number of other fields as a fundamental model for random processes in time
- Turns out to be a specific Markov chain (see the sketch below)
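A minimal Python sketch of a simple symmetric random walk; the ±1 step rule is the simplest case, chosen here for illustration:

```python
import random

def random_walk(steps, seed=None):
    """Simple symmetric random walk on the integers, starting at 0.
    Each step is +1 or -1 with equal probability, so the next position
    depends only on the current one -- the Markov property."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

print(random_walk(10, seed=1))
```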

So what is a Markov Chain?

- A random process where all information about the future is contained in the present state
- Or, less formally: a process where future states depend only on the present state and are independent of past states
- Mathematically: P(X_{n+1} = x | X_1 = x_1, ..., X_n = x_n) = P(X_{n+1} = x | X_n = x_n)

A simulation sketch follows.
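To make the definition concrete, here is a minimal Python sketch of a finite Markov chain driven by a transition matrix. The two states and their probabilities are invented for illustration; any row-stochastic matrix would do.

```python
import random

# Invented two-state chain for illustration: each row gives P(next | current).
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(start, steps, seed=None):
    """Generate a trajectory; the next state is drawn using only the
    current state, never the earlier history."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        r, cumulative = rng.random(), 0.0
        for nxt, prob in TRANSITIONS[state].items():
            cumulative += prob
            if r < cumulative:
                state = nxt
                break
        path.append(state)
    return path

print(simulate("sunny", 10, seed=7))
```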

Applications of Markov Chains

- Science
- Statistics
- Economics and Finance
- Gambling and games of chance
- Baseball
- Monte Carlo

Science

- Physics
  - Thermodynamic systems generally have time-invariant dynamics
  - All relevant information is in the state description
- Chemistry
  - An algorithm based on a Markov chain was used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products

Economics and Finance

- Markov chains are used to model a variety of phenomena, including asset prices and market crashes
- The regime-switching model of James D. Hamilton (a toy sketch follows)
- The Markov switching multifractal asset pricing model
- Dynamic macroeconomics
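In the spirit of a regime-switching model, here is a toy Python sketch: returns are drawn from a "calm" or "volatile" regime, and the regime itself follows a two-state Markov chain. All numbers are invented for illustration; Hamilton's actual model also involves estimating the regimes from data.

```python
import random

REGIMES = {"calm": (0.0005, 0.005), "volatile": (-0.001, 0.03)}  # (mean, std dev), invented
SWITCH = {"calm": 0.02, "volatile": 0.10}  # chance of leaving each regime per day, invented

def simulate_returns(days, seed=None):
    """Daily returns whose distribution depends on a hidden Markov regime."""
    rng = random.Random(seed)
    regime, returns = "calm", []
    for _ in range(days):
        if rng.random() < SWITCH[regime]:
            regime = "volatile" if regime == "calm" else "calm"
        mu, sigma = REGIMES[regime]
        returns.append(rng.gauss(mu, sigma))
    return returns

print(simulate_returns(5, seed=3))
```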

Gambling and Games of Chance

- In most card games, each hand is independent
- Board games like Snakes and Ladders (a sketch follows)
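A toy Python sketch of Snakes and Ladders as a Markov chain; the 10-square board with one ladder and one snake is invented (real boards are just bigger). The next square depends only on the current square and the die roll:

```python
import random

JUMPS = {3: 7, 9: 4}  # invented board: square 3 is a ladder to 7, square 9 a snake to 4
GOAL = 10

def play(seed=None):
    """Play one game; a roll that would overshoot the goal is ignored."""
    rng = random.Random(seed)
    square, turns = 0, 0
    while square < GOAL:
        roll = rng.randint(1, 6)
        if square + roll <= GOAL:
            square = JUMPS.get(square + roll, square + roll)
        turns += 1
    return turns

games = [play(seed=i) for i in range(10_000)]
print("average turns to finish:", sum(games) / len(games))
```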

Baseball

- Use of Markov chain models in baseball analysis began in 1960
- Each at bat can be modeled as a Markov chain

Monte Carlo

- A Markov chain with a large number of steps forms the basis of Markov chain Monte Carlo simulation: the chain is constructed so that its long-run distribution is the distribution being sampled (a minimal sketch follows)
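As one standard construction (the slide does not name a specific algorithm), here is a minimal Metropolis sampler in Python. The target, a standard normal, is chosen only for illustration:

```python
import math
import random

def metropolis(log_density, start, steps, step_size=1.0, seed=None):
    """Build a Markov chain whose long-run distribution matches the target.
    Proposals are symmetric, so the acceptance rule only needs the density
    ratio: accept with probability min(1, target(y) / target(x))."""
    rng = random.Random(seed)
    x, samples = start, []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Standard normal target (log-density up to an additive constant)
samples = metropolis(lambda x: -0.5 * x * x, start=0.0, steps=50_000, seed=0)
print("sample mean (should be near 0):", sum(samples) / len(samples))
```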

Statistics

- Many important statistics measure independent trials, which can be represented by Markov chains (independence is a special case of the Markov property)

An Example from Statistics

A thief is in a dungeon with three identical doors. Once the thief chooses a door and passes through it, the door locks behind him. The three doors lead to:
- A 6-hour tunnel leading to freedom
- A 3-hour tunnel that returns to the dungeon
- A 9-hour tunnel that returns to the dungeon
Each door is chosen with equal probability. When he is dropped back into the dungeon by the second or third door, the next choice of doors is memoryless: he isn’t able to mark the doors in any way. What is his expected time of escape?

Note: conditioning on the first door chosen, E(X) = x_1*p(x_1) + x_2*p(x_2) + x_3*p(x_3), where x_i is the expected escape time given door i is chosen first and each p(x_i) = 1/3.

Example (cont)

We plug in the values for x_i and p(x_i) to get:
E(X) = 6*(1/3) + x_2*(1/3) + x_3*(1/3)
But what are x_2 and x_3?

Because the decision is memoryless, the expected remaining time after returning from tunnel 2 or 3 equals the original expected time E(X). Adding the time spent in the tunnel itself gives x_2 = 3 + E(X) and x_3 = 9 + E(X).

So,
E(X) = 6*(1/3) + (3 + E(X))*(1/3) + (9 + E(X))*(1/3)
Now we’re back in Algebra 1: E(X) = 6 + (2/3)*E(X), so (1/3)*E(X) = 6 and E(X) = 18 hours. A quick simulation check follows.
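A short Python simulation confirms the answer; the door and tunnel layout is taken straight from the problem statement:

```python
import random

def escape_time(seed=None):
    """One thief: doors are chosen uniformly and memorylessly; the 6-hour
    tunnel leads to freedom, the 3- and 9-hour tunnels return to the dungeon."""
    rng = random.Random(seed)
    elapsed = 0
    while True:
        tunnel = rng.choice((6, 3, 9))
        elapsed += tunnel
        if tunnel == 6:
            return elapsed

trials = [escape_time(seed=i) for i in range(100_000)]
print("estimated E(X):", sum(trials) / len(trials))  # approaches 18
```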

Sources

- Wikipedia
- Basharin, Gely P., et al. "The Life and Work of A.A. Markov." http://decision.csl.illinois.edu/~meyn/pages/Markov-Work-and-life.pdf
- Leemis (2009), Probability