Probability theory


Random Vector
Martina Litschmannová
[email protected]
K210
Random Vectors
 A k-dimensional random vector is a function X = (X1, …, Xk)^T that associates a vector of real numbers with each outcome of a random experiment; each component Xi is a random variable.
For example:
 A semiconductor chip is divided into M regions. For the random experiment of finding the number of defects and their locations, let Ni denote the number of defects in the i-th region. Then N = (N1, …, NM)^T is a discrete random vector.
 In a random experiment of selecting a student's name, let Hi = height of the i-th student in inches and Wi = weight of the i-th student in pounds. Then S = (Hi, Wi)^T is a continuous random vector.
2-dimensional Random Vectors
 We're going to focus on 2-dimensional distributions (i.e. the random vector consists of only two random variables), but higher dimensions (more than two variables) are also possible.
Joint Probability Distribution
 The joint distribution for a random vector (X, Y)^T defines the probability of events described in terms of both X and Y.

Joint cumulative distribution function
The joint cumulative distribution function for X = (X, Y)^T is given by

F(x, y) = P(X < x ∧ Y < y) = P(X < x; Y < y).

Properties of the joint CDF:
1. ∀(x, y) ∈ ℝ²: 0 ≤ F(x, y) ≤ 1,
2. lim_{x→−∞} F(x, y) = lim_{y→−∞} F(x, y) = 0,
3. lim_{(x,y)→(∞,∞)} F(x, y) = 1,
4. F(x, y) is nondecreasing in each variable,
5. F(x, y) is continuous from the left in each variable,
6. P(a ≤ X < b, c ≤ Y < d) = F(b, d) − F(a, d) − F(b, c) + F(a, c) (a numerical sketch of this property follows the list).
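A quick numerical illustration of property 6. This is a minimal sketch, not from the slides: it assumes X and Y are independent Uniform(0, 1) variables so that F(x, y) has a simple closed form, and the helper name rect_prob is hypothetical.

```python
import numpy as np

def F(x, y):
    # Joint CDF of two independent Uniform(0, 1) variables (assumed example).
    return np.clip(x, 0, 1) * np.clip(y, 0, 1)

def rect_prob(a, b, c, d):
    # Property 6: P(a <= X < b, c <= Y < d) by inclusion-exclusion on the CDF.
    return F(b, d) - F(a, d) - F(b, c) + F(a, c)

print(rect_prob(0.2, 0.5, 0.1, 0.4))  # 0.3 * 0.3 = 0.09
```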
Discrete Joint Probability Distributions
The probability function, also known as the probability mass function (p.m.f.), for a joint probability distribution is defined such that:

p(x_i, y_j) = P(X = x_i ∧ Y = y_j) = P(X = x_i; Y = y_j).

Properties of the p.m.f.:
1. p(x_i, y_j) > 0 only for a finite or countable set of values (x_i, y_j),
2. 0 ≤ p(x_i, y_j) ≤ 1,
3. Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} p(x_i, y_j) = 1, n_1 ≥ 1, n_2 ≥ 1,
4. p(x_i, y_j) = P(X = x_i | Y = y_j) · P(Y = y_j) = P(Y = y_j | X = x_i) · P(X = x_i),
5. if X, Y are independent, p(x_i, y_j) = P(X = x_i) · P(Y = y_j).
Table of joint probabilities

X\Y | y1        | y2        | ... | yn2
x1  | p(x1, y1) | p(x1, y2) | ... | p(x1, yn2)
x2  | p(x2, y1) | p(x2, y2) | ... | p(x2, yn2)
... | ...       | ...       | ... | ...
xn1 | p(xn1, y1)| p(xn1, y2)| ... | p(xn1, yn2)
1. A random experiment consists of tossing a coin (X) and rolling a die (Y). Find the probability mass function for the joint probability distribution of the random vector (X, Y).

The coin and the die are independent, and each of the 2 · 6 = 12 equally likely outcomes has probability 1/12:

X/Y | 0 (head) | 1 (tail)
1   | 1/12     | 1/12
2   | 1/12     | 1/12
3   | 1/12     | 1/12
4   | 1/12     | 1/12
5   | 1/12     | 1/12
6   | 1/12     | 1/12

The sum of all entries is 1 (control cell).
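A minimal sketch of this table in code, assuming NumPy is available; exact fractions are used so the control cell comes out exactly 1.

```python
import numpy as np
from fractions import Fraction

# Joint p.m.f. of (X, Y): rows = die value Y in {1,...,6}, columns = coin X in {0, 1}.
p = np.full((6, 2), Fraction(1, 12), dtype=object)

print(p.sum())  # control cell: the probabilities must sum to 1
```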
2. The probability mass function for the joint probability distribution of a random vector (X, Y) is given as:

X/Y | -1   | 0    | 1
-2  | 0.13 | 0    | 0.23
0   | 0.11 | 0.11 | 0.01
1   | 0.07 | 0    | 0
2   | 0    | 0.33 | 0.01

(the entries sum to 1)

Find:
a) p(0; 1)
b) p(1; 0)
c) p(3; 1)
d) P(X < 1.3; Y < −0.6)
e) P(X < 1.3; Y > −0.6)
f) F(0.7; 1.3)
(a numerical check follows the list)
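A sketch of how parts a)–f) can be checked numerically, assuming NumPy and the deck's X/Y convention (X values in columns, Y values in rows); the helper names pmf and P are hypothetical.

```python
import numpy as np

# Joint p.m.f. from the table: columns are X in {-1, 0, 1}, rows are Y in {-2, 0, 1, 2}.
xs = np.array([-1, 0, 1])
ys = np.array([-2, 0, 1, 2])
p = np.array([[0.13, 0.00, 0.23],
              [0.11, 0.11, 0.01],
              [0.07, 0.00, 0.00],
              [0.00, 0.33, 0.01]])

def pmf(x, y):
    # p(x; y) is zero for any pair outside the table (e.g. p(3; 1)).
    i, j = np.where(ys == y)[0], np.where(xs == x)[0]
    return p[i[0], j[0]] if i.size and j.size else 0.0

def P(x_cond, y_cond):
    # P(event) = sum of p(x, y) over all pairs satisfying both conditions.
    X, Y = np.meshgrid(xs, ys)
    return p[x_cond(X) & y_cond(Y)].sum()

print(pmf(0, 1))                                 # a) 0
print(pmf(1, 0))                                 # b) 0.01
print(pmf(3, 1))                                 # c) 0
print(P(lambda x: x < 1.3, lambda y: y < -0.6))  # d) 0.36
print(P(lambda x: x < 1.3, lambda y: y > -0.6))  # e) 0.64
print(P(lambda x: x < 0.7, lambda y: y < 1.3))   # f) F(0.7; 1.3) = 0.42
```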
Continuous Joint Probability Distributions

F(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(s, t) ds dt,

where f(x, y) is the joint probability density function.

Properties of the joint probability density function:
1. f(x, y) ≥ 0,
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1,
3. if ∂²F(x, y)/(∂x ∂y) exists, then f(x, y) = ∂²F(x, y)/(∂x ∂y),
4. P(a ≤ X < b, c ≤ Y < d) = ∫_{c}^{d} ∫_{a}^{b} f(x, y) dx dy.
3. Find the constant c so that the function

f(x, y) = c(x + y) for (x, y) ∈ ⟨0; 1⟩ × ⟨0; 1⟩,
f(x, y) = 0 for (x, y) ∉ ⟨0; 1⟩ × ⟨0; 1⟩

can be a joint probability density function of a random vector (X, Y)^T.

For f(x, y) to be a joint probability density function of a random vector (X, Y)^T, it has to satisfy the condition

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.

c ∫_{0}^{1} ∫_{0}^{1} (x + y) dx dy = c ∫_{0}^{1} [x²/2 + xy]_{0}^{1} dy = c ∫_{0}^{1} (1/2 + y) dy = c [y/2 + y²/2]_{0}^{1} = c ⇒ c = 1
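The same normalization condition can be verified symbolically; a minimal sketch assuming SymPy is available:

```python
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)

# Normalization condition: the density must integrate to 1 over <0;1> x <0;1>.
total = sp.integrate(c * (x + y), (x, 0, 1), (y, 0, 1))
print(sp.solve(sp.Eq(total, 1), c))  # [1], i.e. c = 1
```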
Marginal probability distributions
Obtained by summing or integrating the joint probability distribution
over the values of the other random variable.
 Discrete Random Vector
P_X(x_i) = Σ_{y_j} p(x_i, y_j), i ≥ 1,
P_Y(y_j) = Σ_{x_i} p(x_i, y_j), j ≥ 1.

 Continuous Random Vector
f_X(x) = ∫_{−∞}^{∞} f(x, y) dy, x ∈ ℝ,
f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx, y ∈ ℝ.
4. A random experiment consists of tossing a coin (X) and rolling a die (Y). The probability mass function for the joint probability distribution of the random vector (X, Y) is given as in example 1. Find the marginal probability mass functions P_X(x_i) and P_Y(y_j).

Summing the rows gives P_Y(y_j); summing the columns gives P_X(x_i):

X/Y      | 0 (head) | 1 (tail) | P_Y(y_j)
1        | 1/12     | 1/12     | 1/6
2        | 1/12     | 1/12     | 1/6
3        | 1/12     | 1/12     | 1/6
4        | 1/12     | 1/12     | 1/6
5        | 1/12     | 1/12     | 1/6
6        | 1/12     | 1/12     | 1/6
P_X(x_i) | 1/2      | 1/2      | 1

Marginal probability mass functions P_X(x_i) and P_Y(y_j):

Y | P_Y(y_j)
1 | 1/6
2 | 1/6
3 | 1/6
4 | 1/6
5 | 1/6
6 | 1/6

X        | P_X(x_i)
0 (head) | 1/2
1 (tail) | 1/2

(a numerical sketch follows the tables)
5. A certain farm produces two kinds of eggs on any given day: organic and non-organic. Let these two kinds of eggs be represented by the random variables X and Y, respectively. The joint probability density function of these variables is given by

f(x; y) = (2/3)(x + 2y) for 0 ≤ x ≤ 1; 0 ≤ y ≤ 1,
f(x; y) = 0 elsewhere.

Find:
a) the marginal density functions f_X(x) and f_Y(y).

f_X(x) = ∫_{−∞}^{∞} f(x; y) dy = ∫_{0}^{1} (2/3)(x + 2y) dy = (2/3)[xy + y²]_{0}^{1} = (2/3)(x + 1)

f_Y(y) = ∫_{−∞}^{∞} f(x; y) dx = ∫_{0}^{1} (2/3)(x + 2y) dx = (2/3)[x²/2 + 2xy]_{0}^{1} = (1/3)(1 + 4y)
b) P(X < 1/2; Y ≤ 1/2)

P(X < 1/2; Y ≤ 1/2) = ∫_{0}^{0.5} ∫_{0}^{0.5} (2/3)(x + 2y) dx dy = (2/3) ∫_{0}^{0.5} [x²/2 + 2xy]_{0}^{0.5} dy = (2/3) ∫_{0}^{0.5} (1/8 + y) dy = (2/3)[y/8 + y²/2]_{0}^{0.5} = (2/3) · 3/16 = 1/8
c) P(X < 1/2; Y > 1/4)

P(X < 1/2; Y > 1/4) = ∫_{0.25}^{1} ∫_{0}^{0.5} (2/3)(x + 2y) dx dy = ∫_{0.25}^{1} (2/3)(1/8 + y) dy = (2/3)[y/8 + y²/2]_{0.25}^{1} = 5/12 − 1/24 = 3/8

(a numerical check of parts b) and c) follows)
Conditional probability distributions
Conditional probability distributions arise from joint probability distributions whereby we need the probability of one event given that the other event has happened, and the random variables behind these events are jointly distributed.

 Discrete Random Vector
P(x_i | y_j) = p(x_i, y_j) / P_Y(y_j), P_Y(y_j) ≠ 0,
P(y_j | x_i) = p(x_i, y_j) / P_X(x_i), P_X(x_i) ≠ 0.

 Continuous Random Vector
f(x | y) = f(x, y) / f_Y(y), f_Y(y) ≠ 0,
f(y | x) = f(x, y) / f_X(x), f_X(x) ≠ 0.
6. The joint and marginal probability distribution of a random vector (X, Y) is given as:

X/Y      | -1   | 0    | 1    | P_Y(y_j)
-2       | 0.13 | 0    | 0.23 | 0.36
0        | 0.11 | 0.11 | 0.01 | 0.23
1        | 0.07 | 0    | 0    | 0.07
2        | 0    | 0.33 | 0.01 | 0.34
P_X(x_i) | 0.31 | 0.44 | 0.25 | 1

Find:
a) P(X = 0 | Y = 1)
b) P(Y = 0 | X = 1)
c) P(X = 0 | Y = 2)
d) P(Y = 0 | X = 2)
e) P(x | y)
f) P(y | x)
(a numerical sketch follows the list)
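A sketch of the conditional-p.m.f. computations, again assuming NumPy and the X-in-columns, Y-in-rows convention:

```python
import numpy as np

xs = [-1, 0, 1]                      # values of X (columns)
ys = [-2, 0, 1, 2]                   # values of Y (rows)
p = np.array([[0.13, 0.00, 0.23],
              [0.11, 0.11, 0.01],
              [0.07, 0.00, 0.00],
              [0.00, 0.33, 0.01]])

P_X = p.sum(axis=0)                  # column sums: 0.31, 0.44, 0.25
P_Y = p.sum(axis=1)                  # row sums: 0.36, 0.23, 0.07, 0.34

# a) P(X = 0 | Y = 1) = p(0, 1) / P_Y(1)
print(p[ys.index(1), xs.index(0)] / P_Y[ys.index(1)])  # 0 / 0.07 = 0.0
# b) P(Y = 0 | X = 1) = p(1, 0) / P_X(1)
print(p[ys.index(0), xs.index(1)] / P_X[xs.index(1)])  # 0.01 / 0.25 = 0.04
# e) the whole conditional distribution P(x | y): one row per y value
print(p / P_Y[:, None])
```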
7. The joint probability density function of (X, Y) is given by

f(x; y) = (2/3)(x + 2y) for 0 ≤ x ≤ 1; 0 ≤ y ≤ 1,
f(x; y) = 0 elsewhere.

Find:
a) the conditional density function f(y | x):

f(y | x) = f(x, y) / f_X(x), where f_X(x) = (2/3)(x + 1) for 0 ≤ x ≤ 1 and 0 elsewhere,

so f(y | x) = (x + 2y)/(x + 1) for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.

b) the conditional density function f(x | y):

f(x | y) = f(x, y) / f_Y(y), where f_Y(y) = (1/3)(1 + 4y) for 0 ≤ y ≤ 1 and 0 elsewhere,

so f(x | y) = 2(x + 2y)/(1 + 4y) for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
Conditional expected value (expectation)
Conditional expectation is the expected value of a real random variable
with respect to a conditional probability distribution.
Discrete random vector:
 E(X | Y = y) = Σ_{(x)} x · P(x | Y = y)
 E(Y | X = x) = Σ_{(y)} y · P(y | X = x)

Continuous random vector:
 E(X | Y = y) = ∫_{−∞}^{∞} x · f(x | Y = y) dx
 E(Y | X = x) = ∫_{−∞}^{∞} y · f(y | X = x) dy
Conditional variance
Conditional variance is the variance of a conditional probability
distribution.
D(X | Y = y) = E[(X − E(X | Y = y))² | Y = y] = E(X² | Y = y) − (E(X | Y = y))²
D(Y | X = x) = E[(Y − E(Y | X = x))² | X = x] = E(Y² | X = x) − (E(Y | X = x))²
8. The joint and marginal probability distribution of a random vector (X, Y) is given as:

X/Y      | -1   | 0    | 1    | P_Y(y_j)
-2       | 0.13 | 0    | 0.23 | 0.36
0        | 0.11 | 0.11 | 0.01 | 0.23
1        | 0.07 | 0    | 0    | 0.07
2        | 0    | 0.33 | 0.01 | 0.34
P_X(x_i) | 0.31 | 0.44 | 0.25 | 1

Find:
a) E(X | Y = 1)
b) D(X | Y = 1)
(a numerical sketch follows)
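A minimal sketch of both conditional moments, assuming NumPy and the same table orientation as above:

```python
import numpy as np

xs = np.array([-1, 0, 1])            # values of X (columns)
ys = [-2, 0, 1, 2]                   # values of Y (rows)
p = np.array([[0.13, 0.00, 0.23],
              [0.11, 0.11, 0.01],
              [0.07, 0.00, 0.00],
              [0.00, 0.33, 0.01]])

row = p[ys.index(1)]                 # joint probabilities p(x, Y = 1)
cond = row / row.sum()               # conditional p.m.f. P(x | Y = 1)

E = (xs * cond).sum()                # E(X | Y = 1)
D = (xs**2 * cond).sum() - E**2      # D(X | Y = 1) = E(X^2 | Y = 1) - E(X | Y = 1)^2

print(E, D)                          # -1.0 0.0 (all conditional mass sits on x = -1)
```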
9. The joint probability density function of (X, Y) is given by

f(x; y) = (2/3)(x + 2y) for 0 ≤ x ≤ 1; 0 ≤ y ≤ 1,
f(x; y) = 0 elsewhere.

Find:
a) E(Y | X = 0.5),
b) D(Y | X = 0.5).
(a numerical sketch follows)
Independence
Two random variables X and Y are independent if
 Discrete Random Variables: ∀i, j: p(x_i, y_j) = P_X(x_i) · P_Y(y_j),
 Continuous Random Variables: ∀x, y: f(x, y) = f_X(x) · f_Y(y).
Measures of Linear Independence
Covariance:
σ_XY = cov(X, Y) = E[(X − EX)(Y − EY)] = E(XY) − EX · EY
 σ_XY ∈ (−∞, ∞)
Correlation coefficient:
ρ_XY = cov(X, Y) / (σ_X · σ_Y)
 ρ_XY ∈ ⟨−1; 1⟩
 ρ_XY is a scaled version of covariance
Covariance
Covariance:
σ_XY = cov(X, Y) = E[(X − EX)(Y − EY)] = E(XY) − EX · EY
 σ_XY ∈ (−∞, ∞)
Covariance matrix:
| D(X)       cov(X, Y) |
| cov(X, Y)  D(Y)      |
Correlation
Correlation coefficient:
ρ_XY = cov(X, Y) / (σ_X · σ_Y)
 ρ_XY ∈ ⟨−1; 1⟩
 ρ_XY > 0 … X, Y are positively correlated
 ρ_XY < 0 … X, Y are negatively correlated
 ρ_XY = 0 … X, Y are uncorrelated
Correlation matrix:
| 1     ρ_XY |
| ρ_XY  1    |
Correlation
[Figure: scatter plots illustrating correlation strength, with ρ(X, Y) = 1.000, 0.967, 0.934, 0.857, 0.608, 0.000, −0.143, −1.000]
10. The joint and marginal probability distribution of a random vector (X, Y) is given as:

X/Y      | -1   | 0    | 1    | P_Y(y_j)
-2       | 0.13 | 0    | 0.23 | 0.36
0        | 0.11 | 0.11 | 0.01 | 0.23
1        | 0.07 | 0    | 0    | 0.07
2        | 0    | 0.33 | 0.01 | 0.34
P_X(x_i) | 0.31 | 0.44 | 0.25 | 1

Find:
a) E(X), E(Y)
b) D(X), D(Y)
c) σ(X), σ(Y)
d) cov(X, Y)
e) ρ(X, Y)
f) Are the random variables X, Y independent?
g) Are the random variables X, Y linearly independent?
(a numerical sketch follows the list)
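A minimal sketch computing all of these quantities from the table, assuming NumPy and the X-in-columns, Y-in-rows convention:

```python
import numpy as np

xs = np.array([-1, 0, 1])
ys = np.array([-2, 0, 1, 2])
p = np.array([[0.13, 0.00, 0.23],
              [0.11, 0.11, 0.01],
              [0.07, 0.00, 0.00],
              [0.00, 0.33, 0.01]])

P_X, P_Y = p.sum(axis=0), p.sum(axis=1)          # marginals

EX, EY = (xs * P_X).sum(), (ys * P_Y).sum()      # expected values
DX = (xs**2 * P_X).sum() - EX**2                 # variances
DY = (ys**2 * P_Y).sum() - EY**2

X, Y = np.meshgrid(xs, ys)
cov = (X * Y * p).sum() - EX * EY                # E(XY) - EX * EY
rho = cov / np.sqrt(DX * DY)

print(EX, EY, DX, DY, cov, rho)                  # cov approx -0.248, rho approx -0.196
# f) X, Y are independent only if p(x, y) = P_X(x) * P_Y(y) in every cell:
print(np.allclose(p, np.outer(P_Y, P_X)))        # False -> dependent
```

Since ρ ≠ 0 here, the variables are also correlated, i.e. not linearly independent in the sense of this deck (part g).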
11. The joint probability density function of (X, Y) is given by

f(x; y) = (2/3)(x + 2y) for 0 ≤ x ≤ 1; 0 ≤ y ≤ 1,
f(x; y) = 0 elsewhere.

The marginal density functions are:

f_X(x) = (2/3)(x + 1) for 0 ≤ x ≤ 1, 0 elsewhere,
f_Y(y) = (1/3)(1 + 4y) for 0 ≤ y ≤ 1, 0 elsewhere.

Find:
a) E(X), E(Y)
b) D(X), D(Y)
c) σ(X), σ(Y)
d) cov(X, Y)
e) ρ(X, Y)
f) Are the random variables X, Y independent?
g) Are the random variables X, Y linearly independent?
(a numerical sketch follows the list)
Study materials :
 http://homel.vsb.cz/~bri10/Teaching/Bris%20Prob%20&%20Stat.pdf
(pp. 64-70)