Statistics for AKT






Mean: the arithmetic average (the sum of the values divided by how many there are)
Median: the middle value once the data are ranked
Mode: the most frequently occurring value
Range: the difference between the largest and smallest values




Find the mean, median, mode and range for the following:
8, 9, 9, 10, 11, 11, 11, 11, 12, 13
The mean is the usual average:
(8 + 9 + 9 + 10 + 11 + 11 + 11 + 11 + 12 + 13) ÷ 10 = 105 ÷ 10 = 10.5

The median is the middle value. In a list of ten values, that is the
(10 + 1) ÷ 2 = 5.5th value, i.e. the average of the 5th and 6th ranked values, both of which are 11, so the median is 11.

The mode is the number repeated most often: 11 (it appears four times).

The largest value is 13 and the smallest is 8, so the range is
13 – 8 = 5.
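
As a quick check, the same results can be reproduced with Python's built-in statistics module (a minimal sketch using the values from the worked example above):

```python
import statistics

values = [8, 9, 9, 10, 11, 11, 11, 11, 12, 13]

mean = statistics.mean(values)           # 105 / 10 = 10.5
median = statistics.median(values)       # average of the 5th and 6th ranked values = 11
mode = statistics.mode(values)           # 11 occurs most often (four times)
value_range = max(values) - min(values)  # 13 - 8 = 5

print(mean, median, mode, value_range)
```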



Normal distribution: Mean = Median = Mode
Positively skewed: Mean > Median > Mode
Negatively skewed: Mean < Median < Mode
Test Result    Disease Present    Disease Absent
Positive       TP                 FP
Negative       FN                 TN

Sensitivity:
How good is the test at detecting those with the condition?
= TRUE POSITIVES ÷ ACTUAL NUMBER OF CASES, i.e. TP ÷ (TP + FN)

Specificity:
How good is the test at excluding those without the condition?
= TRUE NEGATIVES ÷ ACTUAL NUMBER OF PEOPLE WITHOUT THE CONDITION, i.e. TN ÷ (TN + FP)

Positive Predictive Value:
How likely is a person who tests +ve to actually have the condition?
= TRUE POSITIVES ÷ NUMBER OF PEOPLE TESTING POSITIVE, i.e. TP ÷ (TP + FP)

Negative Predictive Value:
How likely is a person who tests –ve to not have the condition?
= TRUE NEGATIVES ÷ NUMBER OF PEOPLE TESTING NEGATIVE, i.e. TN ÷ (TN + FN)
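
Putting the four definitions together, here is a minimal Python sketch that computes them from the counts in a 2x2 table; the counts used at the bottom are illustrative, not from the transcript.

```python
def test_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives / actual number of cases
    specificity = tn / (tn + fp)   # true negatives / actual number without the condition
    ppv = tp / (tp + fp)           # true positives / number testing positive
    npv = tn / (tn + fn)           # true negatives / number testing negative
    return sensitivity, specificity, ppv, npv

# Illustrative counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives
print(test_accuracy(tp=80, fp=10, fn=20, tn=90))  # (0.8, 0.9, ~0.89, ~0.82)
```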

Likelihood ratios incorporate both sensitivity and specificity.

They quantify the increased odds of having the disease if you get a positive test result, or of not having the disease if you get a negative test result.

Positive likelihood ratio = Sensitivity ÷ (1 – Specificity)

Negative likelihood ratio = (1 – Sensitivity) ÷ Specificity

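A minimal sketch of the two likelihood ratio formulae; the sensitivity and specificity values passed in are illustrative, not from the transcript.

```python
def likelihood_ratios(sensitivity, specificity):
    lr_positive = sensitivity / (1 - specificity)   # LR+ = sens / (1 - spec)
    lr_negative = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    return lr_positive, lr_negative

print(likelihood_ratios(sensitivity=0.8, specificity=0.9))  # (~8.0, ~0.22)
```
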
Odds are the ratio of the number of people who incur a particular outcome to the number of people who do not incur the outcome:
Odds = NUMBER OF EVENTS ÷ NUMBER OF NON-EVENTS
Odds ratio:
The odds ratio may be defined as the ratio of the odds of a particular outcome with the experimental treatment to the odds of that outcome with the control.
Odds ratios are the usual reported measure in case-control studies.
The odds ratio approximates to the relative risk if the outcome of interest is rare.
Odds ratio = ODDS IN TREATMENT GROUP ÷ ODDS IN CONTROL GROUP

For example, if we look at a trial comparing the use of paracetamol for dysmenorrhoea with placebo, we may get the following results:

                Total no. of patients    Pain relief achieved
Paracetamol     60                       40
Placebo         90                       30

The odds of achieving significant pain relief with paracetamol = 40 / 20 = 2 (40 achieved relief, 20 did not)

The odds of achieving significant pain relief with placebo = 30 / 60 = 0.5 (30 achieved relief, 60 did not)

Therefore the odds ratio = 2 / 0.5 = 4
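
A minimal Python sketch reproducing the paracetamol / placebo odds ratio worked above:

```python
def odds(events, total):
    """Odds = number of events / number of non-events."""
    return events / (total - events)

odds_paracetamol = odds(events=40, total=60)   # 40 / 20 = 2.0
odds_placebo = odds(events=30, total=90)       # 30 / 60 = 0.5
odds_ratio = odds_paracetamol / odds_placebo   # 2.0 / 0.5 = 4.0
print(odds_ratio)
```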

Prevalence: the proportion of a specified population with the disorder at a given point in time

Incidence: the number of new cases of a disorder developing over a given time period (normally 1 year)

Relative risk (RR) is the ratio of risk in the
experimental group (experimental event rate, EER) to
risk in the control group (control event rate, CER).

Relative risk is a measure of how much a particular risk
factor (say cigarette smoking) influences the risk of a
specified outcome such as lung cancer, relative to the
risk in the population as a whole.

Absolute risk: the risk of developing a condition

Relative risk: the risk of developing a condition compared with another group

Relative risk reduction = (EVENTS IN CONTROL GROUP – EVENTS IN TREATMENT GROUP) ÷ EVENTS IN CONTROL GROUP × 100%
- My lifetime risk of dying in a car accident is 5%
- If I always wear a seatbelt, my risk is 2.5%
- The absolute risk reduction is 2.5%
- The relative risk reduction is 50%
For example, if we look at a trial comparing the use of paracetamol for dysmenorrhoea with placebo, we may get the following results:

                Total no. of patients    Pain relief achieved
Paracetamol     100                      60
Placebo         80                       20

Experimental event rate, EER = 60 / 100 = 0.6

Control event rate, CER = 20 / 80 = 0.25

Therefore the relative risk = EER / CER = 0.6 / 0.25 =
2.4



Relative risk reduction (RRR) or relative risk increase (RRI) is calculated by dividing the absolute risk change by the control event rate. Using the above data:
RRI = (EER – CER) / CER
= (0.6 – 0.25) / 0.25 = 1.4 = 140%
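
A minimal Python sketch reproducing the relative risk and relative risk increase calculated from the second paracetamol / placebo table above:

```python
eer = 60 / 100          # experimental event rate = 0.6
cer = 20 / 80           # control event rate = 0.25

relative_risk = eer / cer   # 0.6 / 0.25 = 2.4
rri = (eer - cer) / cer     # 0.35 / 0.25 = 1.4, i.e. 140%
print(relative_risk, rri)
```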

Numbers needed to treat (NNT) is a measure that indicates how many patients would require an intervention to reduce the expected number of outcomes by one.

It is calculated as 1 ÷ (Absolute risk reduction)

Absolute risk reduction (ARR) = (Control event rate) – (Experimental event rate)








A study looks at the benefits of adding a new antiplatelet drug to aspirin following a myocardial infarction. The following results are obtained:
Percentage of patients having a further MI within 3 months:
Aspirin: 4%
Aspirin + new drug: 3%
What is the number needed to treat to prevent one patient
having a further myocardial infarction within 3 months?
NNT = 1 / (control event rate - experimental event rate)
1 / (0.04-0.03)
1 / (0.01) = 100
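
A minimal Python sketch reproducing the NNT worked example above (aspirin versus aspirin plus the new drug):

```python
cer = 0.04              # control event rate (aspirin alone)
eer = 0.03              # experimental event rate (aspirin + new drug)

arr = cer - eer         # absolute risk reduction = 0.01
nnt = 1 / arr           # = 100 (floating point may show 99.999...)
print(round(nnt))
```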

Remember that risk and odds are different.

If 20 patients die out of every 100 who have a
myocardial infarction then the risk of dying is 20 /
100 = 0.2 whereas the odds are 20 / 80 = 0.25.
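
A short sketch contrasting the two using the MI figures above; the conversion formulae at the end are standard identities, not stated in the transcript.

```python
deaths, patients = 20, 100

risk = deaths / patients             # 20 / 100 = 0.2
odds = deaths / (patients - deaths)  # 20 / 80 = 0.25

# Standard conversions between risk and odds:
assert abs(odds - risk / (1 - risk)) < 1e-12   # odds = risk / (1 - risk)
assert abs(risk - odds / (1 + odds)) < 1e-12   # risk = odds / (1 + odds)
print(risk, odds)
```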

The null hypothesis is that there are no differences
between two groups.

The alternative hypothesis is that there is a difference.
Type I error:
- Wrongly rejecting the null hypothesis
- False +ve
Type II error:
- Wrongly accepting the null hypothesis
- False -ve

Power is the probability of establishing the expected difference between the treatments as being statistically significant.
- Power = 1 – the type II error rate (the rate of false –ve's)

Adequate power is usually set at 0.8 / 80%.

Power is increased with:
- increased sample size
- increased difference between treatments
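
As an illustration (not from the transcript), the usual normal-approximation formula for a two-group comparison of means shows how the required sample size per group falls as the standardised difference between treatments grows; the alpha, power and effect size values below are illustrative.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for power = 0.8
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(effect_size=0.5))   # ~63 per group
print(n_per_group(effect_size=1.0))   # ~16 per group: a larger difference needs fewer patients
```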

A result is called statistically significant if it is unlikely
to have occurred by chance

P values
- The significance threshold is usually taken as p < 0.05
- A p value < 0.05 means the probability of the observed result occurring by chance (if the null hypothesis is true) is less than 5%
When choosing a statistical test, consider:
1. Parametric / non-parametric
   - parametric if the data follow a normal distribution and can be measured
2. Paired / unpaired
   - paired if the data come from a single subject group (e.g. before and after an intervention)
3. Binomial, i.e. only 2 possible outcomes

Parametric tests:

Student's t-test
- compares means
- paired or unpaired versions

Analysis of variance (ANOVA)
- used to compare more than 2 groups

Pearson's correlation coefficient
- linear correlation between 2 variables





Non-parametric equivalents:

Mann-Whitney U test
- unpaired data
Kruskal-Wallis analysis of ranks / median test
Wilcoxon matched pairs
- paired data
Friedman's two-way analysis of variance / Cochran Q
Spearman or Kendall correlation
- rank correlation between 2 variables

Tests that compare proportions:

Chi-squared test ± Yates' correction (2x2 tables)
- for larger samples

Fisher's exact test
- preferred when expected cell counts are small
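
A minimal sketch of both tests on a 2x2 table using SciPy; the counts are illustrative, not from the transcript.

```python
from scipy.stats import chi2_contingency, fisher_exact

table = [[40, 20],   # e.g. treatment group: outcome yes / no
         [30, 60]]   # e.g. control group:   outcome yes / no

chi2, p_chi2, dof, expected = chi2_contingency(table)  # Yates' correction applied by default for 2x2
odds_ratio, p_fisher = fisher_exact(table)             # exact test, useful when expected counts are small
print(p_chi2, p_fisher)
```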

The standard deviation (SD) represents the average amount by which each observation in a sample differs from the sample mean.

SD = square root of the variance
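
A minimal sketch using the values from the earlier mean/median/mode example: Python's statistics module gives the sample variance and standard deviation directly, and the SD is the square root of the variance.

```python
import statistics

values = [8, 9, 9, 10, 11, 11, 11, 11, 12, 13]

variance = statistics.variance(values)     # sample variance (n - 1 denominator)
sd = statistics.stdev(values)              # sample standard deviation
assert abs(sd - variance ** 0.5) < 1e-12   # SD = square root of the variance
print(sd)
```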




In statistics the 68-95-99.7 rule, or three-sigma rule, or
empirical rule, states that for a normal distribution
nearly all values lie within 3 standard deviations of the
mean
About 68.27% of the values lie within 1 standard
deviation of the mean.
Similarly, about 95.45% of the values lie within 2
standard deviations of the mean.
Nearly all (99.73%) of the values lie within 3 standard
deviations of the mean.
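
These percentages can be checked directly against the standard normal distribution; a minimal sketch using SciPy (norm.cdf gives the cumulative probability):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    # proportion of values within k standard deviations of the mean
    proportion = norm.cdf(k) - norm.cdf(-k)
    print(k, round(proportion * 100, 2))   # 68.27, 95.45, 99.73
```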
Thank you for all your patience!!!!