Quality Control


Introduction to Quality Assurance
Quality Assurance vs. Quality Control

Quality Assurance
• Program designed to monitor and evaluate the ongoing and overall quality of the total testing process (preanalytic, analytic, and postanalytic)

Quality Control
• Activities designed to monitor and evaluate the performance of instruments and reagents used in the testing process
• A component of a QA program
CLIA
• Clinical Laboratory Improvement Amendments
  – Resulted from public and Congressional concerns about the quality of clinical laboratory testing in the U.S.
  – A basic set of guidelines that applies to all labs, regardless of size, complexity, or location.
  – Implementation and development of working guidelines were assigned to HCFA (the Health Care Financing Administration), now known as CMS (the Centers for Medicare and Medicaid Services).
CLIA
• The intent of CLIA is to promote the
development, implementation, delivery,
monitoring, and improvement of high
quality laboratory services.
CLIA
The original regulations consisted of 4 sets of rules describing:
 Laboratory standards
  – Personnel standards
  – Quality control requirements
  – Test complexity model
  – Quality assessment of the complete testing process
 Application process and user fees
 Enforcement procedures
 Approval of accreditation programs
Total Testing Process

Pre-Analytic
– Physician order
– Patient preparation
– Specimen acquisition
– Specimen handling
– Sample transport

Analytic
– Sample prep
– Analyzer setup
– Test calibration
– Quality control
– Sample analysis

Post-Analytic
– Test report
– Transmittal of report
– Receipt of report
– Review of test results
– Action on test results
Quality Assurance activities
Patient test management assessment
- specimen collection, labeling, transport
- test requisition
- specimen rejection
- test report format and reporting systems
Quality control assessment
- calibrations and controls
- patient data ranges
- reporting errors
Quality Assurance activities (cont.)
Proficiency testing assessment
- “unknowns” 2-3x/year
Comparison of test results
- different assays or instruments used
for same test
- accuracy and reproducibility
Quality Assurance activities (cont.)
Relationship of patient info. to test results
- results consistent with patient info.
- age, sex, diagnosis, other results
Personnel assessment
- education; competency
Quality Assurance activities (cont.)
Communications and complaint investigations
- communications log
QA review with staff
- review during regular meetings
Quality Assurance activities (cont.)
QA records
- retention for 2 years
Verification of methods
- accuracy, precision
- analytical sensitivity and specificity
- reportable range
- reference range(s) (normal values)
Quality Assurance activities (cont.)
Quality monitors
- TAT (turn-around time)
- smear/culture correlation
- blood culture contamination rates
Assessment of compliance
College of American Pathologists (CAP)
- Professional pathology organization
- Has been granted “deemed status” by CMS
- Groups of peers conduct biennial (every two years) on-site inspections
- Publish checklists for laboratories to
document compliance
How do we assess the performance of our tests?

Verification vs. Validation

Verification
• One-time process used to evaluate or establish the performance of a system or test to ensure that it meets the desired specifications

Validation
• Ongoing process to demonstrate that a system or test is meeting the operational needs of the user
Verification
• Background: CLIA requirement to check (verify) the manufacturer’s performance specifications provided in the package insert
  – Assures that the test is performing as intended by the manufacturer with:
    » Your testing personnel
    » Your patient population
    » Your laboratory setting
  – One-time process performed prior to implementation
Verification
• Accuracy: Are your test results correct?
  – Assures that the test is performing as intended by the manufacturer
    » Use QC materials, PT materials, or previously tested patient specimens
Verification
• Precision: Can you obtain the same test result time after time?
  – Same samples on same/different days (reproducibility)
  – Tested by different lab personnel (operator variance)
Verification
• Reportable Range
  – How high and how low can test values be and still be accurate (qualitative)?
    » Choose samples with known values at the high and low end of the range claimed by the manufacturer
  – What is the range where the test is linear (quantitative)?
    » Test samples across the range
Verification
• Reference ranges/intervals (normal values)
  – Do the reference ranges provided by the test system’s manufacturer fit your patient population?
    » Start with manufacturer’s suggested ranges
    » Use published ranges
      · Can vary based on type of patient
      · May need to adjust over time
      · Normal patients should be within range; abnormal patients should be outside range
Verification
• Number of samples to test
  – Depends on the test system and laboratory testing volume
    » FDA-approved tests: 20 positives and 20 negatives
    » Non-FDA-approved tests: 50 positives and 50 negatives
• The number used for each part of the verification will vary
• The laboratory director must review and approve results before patient results are reported
Sensitivity
• The probability of a positive test result given
the presence of disease
• How good is the test at detecting infection in
those who have the disease?
• A sensitive test will rarely miss people who
have the disease (few false negatives).
Specificity
• The probability of a negative test result
given the absence of disease.
• How good is the test at calling
uninfected people negative?
• A specific test will rarely misclassify
people without the disease as infected
(few false positives).
Sensitivity and Specificity

                         DISEASE
                  Present              Absent
TEST  Positive    True Positive (TP)   False Positive (FP)
      Negative    False Negative (FN)  True Negative (TN)

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
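Both formulas are simple ratios of the 2×2 counts. A minimal Python sketch (the example counts are borrowed from the influenza tables later in this deck):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Probability of a positive test given disease: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Probability of a negative test given no disease: TN / (TN + FP)."""
    return tn / (tn + fp)

# Counts from the 20% prevalence influenza example shown later
print(sensitivity(tp=380, fn=20))   # 0.95
print(specificity(tn=1536, fp=64))  # 0.96
```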
Predictive Value
• The probability of the presence or
absence of disease given the results of
a test
– PVP is the probability of disease in a
patient with a positive test result.
– PVN is the probability of not having
disease when the test result is negative.
Predictive Value

                         DISEASE
                  Present              Absent
TEST  Positive    True Positive (TP)   False Positive (FP)
      Negative    False Negative (FN)  True Negative (TN)

Predictive Value Positive (PVP) = TP / (TP + FP)
Predictive Value Negative (PVN) = TN / (TN + FN)
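Where sensitivity and specificity read down the disease columns, the predictive values read across the test rows of the same 2×2 table. A minimal Python sketch:

```python
def pvp(tp: int, fp: int) -> float:
    """Probability of disease given a positive result: TP / (TP + FP)."""
    return tp / (tp + fp)

def pvn(tn: int, fn: int) -> float:
    """Probability of no disease given a negative result: TN / (TN + FN)."""
    return tn / (tn + fn)

# Counts from the 20% prevalence influenza example shown later
print(round(pvp(tp=380, fp=64), 3))   # 0.856
print(round(pvn(tn=1536, fn=20), 3))  # 0.987
```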
Predictive Value
• How predictive is this test result for this
particular patient?
• Determined by the sensitivity and
specificity of the test, and the
prevalence rate of disease in the
population being tested.
Prevalence Rate
Number of cases of illness existing
at a given time divided by the
population at risk
Anatomy of an epidemic:
[figure: percent of cases plotted against weeks from peak, spanning first case, transition, peak, and last case]
Hypothetical Influenza Test Performance
Prevalence = 20.0%

               Disease
             +        -
Test  +     380       64
      -      20     1536

Sensitivity = 380/400 = 95.0%
Specificity = 1536/1600 = 96.0%
Predictive Value Positive (PVP) = 380/444 = 85.6%
Predictive Value Negative (PVN) = 1536/1556 = 98.7%
Hypothetical Influenza Test Performance
Prevalence = 1.0%

               Disease
             +        -
Test  +      19       80
      -       1     1900

Sensitivity = 19/20 = 95.0%
Specificity = 1900/1980 = 96.0%
Predictive Value Positive (PVP) = 19/99 = 19.2%
Predictive Value Negative (PVN) = 1900/1901 = 99.9%
Predictive Value Positive:
Dependence on Sensitivity, Specificity, and Prevalence
[figure: PVP (0–100%) plotted against prevalence (0–20%) for sensitivity/specificity pairs 80/80, 90/90, 95/95, and 99/99]
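Curves like these can be tabulated directly, since PVP follows from Bayes' rule as sens·prev / (sens·prev + (1 − spec)·(1 − prev)). A Python sketch over the same sensitivity/specificity pairs:

```python
def pvp_from_rates(sens: float, spec: float, prev: float) -> float:
    """PVP via Bayes' rule: sens*prev / (sens*prev + (1-spec)*(1-prev))."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

pairs = [(0.80, 0.80), (0.90, 0.90), (0.95, 0.95), (0.99, 0.99)]
for prev in (0.01, 0.05, 0.10, 0.20):
    row = "  ".join(f"{s:.0%}/{p:.0%}: {pvp_from_rates(s, p, prev):.1%}"
                    for s, p in pairs)
    print(f"prevalence {prev:.0%} -> {row}")
```

At 1% prevalence even a 99%/99% test yields a PVP of only about 50%, which is why these curves all drop sharply at the left edge of the plot.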
Resources
CAP checklists (available on the W: drive)
Clark RB, et al. Verification and Validation of Procedures in the Clinical Microbiology Laboratory. Cumitech 31A. ASM Press; 2009.