Benchmarking Interpreter Services

Benchmarking Interpreter Services

Ralph Garcia, Mission Health System, Asheville, NC, with support from Cynthia E. Roat, MPH

Mission Health System

• Located in Asheville, NC, Mission is the 6th largest health system in the state.

• The system’s flagship, Mission Hospital, is a 730-bed tertiary care center with a Level II Trauma Center.

• The system includes Mission Children’s Hospital, Blue Ridge Hospital, McDowell Hospital, the Reuter Children’s Center, Asheville Surgery Center, Asheville Imaging Center, and 15 specialty physician practices.

• Mission serves 13 Western North Carolina counties. The area has a diverse population, including a number of ethnic and racial groups and a significant Deaf community.

• The largest LEP group speaks Spanish, accounting for 94% of encounters.

Mission had a question . . .

How does Mission compare with other health systems in the provision of language access services?

So Mission looked to industry benchmarks.

What’s a benchmark?

Merriam-Webster says that a benchmark is: “something that serves as a standard by which others may be measured or judged.”

UNESCO says that benchmarking is: “A standardized method for collecting and reporting critical operational data in a way that enables relevant comparisons among the performances of different organizations or programmes, usually with a view to establishing good practice, diagnosing problems in performance, and identifying areas of strength.”

What could serve as a benchmark for Interpreter Services?

• Cost per encounter?

• % of encounters covered by qualified interpreters or bilingual staff?

• % of interpreters who have received training?

• Spend per bed per LEP in the system?
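Each of these candidates reduces to a simple ratio over program data. As a rough illustration, here is a minimal Python sketch; the field names and figures are hypothetical, not Mission’s or any respondent’s data:

```python
# Hypothetical figures for one interpreter-services program (illustrative only).
annual_is_spend = 850_000      # total Interpreter Services expenditures ($/year)
total_encounters = 40_000      # LEP encounters recorded during the year
qualified_encounters = 34_000  # encounters covered by qualified interpreters or bilingual staff
interpreters_trained = 20      # interpreters who have received training
interpreters_total = 25
licensed_beds = 730
lep_patients = 12_000          # LEP patients recorded in the system

# Each candidate benchmark as a simple ratio.
cost_per_encounter = annual_is_spend / total_encounters
pct_qualified_coverage = 100 * qualified_encounters / total_encounters
pct_trained_interpreters = 100 * interpreters_trained / interpreters_total
spend_per_bed_per_lep = annual_is_spend / licensed_beds / lep_patients

print(f"Cost per encounter:    ${cost_per_encounter:,.2f}")
print(f"Qualified coverage:    {pct_qualified_coverage:.1f}%")
print(f"Trained interpreters:  {pct_trained_interpreters:.1f}%")
print(f"Spend per bed per LEP: ${spend_per_bed_per_lep:.4f}")
```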

Methodology

• Contract with the Buncombe County Medical Society and consultant Cindy Roat.

• Literature search: what has been done around benchmarking interpreter services?

• Select and invite hospital systems for the study.

• Initial pre-screening.

• Pilot the written survey.

• Send out the written survey.

The question for each system?

How much is spent to provide what services to how many patients in how many languages at what level of quality using what organizational structure?

This presentation is not about what Mission found.

It’s about what Mission didn’t find.

Benchmarking Cost per Encounter

• Definition of “encounter” was different for each respondent.

• Cost per encounter, as a measure, does not account for the quality of the program.
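To make the first point concrete, here is a minimal sketch of how the same program yields very different “cost per encounter” figures depending on whether an encounter is counted per interpreted session or per patient visit. The counting rules and numbers are hypothetical, chosen only to illustrate the kind of variability the respondents showed:

```python
# Hypothetical: one program's annual activity counted two different ways.
annual_is_spend = 850_000  # Interpreter Services expenditures ($/year)

# Respondent A counts every interpreted session as an encounter
# (one visit may generate several sessions: registration, exam, discharge).
encounters_session_based = 52_000

# Respondent B counts each patient visit once, however many sessions it required.
encounters_visit_based = 31_000

cost_a = annual_is_spend / encounters_session_based
cost_b = annual_is_spend / encounters_visit_based

print(f"Cost per encounter, session-based: ${cost_a:,.2f}")
print(f"Cost per encounter, visit-based:   ${cost_b:,.2f}")
# Same spend, same program, yet the two "benchmarks" differ substantially --
# and neither says anything about the quality of the interpretation provided.
```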

Benchmarking à la Speaking Together

Speaking Together: a collaborative of 10 hospitals brought together by researchers at George Washington University with a grant from the Robert Wood Johnson Foundation.

• The collaborative measured:
– screening for preferred language
– patients receiving language services (LS) from qualified LS providers
– patient wait time
– time spent interpreting
– interpreter delay time

• None of these indicators was measured consistently enough, by enough of our respondents, to use as a benchmark.

Benchmarking with CSA’s Language Access Ratio

• Common Sense Advisory suggested a measure based on average daily spend per LEP bed, calculated with:
– the total number of hospital beds
– the % of people who speak a language other than English at home, based on the zip codes of the hospital’s catchment area
– IS expenditures

• Speaking a language other than English at home is not a useful proxy for being limited English proficient.

• We substituted the number of LEP patients in the patient database (see the sketch below).

• Results were all over the map.
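The transcript names the ratio’s inputs but not its exact arithmetic, so the sketch below is one plausible reading rather than CSA’s published formula: LEP beds estimated as total beds times the share of the catchment population speaking another language at home, with spend averaged per day. All figures, and the patient-database substitution, are hypothetical.

```python
# Hypothetical inputs for a CSA-style Language Access Ratio (illustrative only).
annual_is_spend = 850_000          # Interpreter Services expenditures ($/year)
total_beds = 730                   # total hospital beds
pct_other_language_at_home = 0.06  # share of catchment-area residents, by zip code

# One plausible reading of "average daily spend per LEP bed".
lep_beds = total_beds * pct_other_language_at_home
daily_spend_per_lep_bed = annual_is_spend / 365 / lep_beds
print(f"Daily spend per LEP bed (language-at-home proxy): ${daily_spend_per_lep_bed:,.2f}")

# Mission's substitution: estimate the LEP share from the patient database
# instead of the language-at-home proxy (counts are hypothetical).
lep_patients = 12_000
total_patients = 180_000
lep_beds_from_database = total_beds * (lep_patients / total_patients)
daily_spend_alt = annual_is_spend / 365 / lep_beds_from_database
print(f"Daily spend per LEP bed (patient-database count):  ${daily_spend_alt:,.2f}")
```

Even modest differences in how the LEP share is estimated move the ratio noticeably, which is consistent with results that were all over the map.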

What did we learn?

• Trends:
– use of multiple language access strategies
– telephonic interpreting as a backup strategy only
– principal dependence on dedicated staff interpreters
– higher-than-expected use of video interpreting
– continued use of family, friends, and untrained bilinguals

• BUT hospitals are collecting data in such diverse ways that benchmarking is difficult:
– the definition of “encounter” is highly variable
– hospitals differ in how they document LEP patients in their systems

What questions were left?

How can we create common data-collection criteria so that we can begin benchmarking effectively?