Fresh from applying standardised metrics to translation, experts are now trying their hand at doing the same for interpreting. Large agencies are looking to create standard QA protocols for Over the Phone Interpreting. Technologists tout their automated metrics and, of course, academics have their rubrics. But do we ever ask whether these measurements are any good?
What makes a good quality metric?
There is a huge and often pretty unhelpful debate over what “quality” means in interpreting. For some people, it is simply about availability, promptness and fulfilment rates. Obviously, this matters when interpreting is being sold in bulk. But those are measures of productivity and bums on seats, not of whether the interpreting is actually any good.
Universities and schools tend to measure the relationship between the interpreting and the original speech: accuracy, intonation, clarity and the like, marked against what the examiners expected. This is fine, as long as those marking the interpreting are aware that omission isn’t always a fault and that clarity can mean rephrasing or even rethinking.
Even then, marking interpreting in a classroom is not the same as seeing whether it is any good in the real world. Most clients do not sit in meetings with a marking rubric and perfect knowledge of both languages; if they were in a position to measure the interpreting that way, they wouldn’t need interpreting in the first place. The marking done in universities is fit for its purpose, but it was never meant as a general sign of quality in the wider world.
As for automated systems to measure interpreting quality, since I am currently co-writing an article on them, I will leave them for now. The summary is that, for the most part, they are based on false assumptions and broken tools.