
Measuring clinical decision-making skills

March 1, 2010
by Robert Kunio
An efficient and systematic approach to identifying variations in clinical decision making

In late 2008, I was grappling with how to prioritize in-house and continuing education training programs for our 100 therapists scattered across seven metropolitan areas in four states. Since our goal as a company is to provide consistent, high-quality therapy across all locations, I am constantly looking for better ways to identify the best skills and techniques and then share them company-wide. A specific goal I had in mind was to achieve a higher level of consistency in the decision-making criteria each discipline used in the therapy admission and discharge process.

Our Area Directors were supposed to visit each site regularly to observe and coach all therapists. However, because they have no control over the types of patients they will be observing, many visits are required before they can obtain a complete picture of a single staff person's training needs. The information on training needs that they bring back over the course of a year is therefore limited and does not yield enough actionable data to rank the training programs we as a company should invest in. The travel expenses for these teaching and observation visits can also be quite large.

In a “previous life” as the head of strategic planning for Lexis (the legal research database company), I had several professional market researchers reporting to me. I learned from them that companies like Procter & Gamble had long ago perfected interviewing techniques to identify and categorize the thought processes consumers use when making purchasing decisions. Thinking back to my days at Lexis, I postulated that if I could capture and study the thought processes my therapists used when admitting and discharging patients, I could see where there were wide variations in clinical decision making. With this information in hand, I could then efficiently prioritize training programs to bring about the decision-making consistency I was looking for.

The other problem I was trying to solve was identifying scientific tools that could make admission and discharge decisions as objective as possible. As all of us in the long-term care industry know, there are constant debates between therapy and administration when it comes to admitting and discharging patients. In an ideal world, administration would have every resident with Medicare Part A benefits on therapy caseload for as long as possible. Therapists, on the other hand, have many professional and regulatory concerns about keeping someone on caseload too long. If I could find and implement assessment tools that both therapy and administration agreed upon, the admission and discharge process would become more of a science, and these debates would become shorter and less frequent.

Since I could not find any evidence that this kind of therapist interviewing had been used successfully in the healthcare industry before, there was a risk of failure. To keep my costs and risks low, I decided to first try these techniques with my lowest-population discipline: my 15 Speech-Language Pathologists (SLPs). The other benefit of starting with SLPs was that the rehab directors who managed them (mostly OTs and PTs) did not have a detailed understanding of SLP clinical thought processes, and I hoped that seeing the answers to these structured questions would leave these supervisors better prepared to support them.

As I had learned when structuring interviews for busy attorneys at Lexis, questions put to highly educated individuals about complex thought processes must be carefully worded to avoid ambiguous answers. These questions, and the resulting answers, must be thoroughly tested with multiple sample groups before launching a full-scale project. Market research is much like information processing: garbage questions yield only garbage answers.

With this past experience in mind, I began working on a 20-question interview with two of my SLPs. One was on maternity leave and had a block of time each week to devote to the project. The other was our company's most experienced SLP; in the previous four years, she had been a clinical instructor for three students and had supervised two new graduates in their clinical fellowship year. Over the course of several conference calls, we experimented with various types of open-ended questions, trying to elicit long, detailed responses without leading the respondent toward a “right” answer. Since these two SLPs had never built a market research instrument before and I had never admitted or discharged a patient, all three of us learned a lot in these highly interactive discussions.

We also debated whether to conduct live interviews or have respondents read a written survey and write down their answers. Since we wanted top-of-mind answers that simulated the fast-moving environment of a skilled nursing facility, we decided to have a live person ask the questions and record the answers online, rather than give respondents time to dwell on an issue and possibly give an answer that did not reflect their true thought processes. For example, since we wanted to find out who was using formal cognitive tests, as well as who might not be using tests at all, we deliberately never mentioned the word “test.” Instead, we repeatedly asked the same question, inserting various diagnoses: “When you evaluate someone for possible addition to your caseload, what do you look for in the area of (diagnosis)?”
