It isn't hard or time-consuming to measure results. One key: Start small.
Practice re-engineering can seem daunting to physicians who associate it with complex, time-consuming quality-improvement studies that never seem to improve anything. But modern re-engineering techniques aim to shrink the frustrating delays between diagnosing problems in a practice and implementing solutions. Ideas on how to improve efficiency and care can be conceived, tested, and adopted or rejected, not in months or years, but in weeks or even days.
You still need to measure how efficient your practice currently is in order to determine the extent of the problems that need fixing and set goals for improvement. And you still need to measure results in order to determine whether a redesign effort is working. But these measurements need not be exhaustively precise; they need only be "accurate enough."
"The goal of measurement in office redesign is to quickly diagnose a practice's weaknesses and strengths in clinical, satisfaction, and financial outcomes," observes Ann Marie R. Hess, senior analyst at the Boston-based Institute for Healthcare Improvement, which is spearheading the re-engineering of office-based practices. "A good measurement system enables practices to improve and innovate faster, moving from quarterly measures to weekly or daily ones. It attempts to answer the question: 'What can we do tomorrow to make an immediate improvement?' "
Solo doctors may be able to measure their entire practices at once. But if your practice is larger, start by measuring a small part of it (one or two doctors, their staff, and their patients) so as not to get overwhelmed. Then expand your measurements to other "microsystems" within the practice.
Multi-doctor practices can also let the part stand for the whole when measuring the results of re-engineering efforts. Using a concept called "rapid-cycle testing," you can try an innovation, such as a way to improve patient flow, with 50 or 100 patients and measure the results. If the change has no effect or is counterproductive, drop it. If it works, try it incrementally on a larger scale. Expand constructive changes step by step until they're practice-wide.
Rapid-cycle testing is a fancy term for a simple concept. Scott Decker, quality manager at ThedaCare, an integrated delivery system based in Appleton, WI, explains that "it's based on answering three questions: What are you trying to accomplish? What changes can you make that will result in improvements? And how will you know whether those changes are, indeed, better? Success comes from adopting a what-can-we-do-differently-next-Tuesday mindset. That's what makes this kind of redesign different from traditional quality improvement, where we used to meet for three months trying to figure out what data we wanted, and then another three months trying to figure out what to do about the data we collected."
Although ThedaCare employs about 90 doctors in 21 clinics, its redesign efforts started small. The organization focused on two family practices, each with four doctors and a couple of NPs or PAs. "We did flow charts," recalls Decker. "We interviewed staff. We had staff members role-play patients. We measured how long a visit took from the moment a patient arrived in the parking lot until the moment she left the clinic."
The redesign teams at ThedaCare's test sites discovered that too many handoffs (from phone calls and lab slips to patient referrals) were taking place. By listing who did what on a simple grid and analyzing the work flow, the practices were able to reallocate responsibilities to make the office processes more efficient.
For example, telephone switchboard operators were tying up triage nurses with patient calls for medication refills, even though each site had a dedicated phone line for refill messages. Surveys of patients revealed that many were unaware that these phone lines existed. The surveys also showed that the few patients who tried to use the lines couldn't understand the computerized voice instructions. So ThedaCare's redesign teams enlisted patients' help in rewriting the instructions in plain English, and made sure that the entire staff knew that the lines were available.
"It didn't take a lot of time for switchboard operators to tell patients about the medication refill lines," says Decker, "and the number of calls to the phone lines then shot up. Once patients started using the lines, it freed our nurses to do something else."
Measurement should be integrated into the re-engineering effort right from the start. In fact, once a redesign team has been chosen, it should designate a measurement leader. (See "Pick the team and write the game plan," Feb. 21, 2000.) This individual's job is to ensure that measurement is part of goal-setting. She also oversees team efforts to devise measures that can be acted on when the results are in, conduct patient and staff surveys, and collect data from various sources.
"Keep measurements simple," urges Vicki P. Kahn, senior information coordinator at Dartmouth-Hitchcock-Southern Region, a 250-doctor multispecialty group with clinics in southern New Hampshire. For instance, Kahn describes how receptionists are asked to keep "tick sheets," on which they make a mark each time a patient phones to request a same-day appointment (or whatever else the practice wants to track). Simple tools like tick sheets make it easy for front-line staff to play a valuable role in measurement.
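If a practice later wants to total its tick marks electronically, a few lines of code are enough. The sketch below (the dates are illustrative, not from the article) tallies same-day appointment requests by the day each tick mark was recorded:

```python
from collections import Counter

# Each entry is the date on which a receptionist recorded a tick mark
# for a same-day appointment request (illustrative data).
tick_marks = [
    "2000-04-03", "2000-04-03",
    "2000-04-04", "2000-04-04", "2000-04-04",
    "2000-04-05",
]

# Counter turns the raw marks into a per-day total.
daily_totals = Counter(tick_marks)
for day in sorted(daily_totals):
    print(day, daily_totals[day])
```

The same tally could track any other event the practice chooses, such as refill calls or walk-ins; only the meaning of a tick mark changes.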
"Graph paper and pencil are often superior to fancy spreadsheets," Kahn explains. "If you ask people to do something complicated and time-consuming, they tend to find an excuse not to do it. They may say, 'I don't have a computer,' or, 'I don't know how to use Microsoft Excel.' But everyone can plot dots on a sheet of graph paper."
"Most benchmarks for redesign efforts are already available in your practice. You just have to find them," adds Ann Marie Hess of the Institute for Healthcare Improvement. Suppose you want to know what percentage of patients see their physician of choice. "Check the schedule for the previous month to see how many patient-doctor matches you have," she suggests. "Do that for six months. The results for each month become a data point on the chart.
"If you can't get that information from your computer system, do a quick pre-visit survey of the first 50 patients who come in. Regardless of what you're trying to measure, 50 patients will generally serve as a good-enough starting point. Ask them, 'Will you be seeing the doctor you wish to see today?' Give them three choices: 'Yes,' 'No,' or 'It doesn't matter.' "
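Turning those 50 survey answers into a single data point is simple arithmetic. This sketch assumes one reasonable convention (excluding "It doesn't matter" responses from the denominator, since those patients have no preference); a practice might count them differently. The answer counts are hypothetical:

```python
# Hypothetical responses from a pre-visit survey of 50 patients:
# "Will you be seeing the doctor you wish to see today?"
answers = ["Yes"] * 35 + ["No"] * 10 + ["It doesn't matter"] * 5

# Exclude patients with no preference, then compute the match rate
# among those who do have one.
with_preference = [a for a in answers if a != "It doesn't matter"]
match_rate = 100 * with_preference.count("Yes") / len(with_preference)

print(f"{match_rate:.0f}% of patients with a preference see their choice")
```

Repeating the survey each month yields the monthly data points Hess describes for the chart.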
What's the very first redesign question you need to answer? It's: How many patients do you have? "Many practices have no idea," says FP Mark Murray, president of Murray Tantau Associates, a consulting firm in Chicago Park, CA, that advises groups and hospitals on re-engineering. "You must know your panel size to predict demand." That's critical if you plan to adopt same-day scheduling, match patients with their physician of choice, reduce unnecessary ER visits by patients, and institute other improvements. (See "You mean I can see the doctor today?" March 20, 2000.)
It's vital, however, that you measure true demand. "If we ask a practice, 'What was your demand for appointments in 1999?' they might respond, 'Well, we had 543,622 outpatient visits last year; that was our demand,' " Murray says. "Wrong! That's not demand; that's supply. If we go back to, say, last April and look at the appointment schedule, we realize that most of those visits were generated in previous months. Many patients wait weeks or months to get in to see a doctor. So an April appointment might have been generated in February or March. Then the demand for access in April often gets deflected into May, June, and July."
To measure true demand for April, starting on April 1, see how many appointment requests you receive on that day, Murray advises. "Realize that these aren't patients who necessarily say, 'I want to be seen on April 1.' They're appointment requests for any time: April 1, April 2, April 10, or sometime in May or June. That's your true demand."
Your receptionist could use a simple calendar to note both the day on which appointment requests were made in a given month and the date on which those appointments were actually scheduled. Some computer scheduling systems can also capture this information.
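The key point in Murray's method is that demand belongs to the day a request was made, not the day the visit was booked. A minimal sketch of the calendar idea, with hypothetical request records:

```python
from collections import defaultdict

# Each record pairs the date a request came in with the date the
# visit was actually booked for (hypothetical data).
requests = [
    ("2000-04-01", "2000-04-01"),  # same-day appointment
    ("2000-04-01", "2000-04-10"),
    ("2000-04-01", "2000-05-02"),  # visit deflected into May
    ("2000-04-02", "2000-04-02"),
]

# Count demand by the day the request was MADE; the booked date
# is irrelevant to demand.
demand_by_day = defaultdict(int)
for made_on, booked_for in requests:
    demand_by_day[made_on] += 1

print(dict(demand_by_day))
```

By this count, April 1 demand is three requests even though only one of those visits occurred on April 1, which is exactly the supply-versus-demand distinction Murray draws.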
If you're part of a group, measuring demand includes measuring panel size for each physician as well as for the entire practice. Otherwise, says Murray, "you get a lot of mismatches where patients don't get to see the doctors they want to see. That, in turn, adversely affects access."
Patients forced to see doctors who don't know them have higher revisit rates, clogging the appointment schedules, he explains. Or they may decide to skip the office visit and head for the ER instead. From a patient's perspective, going to the ER may be more convenient. After all, there's no delay getting an appointment, and one unfamiliar doctor is the same as another. "In one group studied by the Institute for Healthcare Improvement, 85 percent of ER visits occurred during office hours," says Hess. "That reflects how difficult it is for patients to see their own doctors."
Problems are compounded when patients don't even know who their own doctors are. This happens when patients have so much trouble getting appointments they decide not to specify a personal physician. They've learned that if they're willing to see any doctor who's available, they can get an appointment much sooner. Unfortunately, this strategy can lower their quality of care.
"Practices with mammography rates of 42 percent are able to raise them to 95 percent when patients know who their doctors are," says Murray. "When a woman patient flip-flops through the system, no one will say, 'I think you need a mammogram.' If you really want to do right by patients, you must know which doctors in your practice are accountable for their care."
How do you find out which patients belong to which doctors? "If you're in a managed care situation and have a defined enrollment, at least you know which doctors those patients are assigned to," says Murray. To estimate the number of your fee-for-service patients, have your staff check scheduling records to find out how many patients with that kind of insurance visited your practice over the past 18 months. Then, when each of these patients next calls for an appointment, have your scheduler ask her, "Who's your doctor?" and note it. If the patient hasn't selected a personal physician, make sure she's assigned one. "This method will miss some people who don't visit you during that period, but you'll also over-count patients you saw, say, nine months ago, who have since switched to another practice," Murray says.
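The 18-month estimate reduces to deduplicating patient identifiers across visit records. A small sketch, with a hypothetical record layout:

```python
# Hypothetical visit records over the past 18 months:
# (patient_id, visit_date). A patient may appear many times.
visits = [
    ("P001", "2000-01-15"),
    ("P001", "2000-03-02"),
    ("P002", "1999-11-20"),
    ("P002", "2000-04-01"),
    ("P003", "2000-02-10"),
]

# Panel size estimate: each unique patient counts once,
# no matter how many visits she made.
panel = {patient_id for patient_id, visit_date in visits}
print(len(panel))
```

As Murray notes, the estimate both misses patients who didn't visit during the window and over-counts those who have since left, so treat it as "accurate enough" rather than exact.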
For a complete picture of your demand, you also need to know how many patients visit your practice who don't phone for an appointment with your scheduler. These include walk-ins, people who request appointments by fax or e-mail, and, in a multispecialty group, appointments made by schedulers in other departments. Finally, consider demand that you generate yourself: your return visit rate. If you don't have a computer system that can zero in on this information, have your scheduler put a tick mark next to a doctor's name each time one of his patients is given a follow-up appointment.
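The scheduler's tick marks for follow-up appointments translate directly into a per-doctor return-visit rate. This sketch uses hypothetical doctor names and visit data:

```python
from collections import Counter

# Hypothetical checkout log: (doctor, follow_up_booked?).
# True means the scheduler made a tick mark next to that doctor's name.
checkout_log = [
    ("Dr. A", True), ("Dr. A", False), ("Dr. A", True),
    ("Dr. B", False), ("Dr. B", True),
]

total_visits = Counter(doctor for doctor, _ in checkout_log)
follow_ups = Counter(doctor for doctor, booked in checkout_log if booked)

for doctor in sorted(total_visits):
    rate = 100 * follow_ups[doctor] / total_visits[doctor]
    print(f"{doctor}: {rate:.0f}% return-visit rate")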
Once you've determined your demand, says Hess, you should gather additional baseline information to uncover other problem areas in your office.
Once the redesign team has collected sufficient data to establish a starting point, it's ready to set targets for improvement. For example, if the team finds that only 70 percent of patients surveyed saw their physician of choice, that becomes the baseline measure. The team then tries to decide by how much that measure could be improved. Suppose they agree that 90 percent of patients could see their own doctor under normal conditions. After the team members devise and implement a strategy to boost the rate, they ask the schedulers to survey patients at regular intervals to measure progress toward their goal.
Even if a strategy appears to be working, beware of prematurely concluding that a problem is solved after collecting data for only a week or two. You may have had an atypically hectic or slow week. Collecting information over a period of months will more accurately reveal improvement trends. It will also account for seasonal differences and temporary glitches.
Because the various systems in a practice are interrelated, "Don't focus only on one clinical, cost, or satisfaction measure," cautions Dartmouth-Hitchcock's Vicki Kahn. "Try to measure concurrently different things that affect each other."
Suppose you're measuring what percentage of patients see their doctor of choice. Related measures include how long it takes patients to get an appointment, how many patients seek treatment at an ER, and how many patients say they're satisfied with their doctor's explanation of their treatment and problems.
Or say you're measuring the quality of doctor-patient communication by looking at the percentage of calls from patients seeking medical advice. Realize that those calls also have a bearing on telephone access, which can affect the number of new patients a practice will attract, and its profitability.
Post data charts and survey results on a "data wall" (this could be a bulletin board in the back office) for team members and other staffers to see. Such a graphic display helps make data more tangible. It also "gives team members an emotional investment in the data and in making improvements, because it enables them to see their progress," Hess says.
Some redesign teams paper the data wall with complex spreadsheets and a sea of other documents. Experts advise against this, however. Sophisticated spreadsheets are harder to understand than simple charts, and that tends to dampen enthusiasm. Also, too much information may obscure how one part of a practice is affected by another, a crucial connection for team members to make in order to devise effective improvements. Hess recommends posting measurement results for no more than five or six office systems at once.
You can also create a data wall in the waiting room for patients to see. It might display the results of satisfaction surveys. Giving patients survey feedback rewards them for their participation and makes it more likely that they'll complete future questionnaires. Posting simple charts that show the practice operating at a high level of efficiency and clinical quality (or at least demonstrating improvement) can build patient loyalty and attract new patients through favorable word of mouth.
Carol Pincus and Ken Terry, eds. Re-engineering your practice: Easy ways to track your progress. Medical Economics 2000;10:131.