You may be satisfied with them, but what you don't know about these doctors could hurt your patients.
You've probably been referring to the same specialists for years, and you feel pretty good about most of them. Your patients rarely complain about them; they do thorough workups and communicate the results to you; and as far as you know, they have excellent outcomes.
Unlike some doctors who refer to specialists because they're friends, golfing buddies, or former classmates, you've been conscientious about getting recommendations from other physicians, patients, and maybe even your own relatives before trying out specialists. Perhaps you've heard these consultants speak at CME meetings or worked with them in the hospital. And you carefully assess their care of your patients before routinely referring to them.
There's only one problem with this approach, say health care quality and credentialing experts: Referring doctors are evaluating specialists' care in a "data-free zone." Unless primary care physicians assist at surgery, these observers say, they're not usually in a position to judge consultants' clinical skills, except indirectly from their reports and patient feedback.
Performance data, such as mortality or complication rates from specific procedures, or measurements of chronic disease care, would give referring doctors a more objective yardstick. But neither inpatient nor outpatient performance data on specialists are available to referring physicians in most parts of the country.
Michael L. Millenson, a consultant at William M. Mercer in Chicago and author of Demanding Medical Excellence (University of Chicago Press, 1997), notes that many hospitals have outcomes data on surgeons, but don't disseminate them to referring doctors. Adds family practitioner Ronald S. Jolda, a medical consultant to the state of Massachusetts: "Hospital outcomes aren't available in a form where you can sift through them. You can't see how many carotids a surgeon did and how many of those patients had problems, such as strokes."
Would referring physicians pay attention to such information if it were available in risk-adjusted form? Reaction to the CABG performance data that's been publicly available in New York and Pennsylvania for several years suggests that they wouldn't. In one study, conducted in 1995, 87 percent of cardiologists in Pennsylvania said that the Consumer Guide to Coronary Artery Bypass Surgery, which lists risk-adjusted mortality for heart surgeons, had little or no influence on their referral recommendations. Other studies have shown that the New York CABG report card had little impact on referral patterns in that state.
Why are physicians so resistant to scientifically credible data? "It's a cultural issue. It's difficult for doctors to believe that other doctors can be accurately measured," replies Millenson. "And since they don't understand the statistics behind the measurements, you can't convince them that the case-mix adjustment makes sense."
Although physicians do glance at bulletins from their state medical boards, they generally don't factor disciplinary actions or malpractice suits into their referral decisions. None of the practicing physicians interviewed for this article knew that the Federation of State Medical Boards makes available on the Internet a comprehensive listing of disciplinary actions against physicians in every state going back to the 1960s (www.docinfo.org). And few of them say they'd consult that site, or any of the state-sponsored Web sites that feature medical board sanctions and malpractice settlements, the next time they have to select a new specialist.
Even if they had the time to investigate specialists' backgrounds, physicians tend to distrust malpractice and disciplinary data. They know that many suits against doctors are frivolous or are settled by insurers to save legal costs.
As for disciplinary actions, FP Richard E. Waltman of Tacoma, WA, says they need to be interpreted in context. The FSMB data bank lists restrictions on licenses, as well as the general category of each offense, such as quality of care, substance abuse, sexual abuse, and unprofessional conduct. But it doesn't give specifics on the charges against a physician.
Hospitals are reluctant to drop physicians

Many doctors depend on the hospital credentialing process to weed out questionable practitioners. "It's more stringent than anything I could do," says family physician Patricia J. Roy of Muskegon, MI. She adds that hospitals must be careful about granting privileges, because they're liable for the actions of their staffs.
But observers say that hospitals don't always rigorously credential or supervise physicians. "There's been plenty of evidence over the years that hospitals could do a much better job of vetting in the credentialing process," says internist David B. Nash, a professor of health policy at Jefferson Medical College in Philadelphia.
Douglas L. Elden, a Chicago-based attorney and credentialing expert, notes that hospital credentialing committees will generally check a doctor's background thoroughly before granting him privileges. The real problem, he says, and the reason health plans have never accepted hospital credentialing as evidence of quality, is that most hospitals are reluctant to terminate a doctor's privileges unless he's grossly incompetent or commits a flagrant ethical violation.
"Once a physician is on the staff, throwing him off is very difficult," says Elden. "Let's say someone was a good physician for 25 years, and now he's getting old. The doctors in the hospital administration are hesitant to drop him; they're afraid it could be them one day. That's where hospital credentialing falls off."
Elden has also heard hospital administrators say they can't afford to drop a physician who brings in a lot of business. And if the specialist is well-heeled, they know that ejecting him could cost a small fortune in legal bills. "By the time a hospital is ready to get rid of a doctor, he's really a bad apple," says Elden. "Even then, the doctor will mount such a vigorous defense that it can cost hundreds of thousands of dollars to throw somebody off the staff."
Hospitals are likely to discipline doctors before terminating them. But referring doctors may be willing to overlook these actions, says Lee J. Johnson, a malpractice and health care attorney in Mt. Kisco, NY. "In one case, a doctor was sanctioned for stealing a dialysis machine from the hospital and putting it in his office, as well as for using drugs," she says. "And the doctors who sat on the credentialing committee, internists and specialists alike, kept referring cases to this guy. Some of them were monitoring him for urine tests at the same time.
"When there's a credentialing dispute, the doctor usually thinks his competitors are controlling the hospital board and manipulating the process. So if a referring doctor sees a specialist being sanctioned by the hospital, he probably wonders whether it has something to do with competence or whether it's just politics."
Some doctors fear the consequences of investigating their specialists. Charles Davant III, an FP in Blowing Rock, NC, says he'd rather not know whether a specialist has had disciplinary actions or suits against him. "Because if there is a lawsuit, I can see myself being dragged in. The plaintiff's attorney will say, 'You referred to Dr. Jones, and you knew that four years ago his license was suspended for drug problems.' That's the kind of thing that comes up, even though he may be rehabilitated and perfectly capable."
However much or little individual physicians know about the consultants they use, it's unlikely their referrals will rope them into malpractice suits, says Johnson. "If a referring doctor were going to be held liable for the action of a specialist, the legal standard would be whether a reasonable physician would have known about the consultant's performance record. If the average physician doesn't research that at all, that lack of knowledge would be the standard."
Johnson can recall only a handful of suits brought against referring physicians. But she cautions that if you don't know any of the specialists on a health plan list, you're probably better off letting the patient choose one. Elden concurs, with one caveat: If you know a competent specialist in that field who's out of network, he suggests, you should recommend him to the patient. "That's the safest thing for a physician to say, and probably the most ethical," he says.
Physicians feel they can judge clinical skills

The primary care physicians we spoke with expressed strong confidence in the specialists they refer to and in how they choose them. Although our sources admitted that they usually find specialists through word of mouth, accidental meetings in the hospital, or CME conferences, they've all devised ways of evaluating the specialists' knowledge, skill, cooperation, and bedside manner.
Referring doctors list clinical competence as their top criterion, but they also rate nonsurgical specialists on availability, ability to communicate with patients, and the quality of their workups as reflected in phone conversations and progress notes. Most prize less tangible impressions of a consultant's personal style, as well. "At least for me, and I think this is true for a lot of other doctors, it's a matter of who you have a good working and an interpersonal relationship with," says FP Edward M. Yu of Mountain View, CA.
Realistically, how well can primary care physicians assess surgeons' clinical skills? In this age of hospitalists, fewer primaries do any inpatient work, and many insurers have cut back or eliminated payments for surgery assistance. So it's less common than it used to be for primary care physicians to directly observe the surgeons they refer to.
FP Richard Waltman of Tacoma, WA, laments that he no longer scrubs with surgeons routinely, because he doesn't get paid for it. "I still do it occasionally on the weekend or at night for a hip or an acute abdomen," he says. "But I miss more-regular contact with surgeons, because I got to see how they operated."
In Muskegon, MI, however, surgeons depend on primary care doctors to help out in the OR. Consequently, Patricia Roy has scrubbed with nearly every surgeon in town, and has decided not to send patients to some of them. "I once scrubbed with someone I'd never worked with before," she recalls. "I wasn't pleased with his surgical technique, his attitude during surgery, or his judgment." And this surgeon, she adds, had a very good reputation. "That doesn't matter. It's what you observe firsthand," she says.
Waltman believes he gets a good feel for the competence of proceduralists from patient feedback. "I've been in practice long enough to see what specialists do, to see the complications they get for colonoscopies or endoscopies. The patients will say, 'He hurt me and was abrupt,' or 'He didn't hurt me and was nice.' And you file those things away."
FP Scott R. Helmers of Sibley, IA, relies mostly on patient feedback and his impression of the specialist's evaluation in assessing clinical competence. But he recalls that, after one cardiac surgeon performed CABGs on his patients, there were "more cardiac wall infections than expected. We were a little worried about it. This particular surgeon left the area, but if he hadn't, we would have stopped referring patients to him."
Internist Catherine R. Landers of Skokie, IL, says that poor communication with specialists has been a bigger problem for her than their clinical competence. "Almost all the people I've referred to have made good clinical decisions and diagnoses," she says. "But there have been issues of interpersonal relationships. Even when I tried calling, I haven't gotten responses. And that's frustrating. If I page somebody twice, I should get a response. That would influence my referral patterns."
FP Paris E. Phillips of Jericho, NY, bases her judgments of consultants not only on the outcomes of her patients, but also on who's done well by members of her own family: "I have my own circle of specialists who I use for family members. They're the ones I tell the patients about. I say, 'You need a cardiologist? My father had quadruple bypass surgery a year and a half ago, and this is the doctor he used.'" Similarly, she refers to the general surgeon who removed her mother's gallbladder and did hernia repairs on her father and brother.
Managed care is a minor factor in most areas

Surprisingly, managed care hasn't affected referral patterns very much. That's partly because, in the last couple of years, both HMOs and PPOs have tried to offer employers the widest possible physician panels. Also, closed-panel HMOs never took root in some markets. So in many parts of the country, most specialists are on all the major plans.
There are notable exceptions, however. In Blowing Rock, NC, Charles Davant III complains that he often has to send patients 40 or 50 miles to see a specialist who participates in a particular health plan. The reason has as much to do with Blowing Rock's remote location as with the plans' contracting methods. Since there are only a handful of specialists in fields such as cardiology, dermatology, gastroenterology, and orthopedic surgery, those doctors have no incentive to join a plan. So Davant can't refer his managed care patients to local specialists unless they're willing to pay cash.
In Muskegon, MI, most specialists are on most plans, says Patti Roy. But nephrologists have refused to join managed care plans, so referrals to them are always out of network, she adds. The insurers cover those referrals anyway, because "even the most heartless of plans isn't going to make someone drive 50 miles three times a week for dialysis."
Aside from that, she says, her role as a care coordinator for HMOs has motivated her to work with specialists who will educate her on some of their techniques, so that she can do more for patients herself. If a specialist isn't willing to do that, she says, she'd be less likely to refer to him.
Managed care on the West Coast has had a greater impact on referrals than in other regions. But that effect has less to do with patients joining or switching plans than with restrictions on the numbers of specialists that primary care doctors can refer to.
In large multispecialty groups, for instance, primaries don't have to switch specialists when patients change plans, because they and most of their consultants are covered by the same contracts. But they usually have to refer within their group, unless it doesn't include the kind of consultant they need.
IPAs, similarly, offer referring doctors a minimal choice of specialists. For example, the Bay Area Community Medical Group, a large IPA based in Santa Monica, CA, takes full professional risk from HMOs. Consequently, primary care physicians in the group are expected to use a relatively small number of specialists who've agreed to accept the IPA's discounted rates.
FP Bernard J. Katz, the IPA's medical director and CEO, notes that only a portion of the specialists who participate with any HMO are available to Bay Area Community's members if they pick a primary care doctor in the IPA. On the other hand, if the patients switch to a new plan that contracts with Bay Area Community, they're assured of access to the same specialists, because all the consultants who do business with the IPA contract with the HMOs through Bay Area Community.
Will performance data be used in the future?

Risk contracting that involves professional services has hit a wall, and global capitation contracts that cover inpatient care are fading fast.* Yet David Nash of Jefferson Medical College in Philadelphia believes that global risk arrangements will stage a comeback someday. When that happens, he says, a lot of primary care doctors will be economically motivated to use performance data in choosing specialists.
"Only when we have the incentives aligned will we see a change in referral behavior," he explains. "If you refer to people who are less competent, you're going to have a longer length of stay, more complications, and more medical errors, and those consequences will be felt economically by the referring doctor if his group is taking financial risk for care."
Nash distinguishes this approach from the one used by many capitated groups, which encourage primary care doctors to refer sparingly and only to specialists who utilize tests and procedures judiciously. "When doctors make referrals based on performance data, they're going beyond utilization and rewarding specialists for results," he says.
If reliable performance data become available, Katz agrees, physicians should use them as one factor in picking the best specialists. But he isn't so sure that at-risk groups like his would benefit financially.
For example, the Pacific Business Group on Health and a California state agency are jointly preparing to release outcomes data on hospitals that perform bypass surgeries. If they also rated cardiac surgeons, says Katz, and the HMOs began to select heart surgeons from PBGH's "A" list, "then the plans would have to be financially responsible for bypass surgery. They can't delegate that responsibility to IPAs and groups, and then say, 'You can use only these doctors,' who refuse to give you any discount."
He also points out that risk-taking groups don't necessarily save money by providing first-rate chronic disease care. "If I have a diabetic patient for a year or two, and then that patient switches IPAs, am I going to save money on dialysis down the line for that patient by providing good diabetic control now? No. The patient won't be my enrollee. So at an individual medical group level, the argument falls apart."
Even in the absence of economic incentives, some physicians say they'd welcome specific performance data on specialists. For instance, even though Waltman and Helmers are both skeptical about outcomes data in general, they'd like to know how many colonoscopies performed by individual gastroenterologists resulted in perforated colons. Waltman refers to two established GI groups, and he trusts them to hire good people. But if he had data showing that one had a much higher rate of perforations, he says, he'd ask them what was going on.
Another school of thought is typified by family practitioner John Egerton of Friendswood, TX. Even if outcomes data on individual doctors were available, he says, he wouldn't use the information, "because I wouldn't know if it was accurate. I know some very good doctors who've had bad outcomes, but it's not because they're bad doctors. It's because they've had very sick patients."
What if the data were properly risk-adjusted? "I'm a little wary about statistics and putting everything in a graph," says Egerton, who's been in practice for 26 years. "I'm old enough to remember when you didn't have all this computerized stuff. You just met somebody and you had an instinct. If the patient came back and said, 'Yeah, he's okay,' and nobody came to any harm, that was enough."
Nevertheless, plans, employers, and patients are all clamoring for more and better performance data on physicians. That's why FP Edward Yu, who's been in practice for six years, believes that performance profiles on physicians will eventually become publicly available. In the process, he says, referring physicians will also gain access to data on the specialists they use.
"I have patients who need CABGs and are looking for centers of excellence and for top doctors," he says. "They want to know how many cases the surgeon has done, his success rates, and so on. We can provide those numbers. Although they may be hard to interpret, doctors have a responsibility to digest that data and select who they want the patient to go to. So that's coming."
*See "Has capitation reached its high-water mark?" Feb. 19, 2001.
If comprehensive outcomes data were available on specialists, would you use it as a main factor in referring patients to them? Visit www.memag.com to cast your vote.
Ken Terry. How good are the specialists you refer to?