30 November 2012

What malpractice looks like

I review a lot of cases in my professional life. Some of them are just ones that our QA group comes across in our practice. Some are cases related to our liability policy. Some are cases I'm sent for review, or educational cases I present. We see a lot of cases which could have been done better, or in which the documentation is imperfect (or even downright bad). But, fortunately, most of the cases that pass across my desk are within the standard of care.

We get into a lot of arguments over when care provided (or documented) falls below the "standard of care." This term is widely misunderstood, especially in academic circles, and this causes a lot of controversy. Many docs interpret the "standard of care" to mean "best practice." So any care that deviates from best practice, they contend, is prima facie a failure to meet the standard of care (and hence, malpractice).  Unfortunately, this is the interpretation that plaintiff's experts also prefer to embrace! However, it's important to understand that "standard of care" is a legal term with a clear definition that is much more expansive: the level at which an ordinary, prudent professional having the same training would practice under the same or similar circumstances. So the standard of care is not only not perfect care, it is not even average care, because by definition that would imply that 50% of care is below the standard.

This is a pretty low bar, actually. As I explain to our docs and trainees, you are allowed to be wrong. You are allowed to make errors. You are not allowed to be negligent. There is a difference. This is all, of course, limited to the abstract world of theory and pre-trial evaluation. Actual juries have notoriously variable determinations as to the standard of care. But when reviewing cases in advance, deciding which to defend, or what you would testify in favor of, it's a good guideline.

The cases I review tend (obviously) to involve bad outcomes, and generally present with varying degrees of imperfection, but it's pretty rare for me to see a case and stone cold identify it as malpractice. Part of this is because most docs are not, in fact, negligent, and part may be because I have a bias towards the defendant physicians. Most of the deficiencies I see involve a diagnostic error, a minor lapse that probably did not affect the outcome of the case, or simply poor documentation of the thought processes that drove the decision-making.

Sometimes, though, there is a case that you review and immediately reach for your checkbook.

This is an example of one such case.

A 19-year-old male presented to the ER with a fever and headache. He was generally well-appearing, though febrile and tachycardic and as ill-appearing as a young person with the flu typically appears. He had no focal symptoms to suggest a source for the fever (i.e., no cough or sore throat), just generalized fatigue and body aches. He was alert with a totally normal neurologic exam. He had no meningismus; his neck was described as supple on two separate exams. He was given 2 liters of IV fluids and Tylenol, after which his vital signs normalized and he felt much better. He was re-examined twice and demonstrated improvement on both exams, which were well documented and timed. Nursing notes agreed that the patient was much improved. The doc, a conscientious and compulsive sort, did a fairly thorough work-up. Chest x-ray was normal, as was bloodwork, with the exception of a WBC of 11,000, just at the upper limit of normal. Influenza swab was negative. Blood cultures were sent, but antibiotics were not given. Because of the severity of the headache, he also did a spinal tap, which was normal. The patient was discharged home in the care of his parents with instructions to follow up with his doctor the next day for a recheck if he wasn't feeling better, and a voicemail was left with the PCP to ensure access to follow-up care. The discharge diagnosis was "Fever, uncertain source; possible viral syndrome."

So... before reading on, do you see any inadequacies in this case? I don't. If anything, the case was more aggressively worked up than was indicated, and certainly more workup was done than I generally would have done.

Except for one thing. The doctor documented a "normal" spinal tap when in fact the lab reported 110 WBCs, mostly neutrophils. This indicates that the patient had meningitis, quite probably bacterial.

More baffling, the doctor knew about this. The lab called the charge RN, and the charge RN notified the doctor, who added on CSF PCR studies for viral pathogens.

And yet he discharged the patient. Didn't call the diagnosis meningitis. Didn't tell him there was a possibility of serious illness. I have no clue why. It's baffling.

Now it's really easy to bash him as incompetent and dangerous, but I know this guy well. He's an MD/PhD who is double boarded in EM and critical care. He's smart as hell, and generally a great and conscientious physician. We don't know what happened here. Of course this case went on to the predictable bad outcome. The doc does not remember the case, so he can't really explain or defend it either. One can only presume that it was busy and he got confused or distracted, maybe had the discharge teed up and ready to go, expecting the negative LP results, and failed to change course on getting the results. It is, in any event, as clear-cut a case of a medical error as I can ever recall seeing. Most of us will never see such a case, unless you're doing expert review.

Now ask yourself: if he had not done the LP, the outcome would have been the same, and the allegation of negligence would still have been there. Fever and headache — how can you justify not doing the LP? If you've been in the trenches, though, you know that everyone with the flu also has a headache. It's part of the febrile syndrome. But the decision whether or not to LP is a judgement call. You can make a wrong judgement without being negligent. I would not have done the LP, based on the case as presented. I'd have been wrong, but in such a case that decision would have been well within the standard of care.

This is also a trend that I see when reviewing series of closed cases where the doctor lost in court or settled. Sure, there are cases where the care was fine but it settled because of a sympathetic plaintiff, or where a jury miscarried justice. But remember that the odds that a physician will prevail in a malpractice case are about five to one. We almost always win. When we lose, more often than not, there was a "WTF?" moment when you review the doctor's actions. It makes it really hard to present these cases for educational purposes: the docs reviewing the case can't put themselves in the position of making such an egregious error. The only possible conclusion is that the doctor who screwed up was an idiot or lazy or a "bad doctor." It's not true, though. There are bad doctors out there, but there are many more good ones. And even the good ones are human, subject to cognitive biases and errors, no matter how smart we are. And ER docs all bear the burden of a distracting environment with systems prone to error (hand-offs, triage cuing, overcrowding), working night shifts, and seeing patients who may not be able to tell us what's going on. A set-up for errors.

In the last decade I have cared for about 15,000 patients, and I am sure that I have made an error just like this. I must have been lucky, since mine didn't blow up in my face. Maybe I caught it, or a nurse did, or it was for a less lethal condition. If you're honest with yourself, know that you will make errors like this, too.

So bear this in mind, when you think about "malpractice" and the "standard of care." Negligence, when you see it, is usually not debatable; it's obvious and flagrant. If there's a reasonable case to be made that the care provided was within the standard, it probably was an ordinary error or a mistake of judgement. This is not to say you will win in court! But perhaps you can think of it like pornography, in the words of Justice Potter Stewart, "I know it when I see it."

27 November 2012

Vindication

I love it, I really love it, when one of my strongly-held prejudices is borne out by actual, you know, facts and science.

For years, I have been arguing against the practice of performing a routine lumbar puncture (aka LP or spinal tap) in patients with the "worst headache of their life." This is done after a CT scan of the brain, typically, to look for a subarachnoid hemorrhage (SAH). The SAH is feared because in some cases they represent a leaking aneurysm which is at risk of bursting, often with devastating or lethal consequences.

The need to do the LP is one of the sacred cows of Emergency Medicine, written in stone, and has been for longer than I have been practicing. The reasoning is that SAH is dangerous, the CT scan is imperfectly sensitive for SAH, whereas the LP is highly sensitive (in fact, the "gold standard") and relatively easy and safe to do. This was perhaps more true long ago when the resolution of a CT scan was lower than it is with modern machines, but the dogma remains. There is, however, a huge variation in actual practice out there. Many docs seem to do very few LPs for headaches, and some seem to LP everybody. I performed an unscientific survey of ER docs on twitter and found that about half "always" still do the LP or are strongly inclined to do it routinely. Some were, in fact, required by their employer to do the LP!

Now my experience over the years was that the LP seemed to be a horrific waste of time. It was traumatic for the patient, consumed a lot of ER resources, and never ever showed anything. Twice -- twice! -- in a decade I spotted the unicorn and had a genuine negative CT followed by a positive LP. In both cases, the patient went on to have negative angiograms, so either the LP was a false positive or they were non-aneurysmal bleeds (which, as it happens, do not require treatment).

So I dug into our data. Pulling a year's worth of cases, I found that we had about 2,800 headaches present annually, slightly under 3% of all of our visits. 18 of those were subsequently diagnosed as SAH, for a prevalence of about 0.6% among all comers with headache. But that's not entirely fair, since over half of the headaches were either migraine-type headaches or other chronic/recurrent headaches, and these folks are not those for whom we are highly suspicious of SAH. Of the headache patients, about 900 had CT scans ordered. While I might argue that not all of those truly needed a CT, and certainly not all would have gotten one in other countries, for this discussion it's reasonable to use that as an index of how many headache patients we had for whom our doctors were worried about SAH. So we have about a 2% prevalence of disease in our "acute" headache population (18/900). The traditional data was that CT was about 90% sensitive for SAH, so the negative predictive value of a CT is very good -- somewhere well north of a 99% likelihood that the patient does not have SAH. Now you can play with the numbers and tighten it up a bit by more rigorously screening out headaches that are not "worst ever" and not sudden onset, but even if you get to a pretest prevalence of 10%, which would be quite high, the NPV is still very good, certainly better than we can rule out other serious diseases like PE or unstable angina.
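For anyone who wants to check my arithmetic, here is a minimal sketch of the post-test calculation, assuming the traditional ~90% CT sensitivity and treating CT specificity as essentially perfect; the prevalence figures are just the rough estimates above:

```python
# Negative predictive value of a negative CT for SAH, as a function of pretest prevalence.
# Assumptions: CT sensitivity ~90% (the "traditional" figure), specificity treated as ~100%.

def npv(prevalence, sensitivity, specificity=1.0):
    """Probability that a patient with a negative CT truly does not have SAH."""
    true_negatives = (1 - prevalence) * specificity
    false_negatives = prevalence * (1 - sensitivity)
    return true_negatives / (true_negatives + false_negatives)

# 0.6% = all headache visits, 2% = headaches that got a CT, 10% = a deliberately high estimate
for prevalence in (0.006, 0.02, 0.10):
    print(f"pretest prevalence {prevalence:.1%} -> NPV after negative CT: {npv(prevalence, 0.90):.2%}")
# Roughly 99.9%, 99.8%, and 98.9%, respectively.
```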

But this was very rough math from a single practice with small numbers. So it is not exactly something I was able to endorse as a standard of care. Just contextual information I could offer a patient to guide them in deciding whether or not to accept the LP I was offering. Most declined, but some preferred the assurance that the gold standard test offers.

I've been quite pleased, though, to see more new and more rigorous data emerge on the topic. It seems, ever so slowly, the tide of opinion is turning against the routine LP. First, David Newman over at SMART EM did a great deep dive on the topic, showing that for the LP, the Number Needed to Treat is somewhere around 500, which means that you'll do a lot of LPs to find a single SAH in a patient for whom it will make a difference. (Updated podcast on SAH here - worth listening to!) Then there was the Perry article in the BMJ last year, which showed that the sensitivity of early CT is very, very good, perhaps as high as 100% for SAH. Then there was this August 2012 article in the highly influential journal Stroke, authored by none other than Dr. Jonathan Edlow:

Diagnosis of Subarachnoid Hemorrhage: Time to Change the Guidelines? [...] Given this analysis, we believe that practice should change. Neurologically intact patients who present with thunderclap headache and undergo CT scan within 6 hours of symptom onset no longer need an LP to exclude SAH if the CT scan is negative.
This is the same Dr Edlow who was lead author on the ACEP clinical policy, only 4 years ago, which did recommend routine LPs! (Link: PDF) The times, they are a-changin'!

So I feel comfortable claiming victory here. I was right all along and shame on you for ever doubting me.

(insert nuanced discussion here about shared decision-making with patients and the need to assess each patient as an individual.)

26 November 2012

This is what it takes to get published nowadays

For Pete's sake. This popped up in my newsfeed today, with multiple lay media citations:

Pediatric Inflatable Bouncer–Related Injuries in the United States, 1990–2010 
METHODS: Records were analyzed from the National Electronic Injury Surveillance System for patients ≤17 years old treated in US emergency departments (EDs) for inflatable bouncer–related injuries from 1990 to 2010. 
RESULTS: An estimated 64 657 (95% confidence interval [CI]: 32 420–96 893) children ≤17 years of age with inflatable bouncer–related injuries were treated in US EDs from 1990 to 2010. From 1995 to 2010, there was a statistically significant 15-fold increase in the number and rate of these injuries, with an average annual rate of 5.28 injuries per 100 000 US children [...] Most injuries were fractures (27.5%) and strains or sprains (27.3%), and most injuries occurred to the lower (32.9%) or upper (29.7%) extremities.  
CONCLUSIONS: The number and rate of pediatric inflatable bouncer–related injuries have increased rapidly in recent years. This increase, along with similarities to trampoline-related injuries, underscores the need for guidelines for safer bouncer usage and improvements in bouncer design to prevent these injuries among children.
Sweet Jesus on a pogo stick. So you mine a database for some trivial but catchy mechanism of injury and slap a ramshackle statistical analysis on it (somewhere between 30K and 100K injuries? That confidence interval is as wide as a barn door) and presto blammo you're in Pediatrics and USA Today and on CNN solemnly intoning on the dangers of letting your kids go to Jump Planet.

Is this where we are as a society? Have we run out of actual public health concerns that we find this sort of minutia worth researching? Or have car crashes and gun accidents and drug overdoses gotten too boring to publish and report on? Or, I suspect, is the culture of academia so degenerate that the mandate of "publish or perish" overwhelms common-sense judgement in deciding whether a topic is publication-worthy? Yup, that's it. Bring on the trivia!

Next thing you know they will be warning you of the dangers of tripping over your pets. Oh, wait. They already did that study.

That meteor can't get here soon enough.

EDIT: Great minds etc etc etc

22 November 2012

Be careful out there

Cute little PSA -- Dumb ways to die

21 November 2012

On objectivity

Now, I'm not a radiologist, an oncologist, or an epidemiologist, so I am not claiming any expert opinion on the science. But I was not surprised to see yet another major article released regarding the value of early detection of breast cancer via screening mammography -- it tends to detect a lot more early cancers, but doesn't seem to reduce the number of advanced cancers.

From today's NEJM:

The introduction of screening mammography in the United States has been associated with a doubling in the number of cases of early-stage breast cancer that are detected each year, from 112 to 234 cases per 100,000 women — an absolute increase of 122 cases per 100,000 women. Concomitantly, the rate at which women present with late-stage cancer has decreased by 8%, from 102 to 94 cases per 100,000 women — an absolute decrease of 8 cases per 100,000 women. With the assumption of a constant underlying disease burden, only 8 of the 122 additional early-stage cancers diagnosed were expected to progress to advanced disease. ... breast cancer was overdiagnosed (i.e., tumors were detected on screening that would never have led to clinical symptoms) in 1.3 million U.S. women in the past 30 years. We estimated that in 2008, breast cancer was overdiagnosed in more than 70,000 women; this accounted for 31% of all breast cancers diagnosed.
This is not a new finding at all. Numerous previous studies have been published questioning the value of routine mammography screening in younger women, and a major controversy was ignited a few years ago when the experts at the USPSTF finally recommended that women under the age of 50 not be routinely screened with mammograms.

As I disclaim above, I do not have a vested interest or a strong opinion on this, though I do have a bias towards accepting the conclusion as the body of science accumulates. But one thing that I noted in the media coverage of this newest study was this statement which was almost universally cited:
The American College of Radiology issued a statement saying the report was "deeply flawed and misleading"
While I understand that journalists should try to present both sides of an issue, especially one which is so controversial and emotionally charged, maybe an organization which has such a strong, vested, economic interest in the value of mammography might not be the most credible source to turn to for an expert opinion? As Upton Sinclair famously said, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"


Discharge a PE? That's crazy talk!

So I recently sent home a patient with a Pulmonary Embolism (PE) for the first time. Or perhaps I should say that it was the first time I've knowingly sent home a patient with a PE, but that's neither here nor there.

This was an unusual case, to be sure. The patient was young and healthy, a triathlete in exceptional condition. He had had arthroscopic surgery on his left knee about a month earlier, and a few days after that developed a sharp pleuritic left chest pain. The pain was quite severe, but he ignored it for about three weeks until finally, since it wasn't going away, he presented to his doctor, who diagnosed the PE on CT and sent him to me for treatment.

The PE was small but not tiny, segmental as I recall. He otherwise looked great, with no tachycardia or shortness of breath. Functionally, he was doing great. He wasn't back to running yet, but he was cycling and swimming and performing at about his usual level. So I guess that made him functionally "well-preserved." Given that he had symptoms for over three weeks, I guess that qualified him as stable, so we started him on low molecular weight heparin (LMWH) and sent him home.

And I suspect that this is where we are going in the future - outpatient management of stable PE patients.

I threw out the question on twitter at 2AM, and woke to find a vigorous conversation ongoing on the topic among ER physicians on three continents, including one principal investigator of a major trial on the topic. Twitter is awesome. You can read the conversation, in part, here on Storify. Michelle Lin over at Academic Life in Emergency Medicine put together a PV Card on the topic and received some more feedback. The consensus was that most non-US ER docs have already embraced, or are beginning to embrace, the concept of risk stratifying and discharging some PE patients, while US practice has not moved much and is deeply skeptical of the idea.

Can we safely send home some PE Patients? 

There are many patients with PEs who are clearly ill. They're easy to spot if you've a smidgen of clinical judgement - they're dyspneic, tachycardic, hypoxic, hypotensive, etc. There is a nicely validated scoring system to sort out those who are more likely to have a bad outcome, and presumably, these folks are the ones who would benefit from hospitalization. But, of the well-appearing PEs with lower risk, the risk is still not zero. There are some people who present with small clots who will go on to have recurrent embolic events and die. We've all seen it. Is it possible to quantify how commonly that happens? More importantly, is it possible to predict which of the well-seeming patients are more likely to have these bad outcomes?

There is some research out there to support a selective approach to outpatient management of PEs. There was this study, which supported the safety of early discharge. More recently there is the Hestia trial, a prospective study supporting the safety of outpatient treatment, and one unblinded randomized controlled trial which also supported outpatient management. If you haven't already, I would strongly encourage you to listen to Rob Orman's ERCast podcast on this topic.

I would also add that the value of inpatient treatment as currently practiced seems limited. The well-appearing PEs in the US tend to get a very brief inpatient stay, less than 24 hours, which I suppose might screen for stability, but I'm not sure there's any evidence to support the utility of the brief admission. Talking with some European docs, not only is outpatient management common over there (in some countries), it can take 3 days to get a CT-PA, so in many cases they are discharging suspected PEs on LMWH until they get their study, and if it's positive then they get admitted. (Which makes no sense at all, but there you have it.)

The signs seem pretty clear: an objective risk stratification score like PESI, plus some good old-fashioned clinical judgement (size and location of clot, total clot burden, risk indicators maybe not built into PESI), will probably allow us to safely identify and discharge low-risk patients with PE. But can we get there? I'm not sure. The culture of the ER, especially with a perceived high-mortality diagnosis like PE, is highly risk-averse. Merely mentioning the notion elicits gasps of horror from my colleagues, and mutters of "over my cold, dead body." A further, and larger, obstacle to changing practice is our zero-risk-tolerance, highly litigious medical environment. Who wants to be the first ER doc sued for sending home a PE? Plaintiffs' experts will be lining up around the block to testify against you.
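To make "risk stratification" concrete, here is a rough sketch of the simplified PESI (sPESI) criteria as I understand them. The thresholds reflect my recollection of the published score, and the example numbers are purely illustrative, not a clinical tool:

```python
# A minimal sketch of simplified PESI (sPESI) scoring: one point per criterion,
# and a score of 0 is conventionally treated as "low risk." Thresholds are my
# recollection of the published criteria; the example vitals below are made up.

def spesi_score(age, has_cancer, chronic_cardiopulmonary_disease,
                heart_rate, systolic_bp, o2_saturation):
    criteria = [
        age > 80,
        has_cancer,
        chronic_cardiopulmonary_disease,
        heart_rate >= 110,
        systolic_bp < 100,
        o2_saturation < 0.90,
    ]
    return sum(criteria)

# A healthy young triathlete with normal vitals (illustrative numbers, not from any chart):
score = spesi_score(age=30, has_cancer=False, chronic_cardiopulmonary_disease=False,
                    heart_rate=68, systolic_bp=124, o2_saturation=0.99)
print(score, "point(s) ->", "low risk" if score == 0 else "not low risk")
```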

And this is a problem. We know that some people with PE will suffer recurrent embolic events despite anticoagulation, though it's a small number. Being hospitalized will not prevent the recurrent embolization, though it may provide earlier detection and therapy. Since we do not know in advance who among the low-risk group will suffer recurrent emboli, it's a catch-22. You can admit them all, a very large number of patients, to detect a very rare complication, or send them home and accept the risk that when a complication does happen, you will be blamed for the decision to discharge.

I think we are not ready for prime time here, but it's coming. US docs will demand better data before warming to the notion. Strong institutional support will be needed from hospitals, meaning defined care protocols supporting the practice, in order to convince skittish doctors that they have the backing of the facility in the event of a bad outcome.

You've been warned.

15 November 2012

The Catch-22 of documentation fraud

I just wanted to expand on something I wrote yesterday, which relates to my other sort-of-recent post on upcoding. I wrote, about scribes and compliance:

Knowing that the scribe cannot document a complete ROS unless I actually did that ROS, I am more compulsive about making sure I hit all ten systems. (Even when it's not clinically relevant. Such is the Kafkaesque world we live in.) And I make sure to do a full exam where before I may have elided over a few systems. This is, of course, only for cases where the complexity of the case will justify a service level requiring the complete H&P. 
This hits at the heart of the upcoding debate. Remember this front-page article in the New York Times from six weeks ago, in which the increased billing levels of ER doctors are asserted as prima facie evidence of fraud and abuse, and the follow-up in which the powers that be asserted their intent to reclaim these hundreds of millions of dollars in "inappropriate payments." We are not looking at a hypothetical threat here, and the financial risk to care providers is enormous.

The rules, for those not familiar with them (and who the hell would be reading a blog post about medical coding if they weren't?), are that to bill at a level 5, which is the highest ordinary level of service in the ER, the physician must document the following:

  • An extended history 
  • A complete review of systems
  • A comprehensive exam 
  • High complexity medical decision-making

In order to qualify for a level 5, all of these must be met, but the sine qua non is the medical decision-making (MDM). This is, in fact, the ultimate driver of the visit level. MDM consists of three components: the number of diagnostic options (i.e. your differential), the amount of data you must review (i.e. tests, re-examinations), and the risk inherent in the presenting problem. If the MDM isn't met, no matter how nicely documented the rest of the chart is, a high service level may not be justified. To put it another way, an ankle sprain, no matter how thoroughly documented, is still just an ankle sprain.
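To illustrate the logic, here is a toy model of how I understand the rules; it is a simplification for the sake of the argument, not the actual CMS/CPT coding algorithm:

```python
# A toy sketch of the level-5 logic: every component must clear the bar,
# and the MDM is the one piece that documentation alone cannot buy you past.
# Hypothetical representation of the concept, not the real coding rules engine.

def qualifies_for_level_5(history, ros_systems_reviewed, exam, mdm_complexity):
    return all([
        history == "extended",            # an extended history
        ros_systems_reviewed >= 10,       # a complete review of systems (10+ systems)
        exam == "comprehensive",          # a comprehensive exam
        mdm_complexity == "high",         # high-complexity medical decision-making
    ])

# A meticulously documented ankle sprain still fails, because the MDM is not high complexity:
print(qualifies_for_level_5("extended", 10, "comprehensive", "low"))   # False
print(qualifies_for_level_5("extended", 10, "comprehensive", "high"))  # True
```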

Previously, it was common to have cases "downcoded" when a doctor had a high-complexity MDM but slipped up on the other items, most commonly on the ROS. Over the years, physicians have gotten better educated about the system and more sophisticated at making sure the ROS and other requirements have been met so that the billing level can, appropriately, be determined by the MDM.

This rankles. Always has. When I see a patient with chest pain and a heart attack, in order to get paid appropriately I have to ask a bunch of completely irrelevant questions about unrelated systems: do you have burning when you urinate? Do you have any rashes? Nobody would argue that the complexity and risk don't justify the level 5, but I have to document a bunch of medically unnecessary trivia to compliantly bill at the level the MDM deserves.

And this is where the bureaucratic hassle now becomes a catch-22: "medical necessity." Medicare considers it fraud to bill for things which are medically unnecessary. If I see an ankle sprain and order blood tests and a CT scan to try and get the bill up to a high level, that legitimately is fraud, because the tests ordered are not medically necessary. But what is happening now is that Medicare (in the form of the private contractors who administer it regionally), along with some private payers, is reviewing charts and claiming that physicians are fraudulently upcoding because we are documenting complete Reviews of Systems when they were not ... wait for it ... medically necessary.

To be clear: Medicare set the rules, and made them arbitrary and disconnected from reality, and now is coming back and punishing physicians for attempting to follow the rules to the letter of the law.

And the format this takes is scary. You get a letter from the Medicare carrier (or a RAC or a Medicare Advantage administrator) telling you that you've been reviewed, found guilty of upcoding, and this finding, based on a handful of charts, is extrapolated back several years. The result is a large demand for reparations, usually in the mid-to-high six figures. The physician group can either write a check or lawyer up and argue it chart by chart in front of an administrative law judge.

What I hate about this is the underlying dishonesty. This is about saving money. I get that, and that is in fact a reasonable goal. Healthcare is astoundingly expensive, and as a society we need to ratchet back the expense. If there's an argument to be made that physicians are paid too much, then let's have that debate on its merits. But the attempt to save money by harassing physicians and exploiting the contradictions within the rules that the government itself wrote is beyond maddening.

14 November 2012

Write this down for me

I have a lovely pen. It's a Mont Blanc Meisterstück fountain pen. My group bought it for me on my tenth anniversary as a partner in our Emergency Medicine practice.



It's a luxury I would never have paid for myself, though I have loved and used fountain pens since I was in college. Ironically, about the time I got it, the window of opportunity to use it in my professional life closed. For a decade, we had a hybrid paper-and-dictation documentation system, but around the time I hit my milestone, we went to an Electronic Medical Record (EMR). And with that, I never again had to touch pen to paper, except to sign the odd prescription. Such is life.

I am a computer guy, tech-savvy and fearless, and I was one of the few docs who saw the move to an EMR as a good thing. My documentation improved, and now that we are on Epic I would say it's even better. As I am a quick typist, the workload of documentation was only modestly increased by the transition to full physician documentation in the EMR. The other docs in my group varied in how well they adapted, from a few whose productivity improved, to the many who accepted it with grumbles and minor complaints, to a few outliers who simply refused to use it at all.

Recently, though, we started a pilot program using medical scribes.

Honestly, I resisted the scribe initiative for years, though there were a few docs who really wanted them. I wasn't opposed, but I was too busy to do it, and it wasn't high enough on my priority list to make it happen. It finally happened when I challenged one of our younger, energetic docs to "make it happen," and she went out and did just that. Very impressive initiative. She formed a committee, put together a business plan, had presentations from scribe vendors, took competitive bids, and soon enough there were young enthusiastic faces greeting us in the ER. I watched, bemused, from the sidelines for a couple of months and finally took the plunge and signed up for a scribe myself for a few shifts.

These are my thoughts and observations so far, after about a dozen shifts with my own personal scribe.

First, the general structure of the program, for our group. We pay a flat hourly rate to a scribe vendor. The vendor recruits the scribes from a local university, mostly pre-med students, and manages all the HR functions associated with such a program. Docs who are interested in having scribes sign up and choose which shifts they want a scribe for. The cost of the scribe is deducted (pretax) from the doc's individual paycheck. The program is entirely voluntary and about a third of our docs have signed up so far, usually just for the busier shifts.

The social aspect of having a scribe is more than a little weird, though I got used to it quick enough. I added another line to my standard introduction: "I'm Dr Shadowfax, and this is Jenny, who is working with me today." Almost never has the presence of the scribe occasioned any further comment or discussion. The scribes step out of the room for pelvics or other uncomfortably intimate exams and are generally invisible during the H&P (hidden by the large monitor of the computer on wheels they bring with them). During the physical exam, I verbalize what I'm seeing/doing, as if I am talking to the patient. "Your lungs are clear and your heart is regular without murmurs." This allows the scribe to document my exam in real time, and, from what I can tell, patients seem to like it, since they are getting a sense of what I am looking for and seeing. If there are "issues" such as psych, substance abuse or simply an unpleasant patient, I'll wait till we're out of the room to tell the scribe what I want documented.

I've never had a secretary or personal assistant before and have always prided myself on self-sufficiency, so it feels odd to have someone whose whole job is to do the little scut work (like putting a chart in the rack or pulling reports off the fax machine) for me. I can do that perfectly well myself. I can also document perfectly well myself. Better, in fact, than most. Getting over the idea of someone else doing "my" work for me has been and remains probably the biggest barrier for me in fully accepting the scribe. But these small efficiencies are of course the whole purpose of having a scribe in the first place, so I am getting over that.

The workflow is quite different now. It's actually very pleasant. I have the freedom to simply sit down and talk to the patient. I can take a bit longer and have more of a free-flowing conversation. I'm facing the patient, not facing a computer screen, I'm not making notes on a clipboard, and I'm not frantically trying to remember the necessary data points for the chart. I just chat. I feel like I have more mental energy to spend on the patient and I can simply forget about the chart, confident that the scribe is capturing the important data points. Simply put, I can focus on the patient, and I feel like that allows me to be a better doctor. I suspect, though I have no proof, that it also helps with patient satisfaction, which matters a lot these days.

The quality of the documentation is a little more variable. It's hard to let go of control of the chart. There are some odd little verbal tics some of the scribes have that I would never use. To me, reading these charts is like fingernails on a chalkboard, though they're perfectly accurate and acceptable. Sometimes a really important historical point gets left out of the chart because the scribe didn't realize its significance. It is very important to proofread the charts and make sure they say what you need them to say. I'm learning to "let go" and not spend so much time editing each chart that it negates the point of having a scribe in the first place. And I think the scribes, as they learn, are getting better and better at picking out the important bits of the conversations they are documenting. When there is an important point I want emphasized, I can simply repeat it back to the patient as a cue that I want it verbatim in the chart, and if I note an omission I review it afterwards with the scribe as a "teaching point," as I would with a med student. Since they are all pre-med, they really seem to appreciate it. One of the best parts (and a pleasant surprise) was when I reviewed my charts and found entries like:

1645 - patient re-evaluated. Abdomen still nontender. Taking po well.
or, 1015 - neurosurgery paged. 1025 - Dr Shadowfax speaking with Dr Jones, who requests MRI

Stuff that I never before had the discipline to document and time is now in the chart 100% of the time. This is a huge benefit, especially when it comes to med mal defense.

Another thing that this has forced me to do is be more rigorous with my H&P. Once you have been working in an ER for a while, there are quite a few diagnoses you can literally make from the doorway. Say, a kidney stone. I don't need to do a Review of Systems or even a physical exam for a kidney stone patient, and over the years I may have become a little lax on this point from time to time. But we have trained the scribes that "if it didn't happen, you cannot document it." So now, knowing that the scribe cannot document a complete ROS unless I actually did that ROS, I am more compulsive about making sure I hit all ten systems. (Even when it's not clinically relevant. Such is the Kafkaesque world we live in.) And I make sure to do a full exam where before I may have elided over a few systems. This is, of course, only for cases where the complexity of the case will justify a service level requiring the complete H&P. So the scribe effectively helps keep me honest and improves my compliance.

The productivity side is also a net positive. Once I learned to let go and trust the scribe to get all the charting with minimal oversight, this freed up my time enormously. I can go from room to room to room seeing new patients, with only a brief interlude to enter orders (which the scribes are not allowed to do in our hospital). I've always been able to see 2+ patients per hour with no problem, and with the scribes 3+ has been easy, when volumes permit. I think I could go even higher but I haven't had a really busy shift since the program began.

At this point I am, I think, not making money on the scribes. I think, in fact, that I am losing money. I have been told by experts that in the startup phase of a scribe program you should expect to lose money for the first year. This seems consistent with our experience. We have 8 docs on duty in our ER at peak times, and only a fixed number of patients. To the degree that I can see more patients, that's taking money from my partners' wallets, which puts an upper bound on my appetite, out of courtesy. Worse, if I have a scribe on a slow shift, it grates on me that I am paying for them to essentially do nothing. If I have a scribe, I feel pressure to be more productive than I otherwise would. Over time, I hope, we can contract physician staffing to the point that we will all realize increased productivity and revenue. This requires more than a one-third physician buy-in, which we have yet to achieve. We will see. For the moment, I can at least hope to break even on the program, though some of it may come at my partners' expense. Maybe that will induce them to get their own scribes as a defensive measure.
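The break-even arithmetic itself is simple enough to sketch. Every number below is a made-up placeholder, not our group's actual rates:

```python
# Back-of-the-envelope break-even for a scribe shift.
# All figures are hypothetical placeholders, not our actual economics.

scribe_cost_per_hour = 20.0     # hypothetical hourly rate paid to the scribe vendor
revenue_per_patient = 100.0     # hypothetical marginal collections per additional patient seen
shift_hours = 9

extra_patients_to_break_even = (scribe_cost_per_hour * shift_hours) / revenue_per_patient
print(f"Extra patients needed over a {shift_hours}-hour shift: {extra_patients_to_break_even:.1f}")
# With these placeholders, roughly two additional patients per shift cover the scribe's cost.
```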

The final, and perhaps most important, point for me is this: quality of life. If I have a scribe shift, it's a good shift. I save so much mental energy not having to chart. When I have a five-minute conversation with a patient, ordinarily, I am carefully committing about 30 key points to my short-term memory. I then have to dash out of the room, while it's still fresh in my mind, and enter it all into the computer. I never realized how much that was wearing me down till I didn't have to do it any more. My "external memory" is passively (from my point of view) capturing all these data points, and I can focus on my clinical impression from the get-go. I can forget the details and focus on the big picture. The saved "brain strain" takes a busy shift and makes it seem nearly effortless. When I have five free minutes, which is rare enough, I can check twitter or my email or text my wife rather than frantically trying to catch up on my charting. And when my shift is over, I am generally done with my charts and can walk out the door as soon as the last patient is dispo'd. Granted, I was generally one to leave at the end of my shift even without a scribe, but that took work. Now it's easy. I like my job better. I've never felt like I was one of those docs susceptible to burnout, but it is endemic within emergency medicine, and for someone who is riding that razor's edge, a scribe could be the difference in job satisfaction between having to leave the field and keeping their career going another decade.

I'll update this when I've more experience, but so far I am continuing my scribe utilization and would describe myself as very happy with the experiment. Now I just need to figure out how to get them to blog for me.